Relatively General .NET

.NET Aspire 9.1 is here with six great new dashboard features, and more!

by Maddy Montaquila

posted on: February 25, 2025

.NET Aspire 9.1 is here! From enhanced dashboard capabilities like Resource Relationships and Localization Overrides to improved Docker integration and flexible console logs, this release is packed with tools to streamline your development process.

How does the compiler infer the type of default?

by Gérald Barré

posted on: February 24, 2025

The default keyword is a powerful tool in C#. For reference types, it returns null, and for value types (structs), it returns a zeroed-out value. Interestingly, default(T) and new T() can yield different results for structs (check out Get the default value of a type at runtime for more info). Initi
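As a quick illustration of that difference (my own example, not taken from the article): with C# 10's parameterless struct constructors, default(T) skips the constructor while new T() runs it.

```csharp
using System;

struct Point
{
    public int X;

    // C# 10+ allows an explicit parameterless constructor on a struct.
    public Point() => X = 42;
}

class Demo
{
    static void Main()
    {
        Console.WriteLine(default(Point).X); // 0  - zeroed memory, constructor not called
        Console.WriteLine(new Point().X);    // 42 - constructor runs
        Console.WriteLine(default(string) is null); // True - reference types default to null
    }
}
```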

.NET MAUI Performance Features in .NET 9

by Jonathan, Simon

posted on: February 20, 2025

Optimize .NET MAUI application size and startup times with trimming and NativeAOT. Learn about `dotnet-trace` and `dotnet-gcdump` for measuring performance.

RavenDB 7.1

by Oren Eini

posted on: February 19, 2025

I have been delaying the discussion about the performance numbers for a reason. Once we did all the work that I described in the previous posts, we put it to an actual test and ran it against the benchmark suite. In particular, we were interested in the following scenario: high insert rate, with about 100 indexes active at the same time. Target: higher requests/second, lower latency.

Previously, after hitting a tipping point, we would settle at under 90 requests/second with latency spikes of over 700(!) ms. That was the trigger for much of this work, after all. The initial results were quite promising: we were showing massive improvement across the board, with over 300 requests/second and latency peaks of 350 ms.

On the one hand, that is a really amazing boost, over 300% improvement. On the other hand, just 300 requests/second - I hoped for much higher numbers. When we started looking into exactly what was going on, it became clear that I had seriously messed up.

Under load, RavenDB would issue fsync() at a rate of over 200/sec. That is… a lot, and it means that we are seriously limited in the amount of work we can do. That is weird, since we worked so hard on reducing the amount of work the disk has to do. Looking deeper, it turned out to be an interesting combination of issues.

Whenever Voron changes the active journal file, we register the new journal number in the header file, which requires an fsync() call. Because we are using shared journals, the writes from both the database and all the indexes go to the same file, filling it up very quickly. That meant we were creating new journal files at a rate of more than one per second - quite a rate, mostly resulting from the sheer number of writes that we funnel into a single file. The catch here is that on each journal file creation, we need to register each one of the environments that share the journal. In this case, we have over a hundred environments participating, and we need to update the header file for each environment. With the rate of churn that we have with the new shared journal mode, that alone increases the number of fsync() calls generated.

It gets more annoying when you realize that in order to actually share the journal, we need to create a hard link between the environments. On Linux, for example, we need to write something like the following code:

```c
// Create a hard link and make the new directory entry durable. Note that we
// fsync() the *parent directory* of the new link: syncing the file itself is
// not enough to persist the directory entry (see the man page quote below).
bool create_hard_link_durably(const char *src, const char *dest) {
    if (link(src, dest) == -1)
        return false;

    char dir_buf[PATH_MAX];
    strncpy(dir_buf, dest, sizeof(dir_buf) - 1);
    dir_buf[sizeof(dir_buf) - 1] = '\0';

    int dirfd = open(dirname(dir_buf), O_DIRECTORY);
    if (dirfd == -1)
        return false;

    int rc = fsync(dirfd);
    close(dirfd);
    return rc != -1;
}
```

You need to make 4 system calls to do this properly, and most crucially, one of them is fsync() on the destination directory. This is required because the man page states:

"Calling fsync() does not necessarily ensure that the entry in the directory containing the file has also reached disk. For that an explicit fsync() on a file descriptor for the directory is also needed."

Shared journals mode requires that we link the journal file when we record a transaction from that environment to the shared journal. In our benchmark scenario, that means that each second, we'll write from each environment to each journal. We need an fsync() for the directory of each environment per journal, and in total we get to over 200 fsync() calls per journal file, which we replace at a rate of more than one per second.

Doing nothing as fast as possible…

Even with this cost, we are still 3 times faster than before, which is great, but I think we can do better.
In order to be able to do that, we need to figure out a way to reduce the number of fsync() calls being generated.

The first task to handle is updating the file header whenever we create a new journal. We are already calling fsync() on the directory when we create the journal file, so we ensure that the file is properly persisted in the directory. There is no need to also record it in the header file. Instead, we can just use the directory listing to handle this scenario. That change alone saved us about 100 fsync() calls / second.

The second problem is with the hard links: we need to make sure that these are persisted, but calling fsync() for each one is cost-prohibitive. Luckily, we already have a transactional journal, and we can re-use that. As part of committing a set of transactions to the shared journal, we'll also record an entry in the journal with the associated linked paths. That means we can skip calling fsync() after creating the hard link, since if we run into a hard crash, we can recover the linked journals during journal recovery. That allows us to skip the other 100 fsync() calls / second.

Another action we can take to reduce costs is to increase the size of the journal files. Since we are writing entries from both the database and the indexes to the same file, we are going through them a lot faster now, so increasing the default maximum size allows us to amortize the new-file costs across more transactions.

The devil is in the details

The idea of delaying the fsync() of the parent directory of the linked journals until recovery is a really great one, because we usually don't recover. That means we delay the cost indefinitely, after all. Recovery is rare, and adding a few milliseconds to the recovery time is usually not meaningful. However… there is a problem when you look at this closely. The whole idea behind shared journals is that transactions from multiple storage environments are written to a single file, which is hard-linked to multiple directories. Once written, each storage environment deals with the journal files independently. That means that if the root environment is done with a journal file and deletes it before the journal file's hard link was properly persisted to disk, and there was a hard crash… on recovery, we won't have a journal to re-create the hard links from, and the branch environments will be in an invalid state.

That set of circumstances is unlikely, but it is something that we have to prevent nevertheless. The solution is to keep track of all the journal directories, and whenever we are about to delete a journal file, we sync all the associated journal directories. The key here is that when we do that, we keep track of the current journal written to each directory. Instead of having to run fsync() for each directory per journal file, we can amortize this cost. Because we delayed the actual syncing, we had time to create more journal files, so calling fsync() on the directory ensures that multiple files are properly persisted at once.

We still have to sync the directories, but at least it is not to the tune of hundreds of times per second.
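To make that amortization concrete, here is a minimal C# sketch (my illustration, not RavenDB's actual code) of the bookkeeping described above: directories that received new journal links are only synced lazily, right before a journal file gets deleted, so a single directory fsync() covers many journal files. The class name and the libc P/Invoke-based SyncDirectory helper are assumptions for the example (Linux-only, since .NET has no managed API for syncing a directory).

```csharp
using System.Collections.Concurrent;
using System.IO;
using System.Runtime.InteropServices;

// Sketch only: track directories that contain un-synced journal links and flush
// them in one pass before deleting a journal file, amortizing the fsync() cost.
public sealed class LazyDirectorySyncer
{
    private readonly ConcurrentDictionary<string, byte> _dirtyDirectories = new();

    // Called after creating a hard link for a journal inside 'directory'.
    public void MarkDirty(string directory) => _dirtyDirectories.TryAdd(directory, 0);

    // Called right before a journal file is deleted: one fsync() per dirty
    // directory now covers every link created in it since the last sync.
    public void SyncAllBeforeDelete()
    {
        foreach (var directory in _dirtyDirectories.Keys)
        {
            SyncDirectory(directory);
            _dirtyDirectories.TryRemove(directory, out _);
        }
    }

    private static void SyncDirectory(string path)
    {
        // Open the directory read-only and fsync() it so its entries
        // (the hard links we created) are durable on disk.
        int fd = open(path, O_RDONLY);
        if (fd == -1)
            throw new IOException($"open({path}) failed, errno = {Marshal.GetLastPInvokeError()}");
        try
        {
            if (fsync(fd) == -1)
                throw new IOException($"fsync({path}) failed, errno = {Marshal.GetLastPInvokeError()}");
        }
        finally
        {
            close(fd);
        }
    }

    private const int O_RDONLY = 0;

    [DllImport("libc", SetLastError = true)] private static extern int open(string path, int flags);
    [DllImport("libc", SetLastError = true)] private static extern int fsync(int fd);
    [DllImport("libc", SetLastError = true)] private static extern int close(int fd);
}
```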
Performance results

After making all of those changes, we ran the benchmark again. We are now looking at over 500 req/sec, with latency peaks that hover around 100 ms under load. As a reminder, that is almost twice as much as the improved version (and a huge improvement in latency). If we compare this to the initial result, we increased the number of requests per second by over 500%.

But I have some more ideas about that; I'll discuss them in the next post…

RavenDB 7.1

by Oren Eini

posted on: February 17, 2025

I wrote before about a surprising benchmark that we ran to discover the limitations of modern I/O systems. Modern disks such as NVMe have impressive capacity and amazing performance for everyday usage. When it comes to the sort of activities that a database engine is interested in, the situation is quite different.

At the end of the day, a transactional database cares a lot about actually persisting the data to disk safely. The usual metrics we have for disk benchmarks are all about buffered writes, which is why we ran our own benchmark. The results were really interesting (see the post): basically, it feels like there is a bottleneck writing to the disk, and the bottleneck is the number of writes, not how big they are.

If you are issuing a lot of small writes, your writes will contend on that bottleneck and you'll see slow throughput. The easiest way to think about it is to consider a bus carrying 50 people at once versus 50 cars with one person each. The same road can transport a lot more people with the bus than with individual cars, even though the bus is heavier and (theoretically, at least) slower.

Databases & Storages

In this post, I'm using the term Storage to refer to a particular folder on disk, which is its own self-contained storage with its own ACID transactions. A RavenDB database is composed of many such Storages that cooperate behind the scenes.

The I/O behavior we observed is very interesting for RavenDB. A single RavenDB database is actually made up of a central Storage for the data, and separate Storages for each of the indexes you have. That allows us to split work between different cores, parallelize work, and most importantly, benefit from batching changes to the indexes. The downside is that a single transaction doesn't cause a single write to the disk but multiple writes. Our test case is the absolute worst-case scenario for the disk: we are making a lot of individual writes to a lot of documents, and there are about 100 indexes active on the database in question. In other words, at any given point in time, there are many (concurrent) outstanding writes to the disk. We don't actually care about most of those writers, mind you. The only writer that matters (and is on the critical path) is the database one. All the others are meant to complete in an asynchronous manner and, under load, will actually perform better if they stall (since they can benefit from batching).

The problem is that in this situation, the writes that the user is actually waiting for are basically stuck in traffic behind all the lower-priority writes. That is quite annoying, I have to say.

The role of the Journal in Voron

The Write Ahead Journal in Voron is responsible for ensuring that your transactions are durable. I wrote about it extensively in the past (in fact, I would recommend the whole series detailing the internals of Voron). In short, whenever a transaction is committed, Voron writes it to the journal file using unbuffered I/O. Remember that the database and each index run their own separate storages, each of which can commit completely independently of the others. Under load, all of them may issue unbuffered writes at the same time, leading to congestion and our current problem.

During normal operations, Voron writes to the journal, waits to flush the data to disk, and then deletes the journal. Journals are never actually read except during startup, so all the I/O here is just to guarantee that, on recovery, we won't lose any data.
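As a rough .NET illustration of that kind of durable journal append (my sketch, not Voron's implementation): FileOptions.WriteThrough asks the OS to push the write through its cache rather than buffering it. The JournalWriter name is invented for the example.

```csharp
using System;
using System.IO;
using Microsoft.Win32.SafeHandles;

// Sketch only: append a committed transaction's entry to a journal file,
// returning only after the OS has written the bytes through its cache.
public sealed class JournalWriter : IDisposable
{
    private readonly SafeFileHandle _handle;
    private long _offset;

    public JournalWriter(string path)
    {
        _handle = File.OpenHandle(
            path,
            FileMode.OpenOrCreate,
            FileAccess.Write,
            FileShare.Read,
            FileOptions.WriteThrough); // write-through: don't sit in the OS cache
        _offset = RandomAccess.GetLength(_handle);
    }

    // Each storage (database or index) doing this independently is exactly the
    // "many small unbuffered writes" pattern the post is trying to avoid.
    public void Commit(ReadOnlySpan<byte> journalEntry)
    {
        RandomAccess.Write(_handle, journalEntry, _offset);
        _offset += journalEntry.Length;
    }

    public void Dispose() => _handle.Dispose();
}
```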
The fact that we have many independent writers that aren't coordinating with one another is an absolute killer for our performance in this scenario. We need to find a way to solve this, but how?

One option is to have both the indexes and the actual documents in the same storage. That would mean that we have a single journal and a single writer, which is great. However, Voron has a single-writer model, and for very good reasons. We want to be able to process indexes in parallel and in batches, so that was a non-starter.

The second option was to not write to the journal in a durable manner for indexes. That sounds… insane for a transactional database, right? But there is logic to this madness. RavenDB doesn't actually need its indexes to be transactional; as long as they are consistent, we are fine with "losing" transactions (for indexes only, mind!). The reasoning is that we can re-index from the documents themselves (which are written in a durable manner). We actively considered that option for a while, but it turned out that without a durable journal, recovery becomes a lot more difficult. We can't rely on the data on disk to be consistent, and we don't have a known good source to recover from. Re-indexing a lot of data can also be pretty expensive. In short, that was an attractive option from a distance, but the more we looked into it, the more complex it turned out to be.

The final option was to merge the journals. Instead of each index writing to its own journal, we could write to a single shared journal at the database level. The problem was that if we did individual writes, we would be right back in the same spot, now on a single file rather than many. But our tests show that this doesn't actually matter. Luckily, we are well-practiced in the notion of transaction merging, so this is exactly what we did.

Each storage inside a database is completely independent and can carry on without needing to synchronize with any other. We defined the following model for the database:

- Root Storage: Documents
- Branch: Index - Users/Search
- Branch: Index - Users/LastSuccessfulLogin
- Branch: Index - Users/Activity

This root & branch model is a simple hierarchy, with the documents storage serving as the root and the indexes as branches. Whenever an index completes a transaction, it prepares the journal entry to be written, but instead of writing the entry to its own journal, it passes the entry to the root. The root (the actual database, I remind you) can then aggregate the journal entries from its own transaction as well as all the indexes and write them to the disk in a single system call. Going back to the bus analogy, instead of each index going to the disk in its own car, they all get on the bus together.

We now write all the entries from multiple storages into the same journal, which means that we have to distinguish between the different entries. I wrote a bit about the challenges involved there, but we got it done.

The end result is that we now have journal write merging for all the indexes of a particular database, which for large databases can reduce the total number of disk writes significantly. Remember our findings from earlier: bigger writes are just fine, and the disk can process them at GB/s rates. It is the number of individual writes that matters most here.
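Here is a rough C# sketch of that root & branch flow (a simplification for illustration, not RavenDB's code): branches hand their prepared journal entries to the root, and a single root writer flushes the whole batch with one large write instead of one small write per storage. The JournalEntry and SharedJournalRoot names are invented.

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.IO;

// Invented names for illustration; this is not the Voron API.
public sealed record JournalEntry(string SourceStorage, byte[] Payload);

public sealed class SharedJournalRoot
{
    private readonly ConcurrentQueue<JournalEntry> _pending = new();
    private readonly FileStream _journal;

    public SharedJournalRoot(string journalPath) =>
        _journal = new FileStream(journalPath, FileMode.Append, FileAccess.Write,
                                  FileShare.Read, bufferSize: 1, FileOptions.WriteThrough);

    // Called by a branch (an index) when it commits: the entry is prepared locally,
    // but the actual disk write is delegated to the root.
    public void Submit(JournalEntry entry) => _pending.Enqueue(entry);

    // Called by the root (the database): its own entry plus everything the
    // branches queued goes to disk as a single batch.
    public void CommitAndFlush(JournalEntry rootEntry)
    {
        var batch = new List<JournalEntry> { rootEntry };
        while (_pending.TryDequeue(out var entry))
            batch.Add(entry);

        // Simplest form of merging: one large sequential write instead of one
        // small write per storage. A real journal would frame each entry with
        // its source and length so recovery can tell the storages apart.
        var merged = new MemoryStream();
        foreach (var item in batch)
            merged.Write(item.Payload);

        _journal.Write(merged.GetBuffer(), 0, (int)merged.Length);
        _journal.Flush(flushToDisk: true);
    }
}
```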
Writing is not even half the job: recovery (read) in a shared world

The really tough challenge here wasn't how to deal with the write side of this feature. Journals are never read during normal operations. Usually we only ever write to them, and they keep a persistent record of transactions until we flush all changes to the data file, at which point we can delete them. It is only when the Storage starts that we need to read from the journals, to recover all the transactions that were committed to them. As you can imagine, even though this is a rare occurrence, it is one of critical importance for a database.

This change means that we direct all the transactions from both the indexes and the database into a single journal file. Usually, each Storage environment has its own Journals/ directory that stores its journal files. On startup, it reads through all those files and recovers the data file. How does that work in the shared journal model? For a root storage (the database), nothing much changes. We need to take into account that the journal files may contain transactions from a different storage, such as an index, but that is easy enough to filter.

What about branch storage (an index) recovery? Well, it can probably just read the Journals/ directory of the root (the database), no? Well, maybe. Here are some problems with this approach. How do we encode the relationship between root & branch? Do we store a relative path, or an absolute path? We could of course just always use the root's Journals/ directory, but that is problematic: it means that we could only open the branch storage if we already have the root storage open. Accepting this limitation means adding a new wrinkle to the system that doesn't exist today.

It is highly desirable (for many reasons) to be able to work with just a single environment. For example, for troubleshooting a particular index, we may want to load it in isolation from its database. Losing that ability by tying a branch storage to its root is not something we want.

The current state, by the way, in which each storage is a self-contained folder, is pretty good for us, because we can play certain tricks. For example, we can stop a database, rename an index folder, and start it up again. The index would effectively be re-created. Then we can stop the database and rename the folders again, going back to the previous state. That is not possible if we tie all their lifetimes together with the same journal file.

Additional complexity is not welcome in this project

Building a database is complicated enough; adding additional levels of complexity is a Bad Idea. Adding additional complexity to the recovery process (which by its very nature is both critical and rarely executed) is a Really Bad Idea.

I started laying out the details of what this feature entails:

- A database cannot delete its journal files until all the indexes have synced their state.
- What happens if an index is disabled by the user?
- What happens if an index is in an error state?
- How do you manage the state across snapshot & restore?
- There is a well-known optimization for databases in which we split the data file and the journal files into separate volumes. How is that going to work in this model?
- Putting the database and indexes on separate volumes altogether is also a well-known optimization technique. Is that still possible?
- How do we migrate from legacy databases to the new shared journal model?
I started to provide answers to all of these questions… I'll spare you the flow chart that was involved; it looked like something between abstract art and the remains of a high school party.

The problem is that at a very deep level, a Voron Storage is meant to be its own independent entity, and we should be able to deal with it as such. For example, RavenDB has a feature called Side-by-Side indexes, which allows us to have two versions of an index at the same time. When both the old and new versions are fully caught up, we can shut down both indexes, delete the old one, and rename the new index with the old one's path. A single shared journal would have to deal with this scenario explicitly, as well as many other scenarios that make similar assumptions about the way the system works.

Not my monkeys, not my circus, not my problem

I got a great idea about how to dramatically simplify the task when I realized that a core tenet of Voron and RavenDB in general is that we should not go against the grain and add complexity to our lives, in the same way that Voron uses memory-mapped files and carefully designed its data access patterns to take advantage of the kernel's heuristics. The idea is simple: instead of having a single shared journal that is managed by the database (the root storage) and that the indexes (the branch storages) need to access, we'll have a single shared journal with many references.

We'll take advantage of an interesting file-system feature: hard links. A hard link is just a way to associate the same file data with multiple file names, which can reside in separate directories. Hard links are limited to files on the same volume, and the easiest way to think about them is as pointers to the file data. Usually we make no distinction between the file name and the file itself, but at the file system level we can attach a single file to multiple names. The file system manages the reference count for the file, and when the last reference to the file is removed, the file system deletes the file.

We'll keep the same Journals/ directory structure as before, where each Storage has its own directory. But instead of having separate journals for each index and the database, we'll have hard links between them. You can see how it looks here; the numbers next to the file names are the inode numbers, and you can see that several files share the same inode number (indicating multiple links to the same underlying file):

```
.
└── [ 40956]  my.shop.db
    ├── [ 40655]  Indexes
    │   ├── [ 40968]  Users_ByName
    │   │   └── [ 40970]  Journals
    │   │       ├── [ 80120]  0002.journal
    │   │       └── [ 82222]  0003.journal
    │   └── [ 40921]  Users_Search
    │       └── [ 40612]  Journals
    │           ├── [ 81111]  0001.journal
    │           └── [ 82222]  0002.journal
    └── [ 40812]  Journals
        ├── [ 81111]  0014.journal
        └── [ 82222]  0015.journal
```

With this idea, here is how a RavenDB database manages writing to the journal. When the database needs to commit a transaction, it writes to its journal, located in its Journals/ directory. If an index (a branch storage) needs to commit a transaction, it does not write to its own journal but passes the transaction to the database (the root storage), where it will be merged with any other writes (from the database or other indexes), reducing the number of write operations.
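Since .NET has no built-in API for creating hard links, a rough sketch of the hard-linking described above might P/Invoke libc's link() on Linux (Windows would use CreateHardLink from kernel32). This is only an illustration of the mechanism, not RavenDB's code; the class, method, and existence check are invented for the example.

```csharp
using System.IO;
using System.Runtime.InteropServices;

// Sketch only: give a branch storage its own directory entry (hard link) for the
// root's journal file. Both names then point at the same inode, so each side can
// delete its own name independently and the data survives until the last link goes.
public static class SharedJournalLinks
{
    [DllImport("libc", SetLastError = true)]
    private static extern int link(string existingPath, string newLinkPath);

    public static void LinkJournalIntoBranch(string rootJournalPath, string branchJournalPath)
    {
        if (File.Exists(branchJournalPath))
            return; // the branch already has a name for a journal at this path

        if (link(rootJournalPath, branchJournalPath) == -1)
            throw new IOException(
                $"link({rootJournalPath}, {branchJournalPath}) failed, errno = {Marshal.GetLastPInvokeError()}");
    }
}
```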
The key difference is that when we write to the journal, we check whether that journal file is already associated with this storage environment. Take a look at the Journals/0015.journal file: if the Users_ByName index needs to write, it checks whether the journal file is already associated with it or not. In this case, you can see that Journals/0015.journal points to the same file (inode) as Indexes/Users_ByName/Journals/0003.journal.

What this means is that the shared journals mode only changes how transactions are committed; no changes were required on the read / recovery side. That is a major plus for this sort of critical feature, since it means that we can rely on code that we have proven to work over 15 years.

The single writer mode makes it work

A key fact to remember is that there is always only a single writer to the journal file. That means there is no need to worry about contention or multiple concurrent writes competing for access. There is one writer and many readers (during recovery), and each reader can treat the file as effectively immutable during recovery.

The idea behind relying on hard links is that we let the operating system manage the file references. If an index has flushed its data and is done with a particular journal file, it can delete it without any coordination with the rest of the indexes or the database. That lack of coordination is a huge simplification in the process.

In the same manner, features such as copying & moving folders around still work. Moving a folder will not break the hard links, but copying the folder will. In that case, we don't actually care: we'll still read from the journal files as normal, and when we need to commit a new transaction after a copy, we'll create a new linked file. That means that features such as snapshots just work (although restoring from a snapshot will create multiple copies of the "same" journal file). We don't really care about that, since in short order the journals will move past that point and share the same set of files once again.

In the same way, that is how we'll migrate from the old system to the new one. It is just a set of files on disk, and we can create new hard links as needed.

Advanced scenarios behavior

I mentioned earlier that a well-known database optimization technique is to split the data file and the journal files onto separate volumes (which provides higher overall I/O throughput). If the database and the indexes reside on different volumes, we cannot use hard links, and the entire premise of this feature fails. In practice, at least for our users' scenarios, that tends to be the exception rather than the rule, and shared journals are a relevant optimization for the most common deployment model.

Additional optimizations - vectored I/O

The idea behind shared journals is that we can funnel the writes from multiple environments through a single pipe, presenting the disk with fewer (and larger) writes. The fact that we need to write multiple buffers at the same time also means that we can take advantage of even more optimizations. On Windows, we can use WriteFileGather to submit a single system call that merges the writes from many indexes and the database. On Linux, we use pwritev for the same purpose. The end result is additional optimizations beyond just the merged writes.
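On the managed side, .NET exposes a comparable gather-write capability: RandomAccess.Write accepts a list of buffers and writes them at a single file offset, which the runtime can service with a vectored write on the underlying platform. A minimal sketch (my example, not RavenDB's code; the journal name and entries are invented):

```csharp
using System;
using System.IO;
using System.Text;
using Microsoft.Win32.SafeHandles;

// Sketch only: write several prepared journal entries with a single gather write.
// The three buffers stand in for entries coming from the database and two indexes.
using SafeFileHandle journal = File.OpenHandle(
    "0015.journal", FileMode.OpenOrCreate, FileAccess.Write,
    FileShare.Read, FileOptions.WriteThrough);

ReadOnlyMemory<byte>[] entries =
{
    Encoding.UTF8.GetBytes("tx from Documents\n"),
    Encoding.UTF8.GetBytes("tx from Users_ByName\n"),
    Encoding.UTF8.GetBytes("tx from Users_Search\n"),
};

// One call for all three entries, appended at the current end of the journal.
long offset = RandomAccess.GetLength(journal);
RandomAccess.Write(journal, entries, offset);
```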
I have been working on this set of features for a very long time, and all of them are designed to be completely invisible to the user. They either give us a lot more flexibility internally, or they are meant to just provide better performance without requiring any action from the user.

I'm really looking forward to showing the performance results. We'll get to that in the next post…

Detect missing migrations in Entity Framework Core

by Gérald Barré

posted on: February 17, 2025

Entity Framework Core lets you update the database schema using migrations. Migrations are created manually by running a CLI command, and it's easy to forget to create a new one when you change the model. To ensure the migrations are up-to-date, you can write a test that compares the current
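One possible shape for such a test (a sketch of the general idea, not necessarily the approach the article takes): with EF Core 9 you can ask the DbContext whether the model has changes that no migration covers via Database.HasPendingModelChanges(). The AppDbContext type and the connection string below are placeholders.

```csharp
using Microsoft.EntityFrameworkCore;
using Xunit;

public class MigrationTests
{
    [Fact]
    public void Model_changes_are_covered_by_a_migration()
    {
        // AppDbContext is a placeholder for your own DbContext type.
        var options = new DbContextOptionsBuilder<AppDbContext>()
            .UseSqlServer("Server=(localdb)\\MSSQLLocalDB;Database=MigrationsCheck;Trusted_Connection=True")
            .Options;

        using var context = new AppDbContext(options);

        // Compares the current model against the model snapshot produced by the last migration.
        Assert.False(context.Database.HasPendingModelChanges(),
            "The model has changes that are not covered by any migration. " +
            "Run 'dotnet ef migrations add <Name>' to create one.");
    }
}
```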