Relatively General .NET

High performance .NET

by Oren Eini

posted on: July 19, 2022

I’m going to go back a few steps and try to see where I should be looking next, to see where I should pay the most attention. So far in this series, I mostly focused on how we read and process the data. But I think that we ought to take a step or two back and see where we are in general. I ran the version with Pipelines and string usage in the profiler, trying to see where we are at. For example, in a previous post, the ConcurrentDictionary that I was using had a big performance cost. Is that still the case now? Here are the current hotspots in the codebase:

Looking at this in more detail, we have:

That is… interesting. Let’s look at the current code for HandleConnection:

```csharp
public async Task HandleConnection()
{
    while (true)
    {
        var result = await _netReader.ReadAsync();
        var (consumed, examined) = ParseNetworkData(result);
        _netReader.AdvanceTo(consumed, examined);
        await _netWriter.FlushAsync();
    }
}
```

Looking at the code and the profiler results, I wonder if I can do better here. Here is a small change that gives me a ~2% speed boost:

```csharp
public async Task HandleConnection()
{
    ValueTask<ReadResult> readTask = _netReader.ReadAsync();
    ValueTask<FlushResult> flushTask = ValueTask.FromResult(new FlushResult());
    while (true)
    {
        var result = await readTask;
        await flushTask;

        var (consumed, examined) = ParseNetworkData(result);
        _netReader.AdvanceTo(consumed, examined);

        readTask = _netReader.ReadAsync();
        flushTask = _netWriter.FlushAsync();
    }
}
```

The idea is that we parallelize reading from and writing to the network. It is a small boost, but every little bit helps, especially once we get into the cascading impact of repeated optimizations.

Looking into this, we have almost two billion calls to ReadAsync, so let’s see what is costly there:

That is… wow. Why is InternalTokenSource so expensive? I’m willing to bet that the issue is this one: it is taking a lock. In my use case, I know that there is a single thread running this, so it is worth seeing if I can skip it. Unfortunately, there isn’t an easy way to skip that check. Fortunately, I can copy the code from the framework and modify it locally, to see what the impact of that would be. So I did just that (initialized once in the constructor):

Of course, that is very much a brute force approach, and not one that I would recommend. Looking at the code, it looks like there is a reason for this usage (handling cancellation of operations), but I’m ignoring that for now. Let’s see what the profiler says now:

That means that we have about a 40% improvement in per-call costs. As I mentioned, that is not something that we can just do, but it is an interesting observation on the cost of reading using PipeReader.

Another aspect that is really interesting is the backend state we have, which is a ConcurrentDictionary.
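As a quick reminder of what that state looks like, here is a rough sketch (my own approximation of the shape, not the actual code from this series): the backend is essentially a single concurrent dictionary mapping keys to values, and the interesting decision is which ConcurrentDictionary implementation backs it.

```csharp
// Rough sketch (assumed shape, not the series' actual code): the server state is a
// single concurrent dictionary. A using alias makes it trivial to switch between the
// framework's locking implementation and the lock-free NonBlocking NuGet package.
using ConcurrentDictionary = NonBlocking.ConcurrentDictionary<string, string>;
// using ConcurrentDictionary = System.Collections.Concurrent.ConcurrentDictionary<string, string>;

public class ServerState
{
    private readonly ConcurrentDictionary _state = new();

    // GET: read the current value for a key, if any.
    public bool TryGet(string key, out string value) => _state.TryGetValue(key, out value);

    // SET: insert or overwrite the value for a key.
    public void Set(string key, string value) => _state[key] = value;
}
```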
If we look at its cost, we have:

You’ll note that I’m using the NonBlocking NuGet package, which provides a ConcurrentDictionary implementation that doesn’t use locking. If we use the default one from the framework, which does use locking, we’ll see:

You can see the cost difference better here:

Note that there is a significant cost difference between the two options (in favor of NonBlocking), but it doesn’t translate to much of a difference when we run a real-world benchmark.

So what is next? Looking at the profiler results, there isn’t really much that we can push forward. Most of the costs we have are in the network, not in the code we run. Well, that isn’t quite true, is it? The bulk of our code is in the ParseNetworkData call, which looks like this:

So the total time we spend actually executing the core functionality of our server is really negligible. A lot of time is actually spent parsing the commands from the buffer. Note that here, we don’t actually do any I/O; all operations work on buffers in memory. The Redis protocol isn’t that friendly for machine parsing, requiring us to do a lot of lookups to find the delimiters (hence the IndexOf() calls). I don’t believe that you can significantly improve on this. This means that we have to consider other options for better performance.

We are spending 35% of our runtime parsing the command stream from the client, and the code we execute is less than 1% of our runtime. I don’t think that there are significant optimization opportunities remaining in the stream parsing, so that leaves us with the I/O. Can we do better?

We are currently using async I/O and pipelines. Looking at the project that got me interested in this topic, it is using io_uring (via this API) on Linux for its needs. Its parsing is straightforward as well, see here; quite similar to the manner in which my code operates. So to get to the next stage in performance (as a reminder, we are now at 1.8 million req/sec), we’ll probably need to go to the ring-based approach as well. There is a NuGet package to support it, but that moves this task from something that I can spend a few hours on in an evening to a couple of days / a full week of effort. I don’t think that I’ll pursue this in the near future.

Production postmortem

by Oren Eini

posted on: July 18, 2022

RavenDB introduced a TCP compression feature in version 5.3. The idea is that all internal communication in the cluster (as well as subscriptions) will use the Zstd compression format to reduce the overall bandwidth utilization by RavenDB. We have always supported HTTP compression, and that closed the circle. The fact that we are using Zstd means that we have a higher compression ratio and less CPU usage, so everyone was happy. Except… sometimes, they weren’t.

In some cases, we noticed that there would be network failures at a far higher rate than previously experienced. RavenDB is robust to network errors, so that was handled, but it was still a concern. We figured out that the problem was rooted in the compression code. If we enabled compression between the systems, there would be a far higher rate of failures than otherwise. But only when running in secured mode; when the system runs without security, everything works. My first suspicion was that something in the network was monitoring it, but the whole point of secured mode is that no one can peek into the stream nor interfere with its contents. Given that this is a self-healing issue, it took some time to dedicate the right amount of attention to it, but we managed to figure it out.

This is a confluence of three different features that all play together to make this happen. With compression, we typically do something like this:

```csharp
public void Write(byte[] buffer)
{
    byte[] compressed = Compress(buffer);
    _inner.Write(compressed);
}
```

That is pretty much how all compression streams work. But we do have to consider the following issue: there may be no output. When can that happen? Let’s assume that I’m using the simplest compression algorithm (run length encoding). In other words, it will take a buffer such as: aaaaaacccccccbbbb and turn that into a6c7b4. Now, let’s consider what would be the output of such an algorithm if we pass it a buffer consisting of a single repeated value?

```csharp
public struct Compressor
{
    byte _lastByte;
    byte _len;

    public byte[] Write(byte[] buf)
    {
        List<byte> output = new();
        int index = 0;
        while (index < buf.Length)
        {
            if (buf[index++] == _lastByte && _len < byte.MaxValue)
            {
                _len++;
                continue;
            }
            output.Add(_lastByte);
            output.Add(_len);
            _lastByte = buf[index - 1];
            _len = 1;
        }
        return output.ToArray();
    }
}
```

It will only update its internal state; it will not output anything. That is fine, we need a call to Flush() to ensure that all the state is out. But it means that the compression call will return an empty buffer, which we then write to the inner stream. And that is fine, right? Since writing a zero length buffer is a no-op.

Except that it isn’t a no-op. There is the concept of empty SSL records, mostly, it seems, to handle the BEAST attack. So when you pass an empty buffer to the SslStream, it will emit an empty record to the network.
Which is good, except that you may have a scenario where you emit a lot of those empty records. And it turns out that OpenSSL has a limit on how many consecutive empty records it will accept (under the assumption that the connection must move forward and produce output, not just loop). So, in order to reproduce this bug, we need:

- Input that results in zero output from the compressor (fully repeating previous values, usually).
- That produces a zero length buffer as the output of the compression.
- Sending the empty SSL record over the stream.
- Repeating this 32 times.

When all three conditions are satisfied, we get an error on the receiving end and the connection is broken. That means that the next call will have a different compression state and likely won’t have a problem at the same location. In short, this is a fun exercise in seeing how three different design decisions, all of which are eminently reasonable, result in a very hard to trace bug. The good thing is that this is simplicity itself to solve: we just need to avoid writing a zero length buffer to the stream.
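Here is a minimal sketch of what such a fix might look like (my own illustration, following the Write() shape shown above, rather than RavenDB’s actual patch):

```csharp
public void Write(byte[] buffer)
{
    byte[] compressed = Compress(buffer);
    if (compressed.Length == 0)
        return; // nothing to send; skip the write so SslStream never emits an empty record
    _inner.Write(compressed);
}
```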

Using IAsyncEnumerable in a Razor component

by Gérald Barré

posted on: July 18, 2022

IAsyncEnumerable<T> is a new interface that is used to fetch asynchronous data. For instance, you can use it to fetch data from a paginated REST API. Indeed, you will need multiple async HTTP requests to get all the data, so it matches this interface well. Let's see how you can use a method that returns an IAsyncEnumerable<T> in a Razor component.
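As a taste of what that looks like, here is a minimal sketch (the endpoint, page size, and Item type are hypothetical) of a method that exposes a paginated REST API as an IAsyncEnumerable<T>:

```csharp
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Json;
using System.Runtime.CompilerServices;
using System.Threading;

public record Item(int Id, string Name);

public class ItemsClient
{
    private readonly HttpClient _client;

    public ItemsClient(HttpClient client) => _client = client;

    // Each page is a separate HTTP request; the consumer can stop enumerating at any time.
    public async IAsyncEnumerable<Item> GetItemsAsync(
        [EnumeratorCancellation] CancellationToken cancellationToken = default)
    {
        var page = 0;
        while (true)
        {
            var items = await _client.GetFromJsonAsync<Item[]>(
                $"/api/items?page={page}", cancellationToken);
            if (items is null || items.Length == 0)
                yield break; // no more pages

            foreach (var item in items)
                yield return item;

            page++;
        }
    }
}
```

In the component, you would then `await foreach` over GetItemsAsync, adding each item to a list and calling StateHasChanged so the UI updates as pages arrive.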

Recording

by Oren Eini

posted on: July 15, 2022

A few days ago I spoke for close to three hours at the .NET Zurich user group, discussing RavenDB, modeling data in a non-relational world, and diving very deep into the internals of database engines. You can watch the recording here. As usual, any and all feedback is welcome.

Upcoming webinar

by Oren Eini

posted on: July 14, 2022

Next week I’ll be presenting a major new feature of the upcoming RavenDB 5.4 release: built-in ETL for Kafka and RabbitMQ. Instead of your documents just sitting there in your database, you can involve them in your messaging transactions. You can register for the webinar here. Please register, I would love to hear your feedback on the topic.

Production postmortem

by Oren Eini

posted on: July 13, 2022

We got a call from a customer, a pretty serious one. RavenDB is used to compute billing charges for customers. The problem was that in one of their instances, the value for a particular customer was wrong. What was worse was that it was wrong on just one instance of the cluster, so the customer would see different values in different locations. We take such things very seriously, so we started an investigation.

Let me walk you through reproducing this issue. We have three collections (Users, Credits and Charges). The user is performing actions in the system, which issue charges. This is balanced by the Credits in the system for the user (payments they made). There is no 1:1 mapping between charges and credits, usually. Here is an example of the data:

And now, let’s look at the index in question:

```csharp
// maps
from u in docs.Users
select new
{
    u.Id,
    u.Name,
    Amount = 0
}

from c in docs.Charges
select new
{
    Id = c.User,
    Amount = -c.Amount,
    Name = default(string)
}

from c in docs.Credits
select new
{
    Id = c.User,
    c.Amount,
    Name = default(string)
}

// reduce
from r in results
group r by r.Id into g
let user = g.FirstOrDefault(x => x.Name != null)
select new
{
    user.Name,
    user.Id,
    Amount = g.Sum(x => x.Amount)
}
```

This is a multi-map-reduce index that aggregates data from all three collections. Now, let’s run a query:

This is… wrong. The charges & credits should be more or less aligned. What is going on?

RavenDB has a feature called the Map Reduce Visualizer to help work with such scenarios, so let’s see what it tells us, shall we? What do we see in this image? You can see that we have two results for the index. Look at Page #854 (at the top): we have one result with –67,343 and another with +67,329. The second result also does not have an Id property or a Name property. What is going on?

It is important to understand that the image we have here represents the physical layout of the data on disk. We run the maps of the documents, and then we run the reduce on each page individually and sum them up again. This approach allows us to handle even a vast amount of data with ease. Look at what we have in Page #540. We have two types of documents there: the users/ayende document and the charges documents. Indeed, at the top of Page #540 we can see the result of reducing all the results in the page. The data looks correct.

However… look at Page #865, what is going on there? It looks like we have most of the credits there. Most importantly, we don’t have the users/ayende document there. Let’s take a look at the reduce definition we have:

```csharp
from r in results
group r by r.Id into g
let user = g.FirstOrDefault(x => x.Name != null)
select new
{
    user.Name,
    user.Id,
    Amount = g.Sum(x => x.Amount)
}
```

What would happen when we execute it on the results in Page #865? Well, there is no entry with the Name property there. So there is no Name, but there is also no Id. But we project this out to the next stage.
When we reduce the data again among all the entries in Page #854 (the root one), we’ll group by the Id property, but the Id property from the different pages is different. So we get two separate results here.

The issue is that the reduce function isn’t recursive: it assumes that in all invocations, it will have a document with the Name property. That isn’t valid, since RavenDB is free to shuffle the deck in the reduce process. The index should be robust to reducing the data multiple times. Indeed, that is why we had different outputs on different nodes: we don’t guarantee that we’ll process results in the same order, only that the output will be identical, if the reduce function is correct. Here is the fixed version:

```diff
 from r in results
 group r by r.Id into g
 let user = g.FirstOrDefault(x => x.Name != null)
 select new
 {
     user.Name,
-    user.Id,
+    Id = g.Key,
     Amount = g.Sum(x => x.Amount)
 }
```

And the query is now showing the correct results. That is much better.
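To make the failure mode concrete, here is a small standalone illustration (my own sketch, not RavenDB’s execution engine) of running the buggy reduce per page and then again over the page results; the page without a Users entry loses its Id, so the final re-reduce produces two groups instead of one:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

record Entry(string Id, string Name, decimal Amount);

static class Demo
{
    // The problematic reduce: it only finds the Id/Name if this batch happens
    // to contain the mapped Users entry (the one with a non-null Name).
    static List<Entry> Reduce(IEnumerable<Entry> results) =>
        (from r in results
         group r by r.Id into g
         let user = g.FirstOrDefault(x => x.Name != null)
         select new Entry(user?.Id, user?.Name, g.Sum(x => x.Amount))).ToList();

    static void Main()
    {
        // Page #540: has the users/ayende entry plus the charges.
        var page540 = new[] { new Entry("users/ayende", "Oren", 0), new Entry("users/ayende", null, -67_343) };
        // Page #865: only credits, no Users entry at all.
        var page865 = new[] { new Entry("users/ayende", null, 67_329) };

        // Reduce each page, then re-reduce the per-page results, as the index does.
        var final = Reduce(Reduce(page540).Concat(Reduce(page865)));

        // Prints two rows: "users/ayende: -67343" and "<null>: 67329".
        foreach (var r in final)
            Console.WriteLine($"{r.Id ?? "<null>"}: {r.Amount}");
    }
}
```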

Production postmortem

by Oren Eini

posted on: July 12, 2022

A customer called us, complaining that RavenDB isn’t supporting internationalization. That was a big term to unpack, but it boiled down to a simple issue. They were using Hebrew text in their system, consuming us from a Node.js client, and they observed that sometimes, RavenDB would corrupt the data. They would get JSON similar to this:

{ "Status": "�", "Logged": true }

That… is not good. And also quite strange. I’m a native Hebrew speaker, so I have thrown a lot of such texts into RavenDB in the past. In fact, one of our employees built a library project for biblical texts, naturally all in Hebrew. Another employee maintained a set of Lucene analyzers for Hebrew. I think that I can safely say that RavenDB and Hebrew have been done. But the problem persisted. What was worse, it was not consistent. Every time that we tried to see what was going on, it worked. We added code inside of RavenDB to try to detect what was going on, and there was nothing there. Eventually we tried to look into the Node.js RavenDB client, because we had exhausted everything else. It looked okay, and in our tests, it… worked.

So we sat down and thought about what it could be. Let’s consider the actual scenario we have on hand:

- Hebrew characters in JSON are being corrupted.
- RavenDB uses UTF-8 encoding exclusively.
- That means that Hebrew characters are encoded as multi-byte sequences.

That line of thinking led me to consider that the problem is related to chunking. We read from the network in chunks, and if a chunk happens to fall on a character boundary, we mess it up, maybe? Once I started looking into this, the fix was obvious. Here we go!

This bug is a great example of how an issue can fail to show up in practice for a really long time. In order to hit it, you need the chunking to happen in just the wrong place, and if you are running locally (as we usually do when troubleshooting), the likelihood you’ll see this is far lower. Given that most JSON property names and values are in the ASCII set, you need a chunk of just the right size to see it. Once we knew about it, reproducing it was easy: just create a single string that is full of multi-byte characters (such as an emoji) and make it long enough that it must be chunked. The fix was already merged and released.
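To see the failure mode concretely, here is a small sketch (my own illustration, not the client’s actual code) showing what happens when each network chunk is decoded independently versus with a stateful Decoder that carries partial characters across chunk boundaries:

```csharp
using System;
using System.Text;

class Program
{
    static void Main()
    {
        var text = "שלום"; // 4 Hebrew characters, 2 bytes each in UTF-8
        var bytes = Encoding.UTF8.GetBytes(text);

        // Split mid-character, as network chunking can do.
        var chunk1 = bytes[..3];
        var chunk2 = bytes[3..];

        // Wrong: decoding each chunk on its own corrupts the split character.
        Console.WriteLine(Encoding.UTF8.GetString(chunk1) + Encoding.UTF8.GetString(chunk2));

        // Right: a single Decoder keeps the partial character between calls.
        var decoder = Encoding.UTF8.GetDecoder();
        var chars = new char[text.Length];
        var sb = new StringBuilder();
        sb.Append(chars, 0, decoder.GetChars(chunk1, 0, chunk1.Length, chars, 0));
        sb.Append(chars, 0, decoder.GetChars(chunk2, 0, chunk2.Length, chars, 0));
        Console.WriteLine(sb.ToString()); // שלום
    }
}
```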

Production postmortem

by Oren Eini

posted on: July 11, 2022

A customer opened a support call telling us that they had reached the scaling limits of RavenDB. Given that they had a pretty big machine specifically to handle the load they were expecting, they were (rightly) upset about that. A short back and forth made us realize that RavenDB started to fail shortly after they added a new customer to their system. And by fail I mean that it started throwing OutOfMemoryException in certain places. The system was not loaded and there weren’t any other indications of high load. The system had plenty of memory available, but critical functions inside RavenDB would fail because of out of memory errors. We looked at the actual error and found this log message:

```
Raven.Client.Exceptions.Database.DatabaseLoadFailureException: Failed to start database orders-21
At /data/global/ravenData/Databases/orders-21
 ---> System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
   at System.Threading.Thread.StartInternal(ThreadHandle t, Int32 stackSize, Int32 priority, Char* pThreadName)
   at System.Threading.Thread.StartCore()
   at Raven.Server.Utils.PoolOfThreads.LongRunning(Action`1 action, Object state, String name) in C:\Builds\RavenDB-5.3-Custom\53024\src\Raven.Server\Utils\PoolOfThreads.cs:line 91
   at Raven.Server.Documents.TransactionOperationsMerger.Start() in C:\Builds\RavenDB-5.3-Custom\53024\src\Raven.Server\Documents\TransactionOperationsMerger.cs:line 76
   at Raven.Server.Documents.DocumentDatabase.Initialize(InitializeOptions options, Nullable`1 wakeup) in C:\Builds\RavenDB-5.3-Custom\53024\src\Raven.Server\Documents\DocumentDatabase.cs:line 388
   at Raven.Server.Documents.DatabasesLandlord.CreateDocumentsStorage(StringSegment databaseName, RavenConfiguration config, Nullable`1 wakeup) in C:\Builds\RavenDB-5.3-Custom\53024\src\Raven.Server\Documents\DatabasesLandlord.cs:line 826
```

This is quite an interesting error. To start with, this is us failing to load a database, because we couldn’t spawn the relevant thread to handle transaction merging. That is bad, but why? It turns out that .NET will only consider a single failure scenario for a thread failing to start: if it fails, it must be because the system is out of memory. However, we are running on Linux, and there are other reasons why that can happen. In particular, there are various limits that you can set on your environment that restrict the number of threads you can create. There are global knobs that you should look at first, such as these:

- /proc/sys/kernel/threads-max
- /proc/sys/kernel/pid_max
- /proc/sys/vm/max_map_count

Any of those can serve as a limit. There are also ways to set those limits in a per-process manner. There is also a per-user setting, which is controlled via /etc/systemd/logind.conf: UserTasksMax.

The easiest way to figure out what is going on is to look at the kernel log at that time. Here is what we got in the log:

```
a-orders-srv kernel: cgroup: fork rejected by pids controller in /system.slice/ravendb.service
```

That made it obvious where the problem was: in the ravendb.service file, we didn’t have TasksMax set, which meant that it was set to 4915 (probably chosen automatically by the system based on some heuristic). When the number of databases and operations on the databases reached a particular size, we hit the limit and started failing. That is not a fun place to be in, but at least it is easy to fix. I created this post specifically so it will be easy to Google this in the future. I also created an issue to get a better error message in this scenario.
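For reference, the fix amounts to raising the limit in the unit file. Here is a hypothetical sketch of a systemd drop-in (the path and value are illustrative, not the customer’s actual configuration):

```ini
# /etc/systemd/system/ravendb.service.d/override.conf (hypothetical drop-in path)
[Service]
# Raise (or effectively remove) the per-service task limit that defaulted to 4915.
TasksMax=infinity
```

After editing, `systemctl daemon-reload` followed by restarting the service picks up the new limit.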

Replace characters in a string using Vectorization

by Gérald Barré

posted on: July 11, 2022

This post is part of the series 'SIMD'. Be sure to check out the rest of the blog posts of the series!

- Faster Guid comparisons using Vectors (SIMD) in .NET
- Finding the maximum value in an array using vectorization
- Replace characters in a string using Vectorization (this post)

I continue to look at code…