Relatively General .NET

How to detect Globalization-Invariant mode in .NET

by Gérald Barré

posted on: May 22, 2023

Some libraries don't work when the application is running in Globalization-Invariant mode. This mode disables all globalization features and forces the use of the invariant culture. It is useful for applications that don't need globalization features and want to reduce the application size.
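As a rough sketch of one way to detect the mode at runtime (hedged; this may not be the article's exact approach): in invariant mode every culture falls back to the invariant culture's data, and when PredefinedCulturesOnly is enabled (the default alongside invariant mode on .NET 6+), constructing a specific culture throws outright.

```csharp
using System.Globalization;

static class InvariantModeDetector
{
    // A minimal sketch, not necessarily the article's exact approach.
    // In invariant mode, a "specific" culture either throws
    // CultureNotFoundException (PredefinedCulturesOnly, .NET 6+ default)
    // or silently carries the invariant culture's data, whose EnglishName
    // reports "Invariant Language (Invariant Country)".
    public static bool IsGlobalizationInvariant()
    {
        try
        {
            return new CultureInfo("en-US").EnglishName.Contains("Invariant");
        }
        catch (CultureNotFoundException)
        {
            return true;
        }
    }
}
```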

When and How to Use Blazor Components

by Ardalis

posted on: May 19, 2023

Blazor is a powerful framework for building web applications using C# instead of JavaScript. One of…

Concurrent Hosted Service Start and Stop in .NET 8

by Steve Gordon

posted on: May 17, 2023

In this post, I will describe a new feature of the Microsoft.Extensions.Hosting library coming in .NET 8 (available since preview 4) affecting hosted services. Let’s first begin with a brief recap of hosted services. The hosting library for .NET, used in both the ASP.NET Core project template and the Worker Service template, provides the capability […]
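The behavior is opt-in. A minimal sketch of enabling it via the two new HostOptions properties the post describes (hosted services otherwise still start and stop sequentially):

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

var builder = Host.CreateApplicationBuilder(args);

// Opt in to concurrent start/stop of hosted services (.NET 8, preview 4+).
builder.Services.Configure<HostOptions>(options =>
{
    options.ServicesStartConcurrently = true;
    options.ServicesStopConcurrently = true;
});

var host = builder.Build();
await host.RunAsync();
```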

Generate large files using PowerShell

by Gérald Barré

posted on: May 15, 2023

If you need to create a large file with random data for testing purposes, you can use PowerShell or PowerShell Core to quickly generate it:

```powershell
$path = Join-Path $pwd "test.txt"
$size = 1GB
$content = New-Object byte[] $size
(New-Object System.Random).NextBytes($content)
# Set-Content is
```

Migrating comments from Disqus to giscus

by Andrew Lock

posted on: May 09, 2023

In this post I describe a .NET script/tool I created to migrate the comments on my blog posts from my legacy Disqus account to GitHub discussions…

How to Control Visual Studio from an external application

by Gérald Barré

posted on: May 08, 2023

There are multiple use cases where you need to get information from running instances of Visual Studio. For instance, if you create a git client, you may suggest the repositories that correspond to the opened solution, or you may want to know whether a file has unsaved changes in the editor and show a warning. May…
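The post walks through the details; as a rough sketch of the general idea (not necessarily the article's full approach), on .NET Framework you can grab a running instance's DTE automation object, which Visual Studio registers in the COM Running Object Table:

```csharp
using System;
using System.Runtime.InteropServices;

class VisualStudioProbe
{
    static void Main()
    {
        // "VisualStudio.DTE" is the ROT moniker prefix for running VS instances.
        // Note: Marshal.GetActiveObject exists on .NET Framework only; on modern
        // .NET you have to enumerate the Running Object Table via COM interop.
        dynamic dte = Marshal.GetActiveObject("VisualStudio.DTE");
        Console.WriteLine($"Opened solution: {dte.Solution.FullName}");
    }
}
```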

Bug chasing, narrowing down the scope

by Oren Eini

posted on: May 05, 2023

I just completed a major refactoring of a piece of code inside RavenDB that is responsible for how we manage sorted queries. The first two tiers of tests all passed, which is great. Now it was time to test how this change performed. I threw 50M records into RavenDB and indexed them. I did… not like the numbers I got back. That makes sense: since I was heavily refactoring to get a particular structure, I could think of a few ways to improve performance, but I like doing this based on profiler output.

When running the same scenario under the profiler, the process crashed. That is… quite annoying, as you can imagine. In fact, I discovered a really startling issue. If I index the data and query on it, I get the results I expect. If I restart the process and run the same query, I get an ExecutionEngineException. Trying to debug those is a PITA. In this case, I'm 100% at fault; we are doing a lot of unsafe things to get better performance, and it appears that I messed up something along the way. But my only reproduction is a 50M records dataset. To give some context, this means 51 GB of documents to be indexed and 18 GB of index data. Indexing this in release mode takes about 20 minutes; in debug mode, it takes a lot longer. Trying to find an error there, especially one that only happens after you restart the process, is going to be a challenging task.

But this isn't my first rodeo. Part of good system design is knowing how to address just these sorts of issues. The indexing process inside RavenDB is single-threaded per index. That means we can rule out a huge chunk of issues around race conditions. It also means that we can play certain tricks. Allow me to present you with the nicest tool for debugging that you can imagine: repeatable traces. Here is what this looks like in terms of code (IndexOperationsDumper.cs):

```csharp
private readonly struct IndexOperationsDumper : IDisposable
{
#if GENERATE_INDEX_ACTION_TRACE
    private readonly FileStream _fs;
    private readonly BinaryWriter _bw;

    public IndexOperationsDumper(IndexFieldsMapping fieldsMapping)
    {
        _fs = File.OpenWrite("index.bin-log");
        _fs.Position = _fs.Length;
        _bw = new BinaryWriter(_fs);
        if (_fs.Length == 0)
        {
            _bw.Write7BitEncodedInt(fieldsMapping.Count);
            for (int i = 0; i < fieldsMapping.Count; i++)
            {
                IndexFieldBinding indexFieldBinding = fieldsMapping.GetByFieldId(i);
                _bw.Write(indexFieldBinding.FieldName.ToString());
            }
        }
    }

    public void Index(string id, Span<byte> data)
    {
        _bw.Write(id);
        _bw.Write7BitEncodedInt(data.Length);
        _bw.Write(data);
    }

    public void Commit()
    {
        _bw.Write("!Commit!");
    }

    public void Dispose()
    {
        _bw.Dispose();
    }
#else
    public IndexOperationsDumper(IndexFieldsMapping fieldsMapping) { }
    public void Index(string id, Span<byte> data) { }
    public void Commit() { }
    public void Dispose() { }
#endif
}
```

As you can see, this is a development-only feature, so it is a really bare-bones one. What it does is capture the indexing and commit operations on the system and write them to a file.
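To make the intended use concrete, here is a hypothetical sketch of how such a dumper gets threaded through the indexing path; the post doesn't show the actual call sites, and `batch` and `indexWriter` are stand-ins:

```csharp
// Hypothetical wiring - the real call sites inside RavenDB are not shown
// in the post. Every operation sent to the real index writer is mirrored
// to the bin-log so the whole run can be replayed later.
using var dumper = new IndexOperationsDumper(fieldsMapping);

foreach (var (id, data) in batch)   // 'batch' is a stand-in
{
    dumper.Index(id, data);         // record the entry in the trace
    indexWriter.Index(id, data);    // the actual indexing work
}

dumper.Commit();                    // record the transaction boundary
indexWriter.Commit();
```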
I have another piece of similarly trivial code that reads and applies those operations, shown below. Don't bother digging into it; the code itself isn't really that interesting. What is important is that I have captured the behavior of the system and can now replay it at will (ShouldNotCorrupt.cs):

```csharp
[Fact]
public void ShouldNotCorrupt()
{
    RequireFileBasedPager();
    using var stream = File.OpenRead(@"/path/to/index.bin-log");
    using var br = new BinaryReader(stream);
    using var bsc = new ByteStringContext(SharedMultipleUseFlag.None);
    using var builder = IndexFieldsMappingBuilder.CreateForWriter(false);
    var fieldsCount = br.Read7BitEncodedInt();
    for (int i = 0; i < fieldsCount; i++)
    {
        var name = br.ReadString();
        builder.AddBinding(i, name);
    }
    var fields = builder.Build();
    int txns = 0;
    int items = 0;
    var wtx = Env.WriteTransaction();
    try
    {
        var iw = new IndexWriter(wtx, fields);
        while (true)
        {
            string id;
            try
            {
                id = br.ReadString();
            }
            catch (EndOfStreamException)
            {
                iw.Commit();
                iw.Dispose();
                wtx.Commit();
                wtx.Dispose();
                break;
            }
            if (id == "!Commit!")
            {
                FlushIndexAndRenewWriteTransaction();
                continue;
            }
            int len = br.Read7BitEncodedInt();
            var buffer = br.ReadBytes(len);
            iw.Index(id, buffer);
            items++;
        }
        using (var rtx = Env.ReadTransaction())
            QueryAll(rtx);

        void FlushIndexAndRenewWriteTransaction()
        {
            iw.Commit();
            iw.Dispose();
            wtx.Commit();
            wtx.Dispose();
            StopDatabase();
            StartDatabase();
            using (var rtx = Env.ReadTransaction())
                QueryAll(rtx);
            txns++;
            Console.WriteLine(txns + " With " + items);
            items = 0;
            wtx = Env.WriteTransaction();
            iw = new IndexWriter(wtx, fields);
        }
    }
    finally
    {
        wtx.Dispose();
    }
    StopDatabase();
    StartDatabase();
    using (var rtx = Env.ReadTransaction())
        QueryAll(rtx);

    void QueryAll(Transaction rtx)
    {
        var matches = new long[1024];
        for (int i = 0; i < fieldsCount; i++)
        {
            var searcher = new IndexSearcher(rtx, fields);
            var field = FieldMetadata.Build(fields.GetByFieldId(i).FieldName, default, i, default, default);
            var sort = searcher.OrderBy(searcher.AllEntries(), new OrderMetadata(field, true, MatchCompareFieldType.Sequence));
            while (sort.Fill(matches) != 0)
            {
            }
        }
    }
}
```

The code itself isn't much, but it does the job. More importantly, note the calls to StopDatabase() and StartDatabase(): I was able to reproduce the crash using this code. That was a massive win, since it dropped my search area from 50M documents to merely 1.2 million.

The key aspect is that I now have a way to play around with things. In particular, instead of using the commit points in the trace, I can force a commit (and a database stop/start) every 10,000 items by calling FlushIndexAndRenewWriteTransaction. With that, I can reproduce the problem far faster. Here is the output when I run this in release mode:

```
1 With 0
2 With 10000
3 With 10000
4 With 10000
5 With 10000
6 With 10000
7 With 10000
8 With 10000
9 With 10000
10 With 10000
11 With 10000
Fatal error. Internal CLR error. (0x80131506)
```

So now I've dropped the search area to 120,000 items, which is pretty awesome. Even more important, when I run this in debug mode, I get this:

```
1 With 0
2 With 10000
Process terminated. Assertion failed.
   at Voron.Data.Containers.Container.Get(Low...
```

So now I have a repro in 30,000 items, and what is even better, a debug assertion fired, so I have a really good lead into what is going on.
The key challenge in this bug is that it is probably triggered as a result of a commit followed by the indexing of the next batch. There is a bunch of work that we do around batch optimizations that likely causes this sort of behavior. By being able to capture the input to the process and play with the batch size, we were able to reduce the amount of work required to generate a reproduction from 50M records to 30,000, and we have a lead into what is going on. With that, I can now start applying more techniques to narrow things down.

By far the most important aspect, as far as I'm concerned, is the feedback cycle. I can now hit F5 to run the code and encounter the problem in a few seconds. It looks like we hit a debug assertion because we keep a reference to an item that was already freed. That is really interesting, and now I can find out which item and then figure out why this is the case. At each point, I can simply go one step back in the investigation and reproduce the state; all I have to do is hit F5 and wait a bit. This means that I can be far more liberal in how I figure out this bug.

This is triggered by a query on the indexed data, and following it up the stack is really interesting. I wonder… what happens if I query before I restart the database? With this structure, that is easy to do. This is actually a big relief; I had no idea why restarting the database would cause us to expose this bug.

Another thing to note is that when I originally ran into the problem, I reproduced it on a query that sorted on a single field. In the test code, I'm testing on all fields, so that may be an asset in exposing this faster. Right now the problem reproduces on the id field, which is unique. That helps, because it removes a large swath of code that deals with multiple terms for an entry.

The current stage is that I can now reproduce this issue without running the queries, and I know exactly where it goes wrong, so I can put a breakpoint on the exact location where this entry is created. By the way, note that I'm modifying the code instead of using a conditional breakpoint (a minimal sketch of this pattern appears below). This is because of the performance difference: for a conditional breakpoint, the debugger has to stop execution, evaluate the condition, and decide what to do. If the line is hit a lot, that can have a huge impact on performance. It's easier to modify the code. The fact that I can do that, hit F5, and get to the same state gives me a lot more freedom in the ergonomics of how I work.

This allowed me to discover that the entry in question was created during the second transaction, but the failure happens during the third, which is really interesting. More to the point, it means that I can try to trigger the assert on the exact entry that causes the problem. This is a good idea, and I wish it had worked, but we actually do a non-trivial amount of work during the commit process, so now we have negative feedback and another clue: this is happening in the commit phase of the indexing process. That's not a big loss; I can apply the same trick in the commit process as well. I have done just that, and now I know that I have a problem when indexing the term "tweets/1212163952102137856", which leads to the code that handles it. At this point, I can single-step through it and figure out what is going on, I hope.
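To make the "modify the code instead of a conditional breakpoint" trick concrete, here is a minimal sketch; the surrounding method is hypothetical, but the term is the one from this investigation:

```csharp
using System.Diagnostics;

static void IndexEntry(string id /*, ... */)
{
    // A hard-coded guard on a hot path costs almost nothing until it hits,
    // unlike a conditional breakpoint, which makes the debugger stop and
    // evaluate the condition on every single pass.
    if (id == "tweets/1212163952102137856")
        Debugger.Break(); // drop into the debugger at exactly the interesting entry

    // ... the actual indexing work continues here ...
}
```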
When working on complex data structures, one of the things you need to do is make it possible to visualize them. Being able to manually inspect the internal structure of your data structures can save you a lot of debugging. As I mentioned, this isn't my first rodeo, so when I narrowed it down to a specific location, I started looking into exactly what was going on. Beforehand, I need to explain a couple of terms (pun intended):

tweets/1212163952102137856 – this is the entry that triggers the error.
tweets/1212163846623727616 – this is the term that should be returned for 1679560.

Looking at the structure at the time of the insert, you can notice that the value for the last page is the same as the one that we are checking for 1679560. Explaining what is going on would take us down a pretty complex path that you probably don't care about, but the situation is that we keep track of the id in two locations, making sure to add and remove it in both as appropriate. However, at certain points we may decide to shuffle things around inside the tree, and we didn't sync that up properly with the rest of the system, leading to a dangling reference.

Now that I know what is going on, I can figure out how to fix it. But the story of this post was mostly about how I figured it out, not the bug itself. The key aspect was to get to the point where I could reproduce the problem easily, so I could repeat it as many times as needed to slowly inch closer to the solution.

Bug chasing, the process is more important than the result

by Oren Eini

posted on: May 04, 2023

I'm doing a pretty major refactoring inside of RavenDB right now. I was able to finish a bunch of work and submitted things to the CI server for testing. RavenDB has several layers of tests, which we run depending on context. During development, we'll usually run the FastTests: about 2,300 tests validating various behaviors, which take just over 3 minutes to complete on my machine. The next tier is the SlowTests, about 26,000 tests that run for about 3 hours on the CI server. Beyond that we have a few more layers, like tests that run only on the nightly builds and stress tests, which can take several minutes each to complete.

In short, the usual process is that you write the code and the relevant tests, validate that you didn't break anything by running the FastTests locally, and then let CI pick up the rest of the work. At the last count, we had about 9 dedicated machines as CI agents, and given our workload, a full test run of a PR may complete the next day.

I'm mentioning all of that to explain that when I reviewed the build log for my PR, I found a bunch of failing tests. That was reasonable, given the scope of my changes. I sat down to grind through them, fixing them one at a time. Some of them were quite important things that I hadn't taken into account. For example, one test failed because I didn't account for sorting on a dynamic numeric field. Sorting on a numeric field worked, and a dynamic text field also worked, but a dynamic numeric field didn't. It's the sort of thing I would never think of, but we have the tests to cover us.

But when I moved to the next test, it didn't fail. I ran it again, and it still didn't fail. I ran it in a loop, and it failed on the 5th iteration. That… sucked, because it meant I had a race condition in there somewhere. I ran the loop again, and it failed again on the 5th. In fact, in every iteration I tried, it would only fail on the 5th iteration.

When trying to isolate a test failure like that, I usually run it in a loop and hope that with enough iterations I'll get it to reproduce. Having it happen consistently on the 5th iteration was… really strange. I tried to figure out what was going on, and I realized that the test was generating 1,000 documents using a Random. The fact that I'm using Random is the reason it is non-deterministic, of course, except that the instance in my test base class is properly initialized with a seed, so it should be consistent. Read the code again (sketched below), do you see the problem? It is a static value. So there are two problems here: I'm getting the bad values on the fifth run in a consistent manner because that is the set of results that reproduces the error, and it is a shared instance that may be called from multiple tests at once, leading to the wrong results because Random is not thread-safe.

Before fixing this issue so it would run properly, I decided to use an ancient debugging technique. It's called printf(). In this case, I wrote out all the values that were generated by the test and wrote a new test to replay them. That one failed consistently. The problem was that it was still too big in scope. I iterated over this approach, trying to end up with a smaller section of the codebase that I could invoke to repeat this issue. That took most of the day.
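The base-class code appears in the post as a screenshot; here is a hedged reconstruction of the pattern it describes (the field name and seed are made up):

```csharp
public abstract class TestBase
{
    // Hypothetical reconstruction - the actual field and seed differ.
    // Two problems hide in 'static':
    //   1. One seeded instance per process makes the value stream
    //      deterministic, so the bad values always show up on the same
    //      (5th) iteration of the loop.
    //   2. The instance is shared across tests that may run in parallel,
    //      and System.Random is not thread-safe.
    protected static readonly Random Random = new Random(357);
}
```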
But the end result is a test like this (SmallTest.cs):

```csharp
[Fact]
public void CanAddAndRemoveItems()
{
    using (var wtx = Env.WriteTransaction())
    {
        var c = wtx.OpenContainer("test");
        var items = Items;
        for (int i = 0; i < items.Length; i++)
        {
            switch (items[i].Item1)
            {
                case '+':
                    Container.Allocate(wtx.LowLevelTransaction, c, items[i].Item2, out _);
                    break;
                case '-':
                    Container.Delete(wtx.LowLevelTransaction, c, items[i].Item2);
                    break;
            }
        }
        wtx.Commit();
    }
}

private (char, int)[] Items = new[]
{
    ('+', 32), ('+', 64), ('+', 32), ('+', 32), ('+', 32), ('+', 32), ('+', 32), ('+', 32),
    ('+', 32), ('+', 32), ('+', 32), ('+', 32), ('+', 32), ('+', 32), ('+', 32), ('+', 32),
    ('+', 32), ('+', 32), ('+', 32), ('+', 32), ('+', 32), ('+', 32), ('+', 32), ('-', 16416),
    ('+', 64), ('-', 16424), ('+', 96), ('-', 16432), ('+', 64), ('-', 16440), ('+', 64), ('-', 16448),
    ('+', 96), ('-', 16456), ('+', 96), ('-', 16464), ('+', 96), ('-', 16472), ('+', 64), ('-', 16480),
    ('+', 64), ('-', 16488), ('+', 96), ('+', 32), ('+', 32), ('+', 32), ('+', 32), ('+', 32),
    ('+', 32), ('+', 32), ('+', 32), ('+', 32), ('-', 16528), ('+', 64), ('-', 16536), ('+', 64),
    ('-', 16584), ('+', 64), ('-', 16560), ('+', 64), ('-', 16520), ('+', 64), ('-', 16592), ('+', 64),
    ('-', 16544), ('+', 64), ('-', 16416), ('+', 96), ('-', 16424), ('+', 128), ('-', 16432), ('+', 96),
    ('-', 16440), ('+', 96), ('-', 16448), ('+', 128), ('-', 16456), ('+', 128), ('-', 16472), ('+', 128),
    ('-', 16480), ('+', 96), ('+', 32), ('+', 32), ('+', 32), ('+', 32), ('+', 32), ('+', 32),
    ('+', 32), ('+', 32), ('+', 32), ('-', 16520), ('+', 96), ('-', 16568), ('+', 64), ('-', 16536),
    ('+', 96), ('-', 16552), ('+', 64), ('-', 16576), ('+', 64), ('-', 16416), ('+', 128), ('-', 16432),
    ('+', 128), ('-', 16440), ('+', 128), ('-', 16448), ('+', 160), ('-', 16456), ('+', 160), ('-', 16464),
    ('+', 160), ('-', 16472), ('+', 160), ('-', 16480), ('+', 128), ('-', 16488), ('+', 128), ('+', 32),
    ('+', 32), ('+', 32), ('+', 32), ('+', 32), ('+', 32), ('+', 32), ('+', 32), ('+', 32),
    ('+', 32), ('+', 32), ('+', 32), ('-', 16592), ('+', 96), ('-', 16528), ('+', 96), ('-', 16584),
    ('+', 96), ('-', 16416), ('+', 160), ('-', 16424), ('+', 192), ('-', 16432), ('+', 160), ('-', 16440),
    ('+', 160), ('-', 16448), ('+', 192), ('-', 16456), ('+', 192), ('-', 16464), ('+', 192), ('-', 16472),
    ('+', 192), ('-', 16488), ('+', 160), ('+', 32),
};
```

As you can see, in terms of the amount of code it invokes, it is pretty minimal. Which is pretty awesome, since that allowed me to figure out what the problem was. I've been developing software professionally for over two decades at this point. I still get caught up by things like that, sigh.