Relatively General .NET

Keeping your software up to date using winget and PowerShell

by Gérald Barré

posted on: July 10, 2023

winget is a package manager for Windows. You can use winget install <package> to install new software. You can also use winget upgrade <package> or winget upgrade --all --silent to upgrade one or all installed software. But what if you want to upgrade only a subset of your installed software?
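A minimal sketch of the subset-upgrade idea: parse the Id column of winget upgrade's output and only upgrade packages on an allow-list. The sample table, package IDs, and allow-list below are illustrative, and the exact column layout of winget upgrade varies by version and locale, so a real script would need a more robust parser.

```shell
# Parse a captured `winget upgrade` table and build upgrade commands only for
# packages on an allow-list. The sample below stands in for real winget output.
sample='Name    Id              Version  Available  Source
7-Zip   7zip.7zip       22.01    23.01      winget
Git     Git.Git         2.40.0   2.41.0     winget
NodeJS  OpenJS.NodeJS   18.16.0  20.3.1     winget'

allowlist=" 7zip.7zip Git.Git "   # the subset we actually want to upgrade

commands=""
# Skip the header row, take the second column (the package Id).
for id in $(echo "$sample" | tail -n +2 | awk '{print $2}'); do
  case "$allowlist" in
    *" $id "*) commands="$commands winget upgrade --id $id --silent;" ;;
  esac
done
echo "$commands"
```

On a real machine you would feed `winget upgrade` output into the same filter and execute the resulting commands instead of echoing them.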

Solving heap corruption errors in managed applications

by Oren Eini

posted on: July 05, 2023

RavenDB is a .NET application, written in C#. It also has a non-trivial amount of unmanaged memory usage, which we absolutely need to get the level of performance we require. With manual memory management comes the possibility of messing it up. We ran into one such case: when running our full test suite (over 10,000 tests), we would get random crashes due to heap corruption. Those issues are nasty, because there is a big separation between the root cause and the point where the problem manifests. I recently learned that you can use the gflags tool on .NET executables. We were able to narrow the problem down to a single scenario, but we still had no idea where it really occurred. So I installed the Debugging Tools for Windows and then executed:

&"C:\Program Files (x86)\Windows Kits\10\Debuggers\x64\gflags.exe" /p /enable C:\Work\ravendb-6.0\test\Tryouts\bin\release\net7.0\Tryouts.exe

What this does is enable a special debug heap at the executable level, which applies to all operations (managed and native memory alike). With that enabled, I ran the scenario in question:

PS C:\Work\ravendb-6.0\test\Tryouts> C:\Work\ravendb-6.0\test\Tryouts\bin\release\net7.0\Tryouts.exe 42896
Starting to run 0
Max number of concurrent tests is: 16
Ignore request for setting processor affinity. Requested cores: 3. Number of cores on the machine: 32.
To attach debugger to test process (x64), use proc-id: 42896. Url http://127.0.0.1:51595
Ignore request for setting processor affinity. Requested cores: 3. Number of cores on the machine: 32.
License limits: A: 3/32. Total utilized cores: 3. Max licensed cores: 1024
http://127.0.0.1:51595/studio/index.html#databases/documents?&database=Should_correctly_reduce_after_updating_all_documents_1&withStop=true&disableAnalytics=true
Fatal error. System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
   at Sparrow.Server.Compression.Encoder3Gram`1[[System.__Canon, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]].Encode(System.ReadOnlySpan`1<Byte>, System.Span`1<Byte>)
   at Sparrow.Server.Compression.HopeEncoder`1[[Sparrow.Server.Compression.Encoder3Gram`1[[System.__Canon, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]], Sparrow.Server, Version=6.0.0.0, Culture=neutral, PublicKeyToken=37f41c7f99471593]].Encode(System.ReadOnlySpan`1<Byte> ByRef, System.Span`1<Byte> ByRef)
   at Voron.Data.CompactTrees.PersistentDictionary.ReplaceIfBetter[[Raven.Server.Documents.Indexes.Persistence.Corax.CoraxDocumentTrainEnumerator, Raven.Server, Version=6.0.0.0, Culture=neutral, PublicKeyToken=37f41c7f99471593],[Raven.Server.Documents.Indexes.Persistence.Corax.CoraxDocumentTrainEnumerator, Raven.Server, Version=6.0.0.0, Culture=neutral, PublicKeyToken=37f41c7f99471593]](Voron.Impl.LowLevelTransaction, Raven.Server.Documents.Indexes.Persistence.Corax.CoraxDocumentTrainEnumerator, Raven.Server.Documents.Indexes.Persistence.Corax.CoraxDocumentTrainEnumerator, Voron.Data.CompactTrees.PersistentDictionary)
   at Raven.Server.Documents.Indexes.Persistence.Corax.CoraxIndexPersistence.Initialize(Voron.StorageEnvironment)

That pinpointed things, so I knew exactly where we were messing up. I was also able to reproduce the behavior in the debugger. This saved me hours or days of trying to figure out where the problem actually was.
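For reference, the gflags page-heap switches relevant here, echoed as a dry run so the sketch is safe to execute anywhere (gflags itself is Windows-only). The /full variant and the bare /p listing form are assumptions based on gflags' documented usage; the post itself only shows /p /enable.

```shell
# Dry run: print the gflags invocations rather than executing them, since
# gflags.exe only exists on Windows with the Debugging Tools installed.
GFLAGS='C:\Program Files (x86)\Windows Kits\10\Debuggers\x64\gflags.exe'
enable_cmd="$GFLAGS /p /enable Tryouts.exe /full"   # full page heap: fault at the exact corrupting access
disable_cmd="$GFLAGS /p /disable Tryouts.exe"       # page heap slows everything down; turn it off afterwards
list_cmd="$GFLAGS /p"                               # list images that currently have page heap enabled
printf '%s\n%s\n%s\n' "$enable_cmd" "$disable_cmd" "$list_cmd"
```

Note that page heap settings are keyed on the image name, so remember to disable them once the investigation is over.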

Café debug - Interview with Oren Eini CEO of RavenDB

by Oren Eini

posted on: July 04, 2023

The minimal API AOT compilation template

by Andrew Lock

posted on: July 04, 2023

Exploring the .NET 8 preview - Part 2

Production postmortem

by Oren Eini

posted on: July 03, 2023

We got a support call from a client in the early hours of the morning: they were getting out-of-memory errors from their database and were understandably perturbed by that. They are running on a cloud system, so the first inclination of the admin when seeing the problem was to deploy the server on a bigger instance, to at least get things running while they investigated. Doubling and then quadrupling the amount of memory in the system had no impact. A few minutes after the system booted, it would raise an error about running out of memory.

Except that it wasn't actually running out of memory. A scenario like that, where we give more memory to the system and still get out-of-memory errors, can indicate a leak or an unbounded process of some kind. That wasn't the case here. In all system configurations (including the original one), there was plenty of additional memory in the system. Something else was going on. When our support engineer looked at the actual details of the problem, it was quite puzzling. It looked something like this:

System.OutOfMemoryException: ENOMEM on Failed to munmap at Sparrow.Server.Platform.Posix.Syscall.munmap(IntPtr start, UIntPtr length);

That error made absolutely no sense, as you can imagine. We are trying to release memory, not allocate it. Common sense says that you can't really fail when you are freeing memory. After all, how can you run out of memory? I'm trying to give you some, damn it! It turns out that this model is too simplistic. You can actually run out of memory when trying to release it. The issue is that it isn't you that is running out of memory, but the kernel. Here we are talking specifically about the Linux kernel and how it works. Obviously, a very important aspect of the kernel's job is managing the system's memory, and to do that, the kernel itself needs memory. For managing the system memory, the kernel uses something called a VMA (virtual memory area). Each VMA has its own permissions and attributes.
In general, you never need to be aware of this detail. However, there are certain pathological cases where you need to set up different permissions and behaviors on a lot of memory areas. In the case we ran into, RavenDB was using an encrypted database. When running on an encrypted database, RavenDB ensures that all plain-text data is written to memory that is locked (it cannot be stored on disk / swapped out). A side effect is that for every piece of memory we lock, the kernel needs to create its own VMA, since each of them is operated on independently of the others. The kernel uses these VMAs to manage its map of the memory, and eventually the number of items in that map exceeds the configured limit. In our case, the munmap call released a portion of the memory back, which means that the kernel needed to split a VMA into separate pieces. But the number of items is limited; it is controlled by the vm.max_map_count value. The default is typically 65530, but database systems often require a lot more than that; the default value is conservative. Adjusting the configuration alleviated this problem, since it gave us sufficient space to operate normally.

How to cancel GitHub workflows when pushing new commits on a branch

by Gérald Barré

posted on: July 03, 2023

When you push multiple commits to a branch, you may want to cancel in-progress workflows and run only the latest workflow. This can be useful to reduce costs if you use paid runners. On free runners, GitHub limits the number of concurrent jobs. So, it can also release resources for other workflows.
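GitHub Actions implements this with a workflow-level concurrency group that cancels superseded runs. A minimal sketch; the group expression shown is a common convention (one group per workflow per branch), not the only option:

```yaml
# Cancel any in-progress run of this workflow for the same branch
# when a new commit is pushed.
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
```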

RavenDB Docker image updates for the v6.0 release

by Oren Eini

posted on: June 30, 2023

We are going to be making some changes to our RavenDB 6.0 docker image; you can already access them using the nightly builds:

docker pull ravendb/ravendb-nightly:6.0-ubuntu-latest

The purpose of those changes is to make it easier to run RavenDB in your environment and to make it a more natural fit for a Linux system. The primary reason we made this change is that we wanted to enable running RavenDB containers as non-root users. Most of the other changes are internal, primarily about file paths and how we actually install RavenDB on the container instance. We now share a single installation process across all Linux systems, which makes it easier to support and manage. This does have an impact on users migrating from RavenDB 5.4 docker images, but the initialization process should migrate them seamlessly. Note that if you have an existing 5.4 docker instance and you want to update it to 6.0 and run as non-root, you may need to explicitly grant the RavenDB user (uid: 999) permissions on the RavenDB data folder. As usual, I would like to invite you to take the system for a spin. We would really love your feedback.
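A dry-run sketch of that migration step: hand the existing data folder to the RavenDB container user (uid 999, per the post) before starting a 6.0 container as non-root. The host data path, container mount point, and port mapping are illustrative assumptions, so the commands are echoed rather than executed.

```shell
# Print the migration commands instead of running them, so this sketch is
# safe to execute on a machine without docker or the data folder.
DATA_DIR=/var/lib/ravendb/data   # illustrative host path to the 5.4 data
chown_cmd="chown -R 999:999 $DATA_DIR"
run_cmd="docker run -d -p 8080:8080 -v $DATA_DIR:/var/lib/ravendb/data ravendb/ravendb-nightly:6.0-ubuntu-latest"
printf '%s\n%s\n' "$chown_cmd" "$run_cmd"
```

Run the chown as root once, before the first 6.0 start; after that the container's non-root user owns its data and no further intervention is needed.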

How to access the Roslyn compilation from a VS extension

by Gérald Barré

posted on: June 29, 2023

The following code is tested on Visual Studio 2022 17.7 and uses the Community toolkit for Visual Studio extensions. Don't forget to install the item templates to create a new extension project. After creating a new extension, you need to reference the Microsoft.VisualStudio.LanguageServices package

Using the new configuration binder source generator

by Andrew Lock

posted on: June 27, 2023

Exploring the .NET 8 preview - Part 1

Generating sequential numbers in a distributed manner

by Oren Eini

posted on: June 26, 2023

On its face, we have a simple requirement:

- Generate sequential numbers
- Ensure that there can be no gaps
- Do that in a distributed manner

Generating the next number in the sequence is literally as simple as ++, so surely that is a trivial task, right? The problem is with the second requirement. The need to ensure that there are no gaps comes up often when dealing with things like invoices. The tax authorities are really keen on "show me all your invoices", and if there are gaps in the numbers, you may have to provide Good Answers. You may think that the third one, running in a distributed environment, is the tough challenge, but that isn't actually the case. If we are running in a single location, that is fairly easy: run the invoice id generation as a transaction, and you are done. But the normal methods of doing that are usually wrong in edge cases.

Let's assume that we use an Oracle database, which uses the following mechanism to generate the new invoice id:

invoice_seq.NEXTVAL

Or SQL Server with an identity column:

CREATE TABLE invoices (
    invoice_id INT IDENTITY(1,1) PRIMARY KEY,
    ...
);

In both cases, we may insert a new value into the invoices table, consuming an invoice id. At some later point in time, we may roll back the transaction. Care to guess what happens then? You have INVOICE #1000 and INVOICE #1002, but nothing in between. In fact, there is usually no way to even tell what happened. In other words, identity, sequence, serial, or autonumber (regardless of what database platform you use) are not suitable for generating gapless numbers. The reasoning is quite simple. Assume that you have two concurrent transactions which generate two new invoices at roughly the same time. You commit the later one before the first one, and roll back the first. You now have: Invoice #1, Invoice #2, … Invoice #1000, Invoice #1002. However, you don't have Invoice #1001, and you cannot roll back the sequence value there, because if you did, it would re-generate #1002 on the next call.
Instead, for gapless numbers, we need to make this a dedicated part of the transaction. So there would be a record in our system that contains the NextInvoiceId, which is incremented as part of creating the new invoice. In order to ensure that there are no gaps, you need to ensure that the NextInvoiceId record increment is handled as a user operation, not a database operation. In other words, in SQL Server, that is a row in a table that you manually increment as part of adding a new invoice. Here is what this looks like:

CREATE PROCEDURE CreateNewInvoice
    @customer_email VARCHAR(50),
    -- Other invoice parameters...
AS
BEGIN
    DECLARE @next_id INT;

    UPDATE next_gapless_ids
    SET @next_id = invoice_id = invoice_id + 1
    WHERE owner = 'invoices';

    -- Insert the new invoice with the generated ID
    INSERT INTO invoices (invoice_id, customer_email, ...)
    VALUES (@next_id, @customer_email, ...);
END

As you can see, we increment the row directly, so it will be rolled back as well. The downside here is that we can no longer create two invoices concurrently; the second transaction has to wait on the lock on the row in the next_gapless_ids table. All of that happens inside a single database server. What happens when we are running in a distributed environment? The answer in this case is the exact same thing. You need a transaction, a distributed one, using a consensus algorithm. Here is how you can achieve this using RavenDB's cluster-wide transactions, which use the Raft protocol behind the scenes:
while (true)
{
    using (var session = store.OpenSession(new SessionOptions
    {
        TransactionMode = TransactionMode.ClusterWide
    }))
    {
        var gaplessId = session.Load<GapLessId>("gapless/invoices");
        var nextId = gaplessId.Value++;
        var invoice = new Invoice
        {
            InvoiceId = nextId,
            // other properties
        };
        session.Store(invoice, "invoices/" + nextId);
        try
        {
            session.SaveChanges();
            break;
        }
        catch (ConcurrencyException)
        {
            continue; // re-try
        }
    }
}

The idea is simple: we have a transaction that modifies the gapless ids document and creates a new invoice at the same time. We have to handle a concurrency exception if two transactions try to create a new invoice at the same time (because they both want to use the same invoice id value), but in essence this is pretty much exactly the same behavior as when you run on a single node. In other words, to ensure the right behavior, you need to use a transaction. And if you need a distributed transaction, that is just a flag away with RavenDB.