Relatively General .NET

How to deploy .NET Aspire apps to Azure Container Apps

by Jiachen Jiang

posted on: January 29, 2024

Let's take a look at how you can easily deploy .NET Aspire apps to Azure Container Apps with just a few Azure Developer CLI commands!

Checking if a collection is empty in C#

by Gérald Barré

posted on: January 29, 2024

In C#, there are different ways to check if a collection is empty. Depending on the type of collection, you can check the Length, Count, or IsEmpty property. Or you can use the Enumerable.Any() extension method.

// array: Length
int[] array = ...;
var isEmpty = array.Length == 0;
// List: Count…
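The excerpt's code is cut off above; the following is a rough sketch of the checks the post describes. The sample collections here are placeholders for illustration, not taken from the article.

```csharp
using System;
using System.Collections.Generic;
using System.Collections.Immutable;
using System.Linq;

int[] array = { 1, 2, 3 };
var list = new List<int> { 1, 2, 3 };
var immutableArray = ImmutableArray.Create(1, 2, 3);
IEnumerable<int> sequence = Enumerable.Range(1, 3);

// Array: Length
var arrayIsEmpty = array.Length == 0;

// List<T> (and other ICollection<T> types): Count
var listIsEmpty = list.Count == 0;

// ImmutableArray<T> (and similar immutable collections): IsEmpty
var immutableIsEmpty = immutableArray.IsEmpty;

// Any IEnumerable<T>: Enumerable.Any(), negated
var sequenceIsEmpty = !sequence.Any();

Console.WriteLine($"{arrayIsEmpty} {listIsEmpty} {immutableIsEmpty} {sequenceIsEmpty}");
```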

Walkthrough: Turning a Raspberry Pi into an appliance

by Oren Eini

posted on: January 26, 2024

I’m currently playing with a Secret Project (code-named Hugin right now) and for that purpose, I literally ordered all the available Raspberry Pis in Israel. That last statement sounds like a joke, but we checked six to eight places, and our order quantity exceeds the inventory in the country. They are flying the units to us as you read this. I would love to hear what you think I’m doing, by the way. Please share your thoughts on the matter in the comments.

For Hugin, I’m playing with the Pi Zero 2 W, which is about the size of a lighter. They are small and somewhat underpowered, but really cool. They also run RavenDB surprisingly well, but I’ll touch on that in a later post.

The drawback of the Zero is that it basically has two ports: a micro USB and a mini-HDMI. There is also a micro USB for power, but for doing stuff with it, just those two ports. If you are like me, you have more micro USB power cables than you know what to do with. However, micro USB on-the-go connectors or mini-HDMI cables are far rarer these days. I want this to be useful and easy, so I started thinking about how I could make it simpler to work with. Then I realized that the Zero model I’m using (2 W) has built-in wifi, and that meant that I could start getting smart. The idea is that we can turn the Zero into an access point, so all you’ll need is to plug it into power (using a micro USB cable you likely already have), wait half a minute, and connect to the machine.

Once I had the idea, I delved deep into figuring out how to make it work. I managed, and the entire process is pretty simple from a user perspective, but it was anything but to make it work. For the rest of this post, I will be working with the Raspberry Pi Zero 2 W, using Raspberry Pi OS Lite (Legacy, 32 bits) (Debian Bullseye). I tested this on a range of Pis (I apparently got lots, from the Raspberry Pi 3 B to the Raspberry Pi 400), and it worked on everything I tried.

I actually tried quite hard to get it working on the Raspberry Pi OS (the non-legacy version, which is Debian Bookworm). However, I couldn’t get it to behave the way I wanted it to. Setting up a wifi hotspot on Bookworm is easy, but getting it to bind DNS and DHCP to a particular device was beyond my capabilities. From my reading, it doesn’t look like I’m the only one running into issues here.

The basic idea is that on connecting to a WiFi network, most devices will check connectivity and display the captive portal page if needed. In this case, we simply make our application the captive portal page. Hence, the only thing you need to do is connect to the hotspot, and everything else is handled for you. This blog post was really helpful in figuring things out.

How this works, however, is a whole other matter. I’m assuming that you are running on a clean slate, booting for the first time on a clean image of Raspberry Pi OS Lite (Bullseye). The first thing to do is to set up the wifi, DNS, and DHCP, like so:

```
sudo raspi-config nonint do_wifi_country IL
sudo rfkill unblock wifi
sudo apt-get install -y nginx dnsmasq dhcpcd
```

We first set up the country for wifi, unblock it, and install nginx, dnsmasq, and dhcpcd. Our next step is to update /etc/wpa_supplicant/wpa_supplicant.conf to create the actual hotspot:

```
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
country=IL

network={
    ssid="MyHotSpot"
    mode=2
    key_mgmt=NONE
    frequency=2412
}
```

We define the MyHotSpot network as an open (key_mgmt=NONE) access point (mode=2).
We need to plug this into the DHCP configuration in /etc/dhcpcd.conf:

```
hostname
clientid
persistent
option rapid_commit
option domain_name_servers, domain_name, domain_search, host_name
option classless_static_routes
option interface_mtu
require dhcp_server_identifier
slaac private
env wpa_supplicant_conf=/etc/wpa_supplicant/wpa_supplicant.conf

interface wlan0
    static ip_address=10.1.1.1/24
```

The last part is the most important bit. We pull the wpa_supplicant configuration that we previously defined, apply it to the WiFi device (wlan0), and register a static IP 10.1.1.1 for that interface. Basically, the WiFi interface will use that IP address as the gateway for clients connecting to it. Those clients need to get their own IP addresses, and that is the role of dnsmasq (no idea why it isn’t dhcpcd that does it, it’s literally in the name). Here is the relevant configuration file, /etc/dnsmasq.conf:

```
listen-address=10.1.1.1
no-hosts
log-queries
log-facility=/var/log/dnsmasq.log
dhcp-range=10.1.1.2,10.1.1.254,72h
dhcp-option=option:router,10.1.1.1
dhcp-authoritative
dhcp-option=114,http://awesome.appliance/
dhcp-option=160,http://awesome.appliance/

# Resolve everything to the portal's IP address.
address=/#/10.1.1.1

# Android Internet Connectivity Test Domains
address=/clients1.google.com/127.0.0.1
address=/clients3.google.com/127.0.0.1
address=/connectivitycheck.android.com/127.0.0.1
address=/connectivitycheck.gstatic.com/127.0.0.1
```

There is a lot going on here. We define the DHCP range from which clients will get their IPs and set the router for this connection. We also define option 114 (and 160, which is a legacy one) to instruct the client that it needs to first visit that URL before connecting to the wider internet. Finally, we set up DNS in such a way that all DNS entries go to the server, except for a certain set of known domains used by some Android phones to check for an internet connection. We’ll touch on that in a bit.

In short, all of this configuration basically tells the Zero to create a WiFi hotspot with IP 10.1.1.1, assign connected devices IP addresses in the range 10.1.1.2 .. 10.1.1.254, set the DNS server for those devices to 10.1.1.1, and resolve any DNS query to IP 10.1.1.1. Also, if they care to, there is a specific URL users need to visit to get things started. In short, we are trying to guide the user to take us to the right place.

One problem we have, however, is that we didn’t set up anything to respond to HTTP requests. That is why we installed nginx earlier. We configure it using /etc/nginx/sites-available/default:

```
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;

    location / {
        return 302 http://awesome.appliance;
    }
}

server {
    listen *:80;
    server_name awesome.appliance;
    root /var/appliance/web;
    autoindex on;
}
```

The idea here is simple. Everything before basically directs the client to the server, all domains go to it, etc. So when a connection comes, we tell nginx that it should return a 302 response (redirect) to the portal endpoint we have. If the client is requesting the http://awesome.appliance address, however, we serve an actual website.

All of this together ends up with an open access point that, upon connection, will direct you to a web page. This is a walled garden, of course, since we assume that the Zero is connected only to power. Now that this is solved, you need to figure out what function you want the appliance to actually have.
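If the appliance ends up needing more than static pages, one option (purely an illustration; the walkthrough itself stops at the static nginx site) is a tiny ASP.NET Core app sitting behind nginx. The port, markup, and proxy arrangement below are assumptions, not part of the original post.

```csharp
// Hypothetical sketch (not from the original post): the portal served out of
// /var/appliance/web could instead be a small ASP.NET Core app that nginx
// proxies to, assuming the .NET runtime is installed on the Pi.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Landing page shown to anyone who connects to the hotspot.
app.MapGet("/", () => Results.Content(
    "<html><body><h1>Awesome Appliance</h1><p>Appliance setup goes here.</p></body></html>",
    "text/html"));

// Bind to a local port only; nginx remains the public entry point on port 80.
app.Run("http://127.0.0.1:5000");
```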

Microsoft Office’s RTC (Real-Time Channel) migration to modern .NET

by Gilad Oren

posted on: January 25, 2024

Real-Time Channel is the Microsoft Office Online service that powers real-time collaboration and coauthoring. This blog post describes the journey of migrating the service from .NET Framework to modern .NET.

Corax, Lucene, Benchmarks and lies!

by Oren Eini

posted on: January 24, 2024

When we started working on Corax (10 years ago!), we had a pretty simple mission statement for it: “Lucene, but 10 times faster for our use case”. When we actually started implementing this in code (early 2020), we had a few more rules about the direction we wanted to take. Corax had to be faster than Lucene in all scenarios, and 10 times faster for common indexing and querying scenarios. Corax’s design is meant for online indexing, not batch-oriented like Lucene. We favor moving work to indexing time and ensuring that our data structures on disk can be used with no additional processing time.

Lucene was created at a time when data sizes were much smaller and disks were far more expensive. It shows in the overall design in many ways, but one of the critical aspects is that Lucene’s file format is compressed, meaning that you need to read the data, decode it into the in-memory data structure, and then process it. For RavenDB’s use case, that turned out to be a serious problem. In particular, the issue of cold queries, where you query the database for the first time and have to pay the initialization cost, was particularly difficult.

Now, cold queries aren’t really that interesting from a benchmark perspective; you have to warm things up in every piece of software (caches are everywhere, from your disk to your CPU). I like to say that even memory has caches (yes, plural) because it is so slow (L1, L2, L3 caches). With Lucene’s design, however, whenever it runs an indexing batch, it creates a new file, and starting to query after that means a “cold start” for that file. Usually, those files are small, but every now and then Lucene needs to merge several files together, and then we have to pay the cold start price for a large amount of data.

The issue is that this sometimes introduces a high latency spike (hitting us in the P999 targets), which is really hard to smooth over. We spent a lot of time and engineering resources ensuring that this doesn’t have a big impact on our users. One of the design goals for Corax was to ensure that this doesn’t happen, and that we are able to get consistent performance from the system without periodic maintenance tasks. That led us to a very different internal design. The persistent data structures that we use are meant to be used as is, without initial processing.

Everything has a cost, and in this case, it means that the size of Corax on disk is typically somewhat larger than Lucene’s. The big advantage is that the amount of memory used by Corax tends to be significantly lower. And in today’s world, disks are far cheaper than memory. Corax’s cold start time is orders of magnitude faster than Lucene’s.

It turns out that there is a huge impact in another scenario as well, one that was completely unexpected. We continuously run performance tests on our system, and we got some ridiculous results when testing query performance using encrypted databases. When you use encryption at rest, RavenDB ensures that the only time your data is decrypted is when there is an active transaction using the data. In other words, even in-memory buffers are encrypted. That applies to documents as well as indexes. It does not apply to the in-memory data that Lucene holds in its cache, though.
For Corax, however, all of its state is encrypted. When we ran our benchmark on encrypted database queries, we expected to see either roughly the same performance between Corax and Lucene, or Lucene edging out Corax in this scenario, since it can use its cache without paying decryption costs.

Instead, we got really puzzling results. I tried showing them in bar chart format, but I literally couldn’t make the data fit in a reasonable size. The scenario is testing queries on an encrypted database, using an m5.xlarge instance on AWS. We are hitting the server with 500 queries/second and testing for the 99.99 percentile performance.

Indexing Engine | 99.99% percentile (ms) | 99.99% percentile (seconds)
Lucene          | 40,210                 | 40.21
Corax           | 186                    | 0.18

Take a look at those numbers! Somehow Corax is absolutely smoking Lucene’s lunch. And I was quite surprised about that. I mean, I’m happy, I guess, that the indexing engine we spent so much time on is doing this well, but any time we see a performance number that we cannot explain, we need to figure out what is going on.

Here is the profiler output for this benchmark, using Lucene. As you can see, the vast majority of the time is spent decrypting pages, and we are decrypting pages belonging to a stream. Those are the Lucene files, stored (encrypted, in this case) inside of Voron. The issue is that the access pattern Lucene is using forces us to touch large parts of the file. It usually reads a very small portion each time, but in various locations. Given that the data is encrypted, we have to decrypt each of those locations.

Corax, on the other hand, keeps its persistent data structures in such a way that we only need to access specific pages. That means that in terms of the number of pages touched for this particular scenario, Lucene is using a lot more than Corax. You’ll usually not notice that, since Voron (our storage engine) is memory mapped and those accesses are cheap. When using encrypted storage, however, we need to decrypt the data first, so the difference becomes very noticeable. It’s interesting to note that this also applies to situations where memory pressure is involved: Corax tends to touch a lot less memory and has a smaller working set, while Lucene generates more page faults.

Really interesting results, and I’m both happy and amused that totally different design decisions have led to such a big impact in this scenario. In short, Corax is fast, really fast, and in many more scenarios than we initially thought.
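For a sense of what measuring a tail percentile like this involves on the client side, here is a rough sketch using the RavenDB .NET client. It is only an illustration, not the benchmark the team ran (which drives 500 queries/second against an m5.xlarge server); the URL, database name, and the Order class are assumptions.

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using Raven.Client.Documents;

// Hypothetical harness: measure per-query latency against a RavenDB database
// and report the 99.99th percentile. The URL, database name, and Order type
// below are illustrative assumptions.
using var store = new DocumentStore
{
    Urls = new[] { "http://localhost:8080" },
    Database = "Benchmark"
};
store.Initialize();

var latencies = new List<double>();
for (var i = 0; i < 100_000; i++)
{
    var sw = Stopwatch.StartNew();
    using (var session = store.OpenSession())
    {
        // Any representative query works here; this one just pages through documents.
        var page = session.Query<Order>()
            .Skip(i % 1_000)
            .Take(25)
            .ToList();
    }
    latencies.Add(sw.Elapsed.TotalMilliseconds);
}

latencies.Sort();
var p9999 = latencies[(int)(latencies.Count * 0.9999) - 1];
Console.WriteLine($"99.99th percentile: {p9999:F2} ms");

// Stand-in document type for the sketch.
public class Order
{
    public string Id { get; set; }
}
```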

Introducing the MSTest Runner – CLI, Visual Studio, & More

by Amaury Levé

posted on: January 24, 2024

MSTest runner is a new, lightweight, and portable runner for MSTest tests, available in the .NET CLI, Visual Studio, and more!

Meta Blog

by Oren Eini

posted on: January 23, 2024

Following my previous post about updating the publishing platform of this blog, I realized that I dug myself into a hole. The new workflow was pretty sweet, to the point where I wrote my blog posts a lot more frequently than before, as you can probably tell. The problem was that I wanted to edit and process the blog post inside Google Docs, where I have a great workflow for editing, reviews, collaboration, etc., and then push that same document to the blog. The killer for me is that I want that to be a smooth process, and the end text should fit into the blog. That means, if I want to emphasize something, it should be seen in the blog as bold. And if I want to write some code, that should work as well. In fact, the reason that I started this process is that it got so annoying to post code to the blog.

I’m using Google Docs’ export functionality to get the HTML back, and I did some basic cleaning to get it blog-ready instead of being focused on visual fidelity. I was using HTML Agility Pack to do that, and it turned out to be the wrong tool for the job. The issue is that it processed the data as if it were an XML document. I actually have a lot of track record with XML, so that wasn’t the issue. The problem is that I wanted to do a series of non-trivial things with the HTML, and there aren’t any off-the-shelf facilities to do that in .NET that I could find. For example, given how important it is to me to show code snippets properly, I wanted to be able to grab them from the document, figure out what language I’m actually using, and syntax highlight them properly. There isn’t anything like that in .NET; all the libraries I found were for JavaScript.

You know the adage about “let’s rewrite it in Rust”? I rewrote my entire publishing process in JavaScript. Which then led me to another adventure. How can I do two contrary things? When I’m writing this document, I want to be able to just write the code. When I publish it, I want to see the syntax-highlighted code, properly formatted and working.

Google Docs has support for writing code blocks inline (for some small number of languages), which is great for the editing process. However, the HTML that this generates is beyond atrocious. What is even worse, the HTML doesn’t align things properly using fixed-sized fonts, etc. In other words, it is almost there, but not quite. When analyzing the Google Docs output, I noticed a couple of funny characters in the code output. Here is what it looks like. I believe this is a bug in the export process, probably related to the way code blocks work in Google Docs.

Dear Googlers, if you are reading this, please make a note that this thing has just been Hyrum’s Law-ed. It is an observable state, and I’m relying on it to do important tasks. Don’t break this in the future.

It turns out these are actually a pair of Unicode characters. More specifically, they are Unicode characters that are marked for private use:

0xEC03 - appears to be used to mark the beginning of a code block
0xEC02 - appears to be used to mark the end of a code block

Note the “appears”, and my blatant disregard for things like software maintenance discipline and all things proper and good in the world of Computer Science. This is a project where there are no rules, there is one customer, and he can code 🙂.

As mentioned earlier, while extracting the Google Doc as HTML and processing it, I encounter those Unicode markers that delineate the code section. This is good, because in terms of the HTML itself, what is inside those sections is a… mess.
Getting the actual text as it is supposed to be is not easy. So I exported the file again, as text. Those markers show up in the textual edition as well, which made things a lot easier for me. With all of this done, allow me to show you some truly horrifying beautiful code:

```javascript
let blocks = [];
for (const match of text.data.matchAll(/\uEC03(.*?)\uEC02/gs)) {
    const code = match[1].trim();
    const lang = flourite(code, { shiki: true, noUnkown: true }).language;
    const formattedCode = Prism.highlight(code, Prism.languages[lang], lang);
    blocks.push("<hr/><pre class='line-numbers language-" + lang + "'>" +
        "<code class='line-numbers language-" + lang + "'>" +
        formattedCode + "</code></pre><hr/>");
}

let codeSegmentIndex = 0; // index into blocks, declared here for completeness
let inCodeSegment = false;
htmlDoc.findAll().forEach(e => {
    var text = e.getText().trim();
    if (text == "&#60419;") {
        e.replaceWith(blocks[codeSegmentIndex++]);
        inCodeSegment = true;
    }
    if (inCodeSegment) {
        e.extract();
    }
    if (text == "&#60418;") {
        inCodeSegment = false;
    }
})
```

That isn’t a lot of code, but it does plenty. We scan through the textual version of the document and find all the code blocks using a regular expression. We then try to figure out what language I’m using and apply code formatting during the publication process (this saves the need to change anything on the blog, which is nice, especially since we have to take into account syndication). I push the code snippets into an array, and then I process the actual HTML document using the DOM and find all the code snippets. I replace the start marker with the actual formatted code and continue to discard all the other elements until I hit the end of the code segment. The rest of the code remains pretty much the same as before.

I was writing this in VS Code, and Copilot suggested the following code for handling images:

```javascript
htmlDoc.findAll('img').forEach(img => {
    if (img.attrs.hasOwnProperty('src')) {
        let src = img.attrs.src;
        let imgName = src.split('/').pop();
        let imgData = entries.find(e => e.entryName === 'images/' + imgName).getData();
        let imgType = imgName.split('.').pop();
        let imgSrc = 'data:image/' + imgType + ';base64,' + imgData.toString('base64');
        img.replaceWith('<img src="' + imgSrc + '" style="float: right"/>');
    }
})
```

In other words, instead of uploading the images as separate files, I can just encode them into the blog post directly. I like that idea very much because it means that I don’t have to store the images elsewhere. Given that I don’t have any npm packages to abandon, I don’t know if I can call myself a JavaScript developer, but I did put the full code up for people to take a peek and then recoil.

2023 Year in Review

by Ardalis

posted on: January 23, 2024

Now that CodeMash is over (I had a workshop and 2 talks, including a new one), I can focus enough to write up this review post. For those…

.NET Framework January 2024 Cumulative Update Preview

by Salini Agarwal

posted on: January 23, 2024

January 2024 Cumulative Update Preview Updates for .NET Framework.

Testing your incremental generator pipeline outputs are cacheable

by Andrew Lock

posted on: January 23, 2024

Creating a source generator - Part 10