Relatively General .NET

How to use a natural sort in PowerShell

by Gérald Barré

posted on: April 07, 2025

PowerShell doesn't provide a built-in way to use a natural sort. However, you can use the Sort-Object cmdlet with a custom script block to achieve a natural sort. In this post, I describe how you can use a natural sort in PowerShell.

What is a natural sort?

A natural sort is a sorting algorithm that…
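The excerpt above is truncated, but the core idea of a natural sort — comparing runs of digits by numeric value rather than character by character — can be sketched in Python (the post itself uses PowerShell; this is just an illustration of the concept):

```python
import re

def natural_key(s):
    # Split the string into digit and non-digit runs, e.g.
    # "file10.txt" -> ["file", 10, ".txt"], so numbers compare numerically.
    return [int(part) if part.isdigit() else part.lower()
            for part in re.split(r"(\d+)", s)]

names = ["file10.txt", "file2.txt", "file1.txt"]
print(sorted(names))                   # lexicographic: file1, file10, file2
print(sorted(names, key=natural_key))  # natural: file1, file2, file10
```

The trick is the capturing group in `re.split`, which keeps the digit runs in the result so they can be converted to integers.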

RavenDB on AWS Marketplace

by Oren Eini

posted on: April 04, 2025

We just announced the general availability of RavenDB on AWS Marketplace. By joining AWS Marketplace, we provide users with a seamless purchasing experience, flexible deployment options, and direct integration with their AWS billing. You can go directly to RavenDB on AWS Marketplace here. That means:

- One-click cluster deployment
- Easy scaling for growing workloads
- High availability and security on AWS

Most importantly, being a partner in AWS Marketplace allows us to optimize costs and offer you flexible billing options via the Marketplace. This opens up a whole new world of opportunities for collaboration. You can find more at the following link.

RavenDB

by Oren Eini

posted on: April 02, 2025

.NET Aspire is a framework for building cloud-ready distributed systems in .NET. It allows you to orchestrate your application along with all its dependencies, such as databases, observability tools, messaging, and more. RavenDB now has full support for .NET Aspire. You can read the full details in this article, but here is a sneak peek.

Defining the RavenDB deployment as part of your host definition:

```csharp
using Projects;

var builder = DistributedApplication.CreateBuilder(args);

var serverResource = builder.AddRavenDB(name: "ravenServerResource");
var databaseResource = serverResource.AddDatabase(
    name: "ravenDatabaseResource",
    databaseName: "myDatabase");

builder.AddProject<RavenDBAspireExample_ApiService>("RavenApiService")
    .WithReference(databaseResource)
    .WaitFor(databaseResource);

builder.Build().Run();
```

And then making use of that in the API projects:

```csharp
var builder = WebApplication.CreateBuilder(args);
builder.AddServiceDefaults();
builder.AddRavenDBClient(connectionName: "ravenDatabaseResource", configureSettings: settings =>
{
    settings.CreateDatabase = true;
    settings.DatabaseName = "myDatabase";
});
var app = builder.Build();

// here we'll add some API endpoints shortly…

app.Run();
```

You can read all the details here. The idea is to make it easier & simpler for you to deploy RavenDB-based systems.

Introducing Rook AI

by Oren Eini

posted on: April 01, 2025

Say hello to Rook AI. RavenDB's mascot just went beyond the singularity and then some.

We cranked up the AI to a whole new level. Rook doesn't just handle queries, it gets them. Your data, your queries, your wishes. Need a query? Done. Forgot something? Rook's on it. Lottery numbers? Rook picked a few good ones.

Powered by QLBM (Quantum Large Beak Model™), it's always one step ahead. Clippy walked so this bird could fly.

Modernizing push notification API for Teams

by Rudolf, Frantisek

posted on: April 01, 2025

Push Notification Hub is an internal service that plays a crucial role in the messaging and calling flows within Teams and other platforms. This article describes its recent overhaul, which has significantly enhanced its performance and reduced latencies in delivering push notifications to user devices.

AI Integration in RavenDB - Embeddings Generation

by Oren Eini

posted on: March 31, 2025

In version 7.0, RavenDB introduced vector search, enabling semantic search on text and image embeddings. For example, searching for "Italian food" could return results like Mozzarella & Pasta. We are now focusing our efforts on enhancing the usability and capability of this feature.

Vector search uses embeddings (AI models' representations of data) to search for meaning. Embeddings and vectors are powerful but complex, and the Embeddings Generation feature simplifies their use. RavenDB makes it trivial to add semantic search and AI capabilities to your system by natively integrating with AI models to generate embeddings from your data. RavenDB Studio's AI Hub allows you to connect to various models by simply specifying the model and the API key.

You can read more about this feature in this article or in the RavenDB docs. This post is about the story & reasoning behind this feature. Cloudflare has a really good post explaining how embeddings work. TL;DR: it is a way for you to search for meaning. That is why Ravioli shows up for "Italian food": the model understands their association and places them near each other in vector space.
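The "near each other in vector space" intuition is usually measured with cosine similarity. Here is a minimal sketch in Python; the three-dimensional vectors are invented purely for illustration (real embeddings have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    # cos(theta) = (a . b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors: "ravioli" and "italian food" point in similar directions,
# "lawnmower" does not.
ravioli      = [0.9, 0.8, 0.1]
italian_food = [0.8, 0.9, 0.2]
lawnmower    = [0.1, 0.0, 0.9]

print(cosine_similarity(ravioli, italian_food))  # close to 1.0
print(cosine_similarity(ravioli, lawnmower))     # much smaller
```

A vector search index does essentially this comparison, just at scale and with approximate-nearest-neighbor structures instead of a linear scan.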
I'm assuming that you have at least some understanding of vectors in this post. The Embeddings Generation feature in RavenDB goes beyond simply generating embeddings for your data. It addresses the complexities of updating embeddings when documents change, managing communication with external models, and handling rate limits.

The elevator pitch for this feature is: RavenDB natively integrates with AI models to generate embeddings from your data, simplifying the integration of semantic search and AI capabilities into your system. The goal is to make using the AI model transparent for the application, allowing you to easily and quickly build advanced AI-integrated features without any hassle.

While this may sound like marketing jargon, the value of this feature becomes apparent when you experience the challenges of working without it. To illustrate this, RavenDB Studio now includes an AI Hub. You can create a connection to any of the following models: Basically, the only thing you need to tell RavenDB is what model you want and the API key to use. Then, it is able to connect to the model.

The initial release of RavenDB 7.0 included bge-micro-v2 as an embedded model. After using that and trying to work with external models, it became clear that the difference in ease of use meant we had to provide a good story around using embeddings. There are some things I'm not willing to tolerate, and the current state of working with embeddings in most other databases is a travesty of complexity.
Next, we need to define an Embeddings Generation task, which looks like this: Note that I'm not doing a walkthrough of how this works (see this article or the RavenDB docs for more details about that); I want to explain what we are doing here.

The screenshot shows how to create a task that generates embeddings from the Title field in the Articles collection. For a large text field, chunking options (including HTML stripping and markdown) allow splitting the text according to your configuration and generating multiple embeddings. RavenDB supports plain text, HTML, and markdown, covering the vast majority of text formats. You can simply point RavenDB at a field and it will generate embeddings, or you can use a script to specify the data for embeddings generation.

Quantization

Embeddings, which are multi-dimensional vectors, can have varying numbers of dimensions depending on the model. For example, RavenDB's embedded model (bge-micro-v2) has 384 dimensions, while OpenAI's text-embedding-3-large has 3,072 dimensions. Other common dimension counts are 768 and 1,536. Each dimension in the vector is represented by a 32-bit float, which indicates the position in that dimension. Consequently, a vector with 1,536 dimensions occupies 6KB of memory, and storing 10 million such vectors would require over 57GB of memory.

Although storing raw embeddings can be beneficial, quantization can significantly reduce memory usage at the cost of some accuracy. RavenDB supports both binary quantization (reducing a 6KB embedding to 192 bytes) and int8 quantization (reducing 6KB to 1.5KB). By using quantization, 57GB of data can be reduced to 1.7GB, with a generally acceptable loss of accuracy. Different quantization methods can be used to balance space savings and accuracy.

Caching

Generating embeddings is expensive. For example, using text-embedding-3-small from OpenAI costs $0.02 per 1 million tokens. While that sounds inexpensive, this blog post has over a thousand tokens so far and will likely reach 2,000 by the end. One of my recent blog posts had about 4,000 tokens. This means it costs roughly 2 cents per 500 blog posts, which can get expensive quickly with a significant amount of data.

Another factor to consider is handling updates. If I update a blog post's text, a new embedding needs to be generated. However, if I only add a tag, a new embedding isn't needed. We need to be able to handle both scenarios easily and transparently. Additionally, we need to consider how to handle user queries. As shown in the first image, sending direct user input for embedding in the model can create an excellent search experience. However, running embeddings for user queries incurs additional costs.

RavenDB's Embedding Generation feature addresses all these issues. When a document is updated, we intelligently cache the text and its associated embedding instead of blindly sending the text to the model to generate a new embedding each time. This means embeddings are readily available without worrying about updates, costs, or the complexity of interacting with the model. Queries are also cached, so repeated queries never have to hit the model. This saves costs and allows RavenDB to answer queries faster.

Single vector store

The number of repeated values in a dataset also affects caching. Most datasets contain many repeated values. For example, a help desk system with canned responses doesn't need a separate embedding for each response. Even with caching, storing duplicate information wastes time and space. RavenDB addresses this by storing the embedding only once, no matter how many documents reference it, which saves significant space in most datasets.

What does this mean? I mentioned earlier that this is a feature you can only appreciate when you contrast it with the way you work with other solutions, so let's talk about a concrete example. We have a product catalog, and we want to use semantic search on that.
We define the following AI task: It uses the open-ai connection string to generate embeddings from the Products' Name field. Here are some of the documents in my catalog: In the screenshots, there are all sorts of phones, and the question is how we can search through them in interesting ways using vector search.

For example, I want to search for Android phones. Note that there is no mention of Android in the catalog; we are going just by the names. Here is what I do:

```
$query = 'android'

from "Products"
where vector.search(
    embedding.text(Name, ai.task('products-on-openai')),
    $query
)
```

I'm asking RavenDB to use the existing products-on-openai task on the Name field and the provided user input. And the results are:

I can also invoke this from code, searching for a "mac":

```csharp
var products = session.Query<Products>()
    .VectorSearch(
        x => x.WithText("Name").UsingTask("products-on-openai"),
        factory => factory.ByText("Mac"))
    .ToList();
```

This query will result in the following output:

That matched my expectations, and it is easy, and it totally and utterly blows my mind. We aren't searching for values or tags or even doing full-text search. We are searching for the semantic meaning of the data. You can even search across languages.
For example, take a look at this query: This just works!

Here is a list of the things that I didn't have to do:

- Generate the embeddings for the catalog, and ensure that they are up to date as I add, remove & update products
- Handle long texts and appropriate chunking
- Perform quantization to reduce storage costs
- Handle issues such as rate limits, model downtime (the GPUs at OpenAI are melting as I write this), and other "fun" states
- Create a vector search index
- Generate an embedding vector from the user's input (see above for all the details we skip here)
- Query the vector search index using the generated embedding

This allows you to focus directly on delivering solutions to your customers instead of dealing with the intricacies of AI models, embeddings, and vector search.

I asked Grok to show me what it would take to do the same thing in Python. Here is what it gave me. Compared to this script, the RavenDB solution provides:

- Efficient management of data updates, including skipping model calls for unchanged data and regenerating embeddings when necessary
- Batched requests to boost throughput
- Concurrent embedding generation to minimize latency
- Cached results to prevent redundant model calls
- A single store for embeddings to eliminate duplication
- Caching and batching for queries

In short, Embeddings Generation is the sort of feature that lets you integrate AI models into your application with ease. Use it to spark joy in your users easily, quickly, and without any hassle.
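As a back-of-the-envelope check on the numbers from the Quantization section above, the storage arithmetic works out in a few lines of Python (sizes only; the actual quantization algorithms are far more involved than this math):

```python
DIMS = 1536          # e.g. OpenAI text-embedding-3-small
FLOAT32_BYTES = 4    # each dimension is a 32-bit float
VECTORS = 10_000_000

raw_vector = DIMS * FLOAT32_BYTES     # 6144 bytes = 6 KB per vector
raw_total = VECTORS * raw_vector      # total for 10 million vectors

int8_vector = DIMS                    # int8: 1 byte per dimension = 1.5 KB
binary_vector = DIMS // 8             # binary: 1 bit per dimension = 192 bytes

print(raw_vector, raw_total / 1024**3)    # 6144 bytes, ~57.2 GB
print(int8_vector, binary_vector)         # 1536 bytes, 192 bytes
print(VECTORS * binary_vector / 1024**3)  # ~1.8 GB after binary quantization
```

So the claimed reductions (6 KB to 1.5 KB for int8, 6 KB to 192 bytes for binary, 57 GB down to under 2 GB at 10 million vectors) follow directly from the bit widths.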

Create symbolic links in .NET

by Gérald Barré

posted on: March 31, 2025

A symbolic link is a link to a file or directory. The symbolic link is a second file that exists independently of its target. If a symbolic link is deleted, its target remains unaffected. If the target is deleted, the symbolic link is not automatically removed. This is different from ha…
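The post itself covers the .NET APIs, but the independent-lifetime semantics described above can be demonstrated with Python's standard library, just as an illustration (on POSIX systems; Windows may require elevated privileges to create symlinks):

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "target.txt")
    link = os.path.join(d, "link.txt")

    with open(target, "w") as f:
        f.write("hello")

    os.symlink(target, link)      # link.txt -> target.txt
    print(os.path.islink(link))   # True
    print(open(link).read())      # "hello" (reads through the link)

    os.remove(target)             # delete the target...
    print(os.path.islink(link))   # True: the link file still exists,
    print(os.path.exists(link))   # False: ...but it now dangles
```

Note the asymmetry: `os.path.islink` inspects the link itself (lstat), while `os.path.exists` follows it, which is why a dangling link reports True for one and False for the other.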

Optimizing concurrent count operations

by Oren Eini

posted on: March 26, 2025

I recently reviewed a function that looked something like this:

```csharp
public class WorkQueue<T>
{
    private readonly ConcurrentQueue<T> _embeddingsQueue = new();
    private long _approximateCount = 0;

    public long ApproximateCount => Interlocked.Read(ref _approximateCount);

    public void Register(IEnumerable<T> items)
    {
        foreach (var item in items)
        {
            _embeddingsQueue.Enqueue(item);
            Interlocked.Increment(ref _approximateCount);
        }
    }
}
```

I commented that we should move the Increment() operation outside of the loop because if two threads are calling Register() at the same time, we'll have a lot of contention here. The reply was that this was intentional, since calling Interlocked.CompareExchange() to do the update in a batched manner is more complex. The issue was a lack of familiarity with the Interlocked.Add() function, which allows us to write the function as:

```csharp
public void Register(IEnumerable<T> items)
{
    int count = 0;
    foreach (var item in items)
    {
        _embeddingsQueue.Enqueue(item);
        count++;
    }
    Interlocked.Add(ref _approximateCount, count);
}
```

This allows us to perform just one atomic operation on the count. In terms of assembly, we are going to have these two options:

```
lock inc qword ptr [rcx]  ; Interlocked.Increment()
lock add [rbx], rcx       ; Interlocked.Add()
```

Both options have essentially the same performance characteristics, but if we need to register a large batch of items, the second option drastically reduces the contention. In this case, we don't actually care about having an accurate count as items are added, so there is no reason to avoid the optimization.
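The same batching idea can be sketched in Python for illustration. Python has no Interlocked, so a lock plays the role of the atomic instruction here; the point is identical — one synchronized update per batch instead of one per item:

```python
import threading
from collections import deque

class WorkQueue:
    def __init__(self):
        self._queue = deque()
        self._count = 0
        self._lock = threading.Lock()

    @property
    def approximate_count(self):
        return self._count

    def register(self, items):
        # Enqueue everything first, then take the lock once for the
        # whole batch -- the analogue of a single Interlocked.Add().
        count = 0
        for item in items:
            self._queue.append(item)
            count += 1
        with self._lock:
            self._count += count

q = WorkQueue()
threads = [threading.Thread(target=q.register, args=(range(1000),))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(q.approximate_count)  # 4000
```

With per-item locking, four concurrent registrations of 1,000 items would contend on the lock 4,000 times; here each thread contends exactly once.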