RavenDB has the notion of Custom Sorters: we allow you to inject your own logic into the sorting process, which lets you run arbitrarily complex logic when ordering results. There are rarely good reasons to use this. One good use case is when you need to sort by an external value that mutates outside of your control. Let's say that you have invoices in multiple currencies and you want to sort them by their value in USD. The catch: you need them sorted by the current exchange rate. For that, you can use a custom sorter that uses the current value of the currency as the sorting mechanism.
I should point out that from a business perspective, you'll typically want to use the exchange rate that was in effect at the time the order was made, but that isn't related to the custom sorting feature.
Let’s take another example, however. Consider the following Enum:
public enum EducationStatus
{
    HighSchool,
    Associate,
    Bachelor,
    Master,
    Doctor,
}
We want to sort by the education level of our candidates, but by default we'll be sorting using the textual value of the field. That isn't what we want. We could define a custom sorter for that, but there is a far better option: just tell RavenDB what the order should be in the index.
Here is a good example:
from candidate in docs.Candidates
select new
{
    candidate.Name,
    candidate.Education,
    EducationForSorting = candidate.Education.ToString() switch
    {
        "HighSchool" => 12,
        "Associate" => 14,
        "Bachelor" => 16,
        "Master" => 18,
        "Doctor" => 22,
        _ => -1
    }
}
What we are doing here is simple: we translate the textual value to a numeric one. When we query the index, we can filter by the textual value and sort by the sorting value, giving us what we want. This is far simpler and more robust; if you need to add additional values down the line, it is obvious where they need to go. A custom sorter, on the other hand, is far more capable, but also more complex to operate.
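For example, a query against this index might look something like the following sketch. The index name is hypothetical, and I'm using the DocumentQuery API so I can reference the EducationForSorting field that exists only in the index:

using var session = store.OpenAsyncSession();
var candidates = await session.Advanced
    .AsyncDocumentQuery<Candidate>("Candidates/ByEducation") // hypothetical index name
    .WhereIn("Education", new[] { "Bachelor", "Master", "Doctor" }) // filter on the textual value
    .OrderBy("EducationForSorting", OrderingType.Long)              // sort on the numeric value
    .ToListAsync();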
RavenDB is a database, not a queue or a service bus. That said, you can make use of RavenDB subscriptions to get a very similar behavior to a service bus. Let’s see how much effort it will take us to implement backend processing using RavenDB only.
We assume that we have commands or messages that are written to the Commands collection and handled via a subscription (which may have multiple concurrent workers). In terms of your messaging models, we have:
public record class RegisterUserCmd(string UserId, string Email) : CommandBase;
// etc...
public record class AddCharge(string AccountId, string ChargeDescription, decimal Amount) : CommandBase;
view raw
cmds.cs
hosted with ❤ by GitHub
The CommandBase we have here defines the following infrastructure properties:
Status – enum [Initial, Processing, Failed, Completed] – default value is Initial
RetriesCount – int – default value is 3
Error – string – null by default
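The CommandBase class itself isn't shown here, so here is a minimal sketch of what it might look like. The ProcessingTime and ProcessingCompleted properties, the CommandStatus enum shape, and the ExecuteAsync method are inferred from how the worker code below uses them:

public enum CommandStatus
{
    Initial,
    Processing,
    Failed,
    Completed
}

public abstract record class CommandBase
{
    public CommandStatus Status { get; set; }        // Initial by default
    public int RetriesCount { get; set; } = 3;
    public string Error { get; set; }                // null until a failure is recorded
    public DateTime? ProcessingTime { get; set; }
    public DateTime? ProcessingCompleted { get; set; }

    // each concrete command (RegisterUserCmd, AddCharge, etc.) supplies
    // its own processing logic
    public abstract Task ExecuteAsync();
}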
We can now define our subscription using the following query:
from Commands as c where c.RetriesCount > 0 and c.Status != 'Completed' and c.'@metadata'.'@refresh' == null
This query is pretty simple, but it allows me to get all the documents that haven't exceeded their retry count and aren't already completed. The @refresh option allows me to register a command to be executed at a later point in time; see the documentation for details. This is a feature that exists specifically to allow you to schedule commands with subscriptions.
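For example, here is a sketch of deferring a command by an hour. Once the @refresh timestamp passes, the server strips the property and the document becomes visible to the subscription (I'm assuming conventions that place all commands in the Commands collection):

using var session = store.OpenAsyncSession();
var cmd = new AddCharge("accounts/1-A", "Monthly subscription", 19.99m);
await session.StoreAsync(cmd);
// hide the command from the subscription until the refresh time passes
session.Advanced.GetMetadataFor(cmd)["@refresh"] = DateTime.UtcNow.AddHours(1);
await session.SaveChangesAsync();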
In my subscription workers, I can now execute:
worker.Run(async batch =>
{
    using var session = batch.OpenAsyncSession();
    foreach (var item in batch.Items)
    {
        var cmd = item.Result; // the document itself is on the batch item
        cmd.ProcessingTime = DateTime.UtcNow;
        cmd.Status = CommandStatus.Processing;
    }
    await session.SaveChangesAsync(); // mark them as processing...
    foreach (var item in batch.Items)
    {
        var cmd = item.Result;
        try
        {
            await cmd.ExecuteAsync(); // do work specific to the command
            // mark as successful...
            cmd.Status = CommandStatus.Completed;
            cmd.ProcessingCompleted = DateTime.UtcNow;
        }
        catch (Exception e)
        {
            cmd.Status = CommandStatus.Failed;
            cmd.RetriesCount--; // matches the RetriesCount filter in the subscription query
            cmd.Error = e.ToString();
        }
    }
    await session.SaveChangesAsync();
});
The code above is sufficient to get most of the way toward a robust message handling system.
I can easily see which messages are being processed, how long they take, what failed and why, and the full history of commands.
That handles error handling and retries, and gives you introspection into the state of the system; from there you can derive all the relevant numbers on throughput, capacity, etc.
It isn’t a complete solution, but for very little code, you can take this quite a long way.
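For completeness, here is roughly how the subscription and the worker from the code above might be wired up. The subscription name and the options here are my assumptions:

// create the subscription once, using the query shown above
await store.Subscriptions.CreateAsync(new SubscriptionCreationOptions
{
    Name = "CommandsProcessing", // hypothetical name
    Query = "from Commands as c where c.RetriesCount > 0 and c.Status != 'Completed' and c.'@metadata'.'@refresh' == null"
});

// open a worker; several of these can run concurrently against the same subscription
var worker = store.Subscriptions.GetSubscriptionWorker<CommandBase>(
    new SubscriptionWorkerOptions("CommandsProcessing")
    {
        Strategy = SubscriptionOpeningStrategy.Concurrent
    });

await worker.Run(async batch =>
{
    // processing code from above goes here
});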
Enabling RavenDB's revisions allows you to ask RavenDB to keep immutable copies of a document. We originally envisioned this feature as a way to provide easy audit trails and time travel. Revisions were meant to be something that you'd typically access as the administrator, not something that we expected to be used in the normal course of events.
Usage in the field showed that users often want to make revisions a core part of the domain model. We have a user that uses revisions to mark the Approved (and thus, locked) version of a Plan document, for example. Another example is a payroll processing system where the Contract for a particular employee isn’t pointing to a document, but to a specific revision of that contract. Modifications to the contract have no impact on the employee unless they are explicitly moved to a new version of the contract.
Seeing all those use cases pop up for revisions, we added more features around them to make them easier to work with (such as allowing you to explicitly create a revision on demand or enforcing revision policies after the fact).
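For example, creating a revision explicitly from the client looks roughly like this (a sketch; the Plan class and document id are hypothetical, and I'm assuming the ForceRevisionCreationFor API):

using var session = store.OpenAsyncSession();
var plan = await session.LoadAsync<Plan>("plans/1-A");
// explicitly ask RavenDB to create a revision for this document,
// even if no revisions configuration covers its collection
session.Advanced.Revisions.ForceRevisionCreationFor(plan);
await session.SaveChangesAsync();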
In RavenDB 5.3 we have added the ability to include revisions as part of the query, so you get the same reduction in the number of remote calls that you get when working with document references. Let's say that I want to calculate payroll for a set of employees (the Contract property contains the change vector of the specific revision of the contract).
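In RQL, the query might look something like this (a sketch; the exact include revisions clause syntax is my assumption):

from Timecards as t
where t.Project in ($projects)
include revisions(t.Contract)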
In terms of API, this is how you use it:
using var session = store.OpenAsyncSession();

var query = await session.Query<Timecard>()
    .Include(i => i.IncludeRevisions(t => t.Contract))
    .Where(t => t.Project.In(projects))
    .ToListAsync();

foreach (var card in query)
{
    // value is in the session, no remote call here
    var contract = await session.Advanced.Revisions.GetAsync<Contract>(card.Contract);
    // compute time card per contract...
}
This turns a scenario where you previously had to make multiple remote calls into a single call for the whole process. Just the kind of improvement that I like to see.
JSON Patch (RFC 6902) is a format that allows the frontend to send a set of changes to documents. If you are working with complex documents, that can result in a significant reduction in bandwidth. There are many scenarios where the client modifies a document in the browser, then produces a JSON Patch to make the server match the changes.
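On the wire, a JSON Patch document is just an array of operations; for example (the field names here are illustrative):

[
  { "op": "replace", "path": "/Name", "value": "Hibernating Rhinos" },
  { "op": "add", "path": "/Tags/-", "value": "databases" },
  { "op": "remove", "path": "/TemporaryData" }
]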
In RavenDB 5.3, we added direct support for implementing JSON Patch inside of RavenDB. The frontend code can forward the patch operations directly to RavenDB, where they will be executed by the database. Concurrent patches on the same document aren’t going to contend with one another and will be processed correctly. For example, in this post I’m using patch scripts to modify a document. I can do the same using JSON patch as well.
Here is how you can use the new ability:
var jpd = new JsonPatchDocument();
jpd.Add("/Name", "Hibernating Rhinos");
store.Operations.Send(new JsonPatchOperation(documentId, jpd));
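The same JsonPatchDocument API covers the other RFC 6902 operations as well. A sketch, with illustrative paths and assuming the standard JsonPatchDocument method set:

var jpd = new JsonPatchDocument();
jpd.Replace("/Name", "Hibernating Rhinos"); // overwrite an existing value
jpd.Add("/Tags/-", "databases");            // append to an array
jpd.Remove("/TemporaryData");               // delete a property
jpd.Test("/Version", 3);                    // abort the patch if the value doesn't match
store.Operations.Send(new JsonPatchOperation(documentId, jpd));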
A small improvement, but it can make some scenarios much easier.
I like to think of myself as a database guy. My go-to joke about building user interfaces is that a <table> is all I need for layout (it's not a joke). About a decade ago I gave up on trying to follow what is going on in frontend land and accepted that I'll reside in the backend from here on out.
Being ignorant of how you'd write a modern frontend doesn't change the fact that I like using a good user interface. I have seriously mixed feelings about the importance of the RavenDB Studio to the project. On the one hand, I care that it is easy to use, obvious, and functional. I love that it is beautiful and will generally make your life easier. At the same time, I abhor the fact that it has such an impact on people's decisions. I mean, the backend of RavenDB is absolutely beautiful, from a technical perspective. But everyone always talks about the studio.
Leaving my mini rant aside, we spend quite a lot of time and effort on the studio and the user experience in general. This release is no exception, and we have a couple of major updates to the studio.
One of the most common things you'll do in the studio is run queries. In this release we have done a complete revamp of the automatic code completion for the client-side RQL queries written in the studio. The new code assistance is available when writing any query in the Query view, the Patch view, and the Subscription Query view. That was actually quite interesting from a computer science perspective: we now have a formal grammar for RQL, for example, which means that we can provide a much better experience for query editing. For example, take a look:
Full code completion assistance and better error handling directly in the studio make it easier to work with RavenDB for both developers and operations.
The second feature is the Identities page:
Identities have been a feature in RavenDB for a long time, but somehow they were never front and center. Maybe the discoverability of the feature suffered? You can now create, edit, and modify identities directly in the studio, not just through the API.
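For reference, this is roughly what the API side looks like (a sketch; the Company class is hypothetical):

// storing with a pipe suffix asks the server for the next identity value,
// producing ids like companies/1, companies/2, ...
using var session = store.OpenAsyncSession();
await session.StoreAsync(new Company { Name = "Hibernating Rhinos" }, "companies|");
await session.SaveChangesAsync();

// the identity value itself can also be set through an operation,
// which is what the new page in the studio surfaces
store.Maintenance.Send(new SeedIdentityForOperation("companies", 42));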
Most of the time, you'll communicate with RavenDB over HTTP, making REST calls. When you do that, you can take advantage of response compression: if the client indicates that it can accept compressed data by sending an Accept-Encoding: gzip header, RavenDB will send the data to you compressed. Given that we are working with JSON text, which compresses very well, we are looking at pretty significant savings in network bandwidth. This has been the case for RavenDB for many years (I didn't check, but at least a decade, I believe).
There are certain cases, however, where RavenDB uses a binary protocol instead of HTTP. Those are usually scenarios where we are communicating directly with another RavenDB instance. All internal communication between RavenDB nodes uses direct TCP connections, and when using Subscriptions, the client opens a TCP connection to the server and uses it on a long-term basis.
One of the fallacies of distributed computing is that bandwidth is infinite. One of the realities of cloud computing, on the other hand, is that you pay for bandwidth. Even when you are running inside the same cloud region, cross-availability-zone traffic is still charged. As you can imagine, on active systems, you may notice that you are spending a lot of bandwidth on intra-cluster communication.
With RavenDB 5.3, we have added compression support for the replication and subscription connections. That means that replication and subscriptions will compress their data by default. We are using the Zstd algorithm, which in our tests produced both a higher compression ratio and faster performance than gzip. You don't have to do anything for this to work (although there is a configuration option, Server.Tcp.Compression.Disable, to turn it off if you really want to). When you upgrade to RavenDB 5.3, the cluster will automatically start compressing all traffic.
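If you do want to turn it off, the setting goes into settings.json. A sketch (I'm assuming the usual string form for configuration values):

{
    "Server.Tcp.Compression.Disable": "true"
}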
In our tests, we are seeing an 85% (!) reduction in the amount of network traffic that we send out. That is something that I'm very much looking forward to seeing in our metrics once this is rolled out completely.
This is a RavenDB 5.3 feature (expected mid November) and will be available in the Professional and Enterprise editions of RavenDB.