One of the “minor” changes in RavenDB 7.0 is that we moved from our own in-house logging system to NLog. Looking back through my previous blog posts, I found the post from 2016 outlining the rationale for building our own logging infrastructure. At the time, no other logging framework could sustain the kind of performance we required. The .NET community has come a long way since then, and it became clear that we needed to revisit this decision. Performance is now a much higher priority across the ecosystem, and the APIs at all levels support it (spans, avoiding allocations, etc.).

The move to NLog gives users a much simpler way to integrate RavenDB logs into their monitoring & observability pipeline. You can read about the new NLog feature in our blog.

We also spent time making the logs view inside the RavenDB Studio nicer, taking advantage of the new capabilities we now expose. Hopefully, you won’t need to dig too deeply into the logs, but it is now easier than ever to use them.
Announcing the first preview of the .NET AI Template, for Visual Studio, Visual Studio Code, and the .NET CLI. Get started building amazing AI apps with .NET.
RavenDB 7.0 adds Snowflake integration to the set of ETL targets it supports. Snowflake is a data warehouse solution designed for analytics and data at scale, while RavenDB is aimed at transactional scenarios and has a really good story around data distribution and wide geographical deployments. You can check out the documentation for the details of how to use this integration to push data from RavenDB to your Snowflake database. In this post, I want to introduce one usage scenario for such integration.

RavenDB is commonly deployed on the edge, running on site in grocery stores, restaurants’ self-serve kiosks, supermarket checkout counters, etc. Such environments have to be tough and resilient to errors, network problems, mishandling, and much more. We have had to field support calls in the style of “there is ketchup all over the database”, for example.

In such environments, you must be able to operate completely independently of the cloud, both because of latency and performance issues and because you must keep yourself up & running even if the Internet is down. RavenDB does very well in this scenario because of its internal architecture, the ability to run in a multi-master configuration, and its replication capabilities.

From a business perspective, that is a critical capability: push data to the edge and operate independently of any other resource. At the same time, this represents a significant challenge, since you lose the ability to have an overall view of what is going on. RavenDB’s Snowflake integration is there to bridge this gap. The idea is that you can define Snowflake ETL processes that push the data from all of your branches to a single shared Snowflake account. Your headquarters can then run queries, analyse the data, and in general have near real-time analytics, without hobbling the branches with having to manage the data remotely.

The Grocery Store Scenario

In our grocery store, we manage the store using RavenDB as the backend, with documents such as this one to record sales:

{
"Items": [
{
"ProductId": "P123",
"ProductName": "Milk",
"QuantitySold": 5
},
{
"ProductId": "P456",
"ProductName": "Bread",
"QuantitySold": 2
},
{
"ProductId": "P789",
"ProductName": "Eggs",
"QuantitySold": 10
}
],
"Timestamp": "2025-02-28T12:00:00Z",
"@metadata": {
"@collection": "Orders"
}
}

And this document to record other inventory updates in the store:

{
"ProductId": "P123",
"ProductName": "Milk",
"QuantityDiscarded": 3,
"Reason": "Spoilage",
"Timestamp": "2025-02-28T14:00:00Z",
"@metadata": {
"@collection": "DiscardedInventory"
}
}

These documents are repeated many times for each store, recording the movement of inventory, tracking sales, etc. Now we want to share those details with headquarters. There are two ways to do that. One is to use the Snowflake ETL to push the raw data itself to the HQ’s Snowflake account; you can see an example of that in this article. The other way is to make use of RavenDB’s map/reduce capabilities to do some of the work in each store and push only the summary data to Snowflake. This reduces the load on Snowflake if you don’t need a granular level of data for analytics.

Here is an example of such a map/reduce index:

// Index Name: ProductConsumptionSummary
// Output Collection: ProductsConsumption
map('Orders', function(order) {
    // Fan out: one entry per order line, so sales are summed per product.
    return order.Items.map(function(item) {
        return {
            ProductId: item.ProductId,
            TotalSold: item.QuantitySold,
            TotalDiscarded: 0,
            Date: dateOnly(order.Timestamp)
        };
    });
});
map('DiscardedInventory', function(discard) {
    return {
        ProductId: discard.ProductId,
        TotalSold: 0,
        TotalDiscarded: discard.QuantityDiscarded,
        Date: dateOnly(discard.Timestamp)
    };
});
groupBy(x => ({ ProductId: x.ProductId, Date: x.Date }))
    .aggregate(g => ({
        ProductId: g.key.ProductId,
        Date: g.key.Date,
        TotalSold: g.values.reduce((sum, val) => val.TotalSold + sum, 0),
        TotalDiscarded: g.values.reduce((sum, val) => val.TotalDiscarded + sum, 0)
    }));

This index will output its documents to the artificial collection ProductsConsumption. We can then define a Snowflake ETL task that pushes them to Snowflake, like so:

loadToProductsConsumption({
    PRODUCT_ID: this.ProductId,
    STORE_ID: load('config/store').StoreId, // per-store configuration document, sketched below
    TOTAL_SOLD: this.TotalSold,
    TOTAL_DISCARDED: this.TotalDiscarded,
    DATE: this.Date
});

With that in place, each branch pushes details about its sales, discarded inventory, etc., to the Snowflake account, and headquarters can run their queries and get a near real-time view and understanding of what is going on globally. You can read more about Snowflake ETL here. The full docs, including all the details on how to set things up properly, are here.
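The ETL script above reads the branch identifier from a small per-store configuration document via load('config/store'). That document isn’t shown in this post; a minimal sketch of what it could look like (the StoreId value and the Config collection name are just illustrations) is:

{
    "StoreId": "stores/nyc-001",
    "@metadata": {
        "@collection": "Config"
    }
}

Each branch keeps its own copy of this document, so every row pushed to Snowflake is tagged with the store it came from.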
The Microsoft.Extensions.AI.Evaluations library is now open source, and a new Azure DevOps plug-in is available to make reporting in your CI pipelines easier than ever.
To improve the UX, it can be useful to listen to clipboard changes. For instance, you can fill a 2FA code into a text box when the user copies it to the clipboard. This can be done using the AddClipboardFormatListener function. Here is a simple example of how to listen to clipboard changes in a WPF app.
The big-ticket item for RavenDB 7.0 may be the new vector search and AI integration, but those aren’t the only new features this release brings to the table.

AWS SQS ETL allows you to push data directly from RavenDB into an SQS queue. You can read the full details in our documentation, but the basic idea is that you supply your own logic that pushes data from RavenDB documents to SQS for additional processing.

For example, suppose you need to send an email to the customer when an order is marked as shipped. You write the code that actually sends the email as an AWS Lambda function, like so:

import json
import os
import boto3

# Assumptions for this sketch: send_email() is defined elsewhere in the function's
# code, and the queue URL is supplied through an environment variable.
sqs_client = boto3.client('sqs')
shippedOrdersQueue = os.environ['SHIPPED_ORDERS_QUEUE_URL']

def lambda_handler(event, context):
    for record in event['Records']:
        message_body = json.loads(record['body'])
        email_body = """
Subject: Your Order Has Shipped!
Dear {customer_name},
Great news! Your order #{order_id} has shipped. Here are the details:
Track your package here: {tracking_url}
""".format(
            customer_name=message_body.get('customer_name'),
            order_id=message_body.get('order_id'),
            tracking_url=message_body.get('tracking_url')
        )
        send_email(
            message_body.get('customer_email'),
            email_body
        )
        sqs_client.delete_message(
            QueueUrl=shippedOrdersQueue,
            ReceiptHandle=record['receiptHandle']
        )

You wire that up to the right SQS queue using:

aws lambda create-event-source-mapping \
  --function-name ProcessShippedOrders \
  --event-source-arn arn:aws:sqs:$reg:$acc:ShippedOrders \
  --batch-size 10

The next step is to get the data into the queue. This is where the new AWS SQS ETL inside RavenDB comes into play. You can specify a script that reacts to changes inside your database and sends a message to SQS as a result. Look at the following script, defined on the Orders collection:

if (this.Status !== 'Completed') return;
const customer = load(this.Customer);
loadToShippedOrders({
    'customer_email': customer.Email,
    'customer_name': customer.Name,
    'order_id': id(this),
    'tracking_url': this.Shipping.TrackingUrl
});

And you are… done! We have a full-blown article with all the steps walking you through configuring both RavenDB and AWS, and I encourage you to read it.

Like our previous ETL processes for queues, you can also use RavenDB with the Outbox pattern to gain transactional capabilities on top of the SQS queue. You write the messages you want to reach SQS as part of your normal RavenDB transaction, and the RavenDB SQS ETL will ensure that they reach the queue if (and only if) the transaction was successfully committed.
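To make that concrete, here is a minimal sketch of what the application side might look like, using the RavenDB Node.js client. The server URL, database name, and helper name are assumptions for illustration, not part of the post above:

const { DocumentStore } = require("ravendb");

// Assumed connection details for this sketch.
const store = new DocumentStore("http://localhost:8080", "Shop");
store.initialize();

async function markOrderShipped(orderId, trackingUrl) {
    const session = store.openSession();

    const order = await session.load(orderId);
    order.Status = "Completed";
    order.Shipping = Object.assign({}, order.Shipping, { TrackingUrl: trackingUrl });

    // A single RavenDB transaction: this change (and any other documents stored in
    // the same session) commit atomically. The SQS ETL task only sees the change
    // after a successful commit, so no queue message is sent for a rolled-back
    // transaction.
    await session.saveChanges();
}

The ETL script shown above then picks up the committed document and pushes the corresponding message to the ShippedOrders queue.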
RavenDB 7.0 is out, and the big news is vector search and AI integration in the box. You can download the new bits, run it in the cloud, or just use the public instance to test it out.

Before discussing the actual feature, I want to show you what we have done:

$query = 'I feel like italian today'
from Products
where vector.search(embedding.text(Name), $query)

Try that in the public instance using the sample database; here is what you get back!

I wrote parts of the vector search for RavenDB, and even so, I was pretty amazed when I realized that the query above just works. Note that there is no actual setup to be done here. You just issue a query and ask it to use vector.search() during execution; RavenDB handles everything else.

You can read more about the vector search feature in our documentation, but the gist of it is that you can run a piece of data (text, an image, or even a video) through a large language model and get a vector back. That vector allows you to query using the model’s understanding of the data. The idea is that this moves you beyond querying with keywords or even full-text search: you are now able to search for meaning and intent. You can leverage models such as OpenAI, Ollama, Grok, Mistral, or DeepSeek to give your users deep insight into their data inside RavenDB.

RavenDB embeds a small model (bge-micro-v2) and can apply it during auto-indexing, both for generating embeddings for your data and for queries. As you can see, even with a tiny model, the results are spectacular. Naturally, you can also use larger models, including OpenAI, Ollama, Grok, and more. A larger model has a better contextual understanding of the relationships between the data and the query and can provide more accurate results.

RavenDB’s support for vector search includes:

- Approximate nearest neighbor search using the HNSW algorithm.
- Exact nearest neighbor search.
- Support for vectors supplied as float arrays, base64-encoded strings, or binary attachments.
- A RavenVector type for optimizing disk space and improving the read speed of vectors.
- Using full vectors or providing quantized ones to reduce disk space.
- Support for auto-quantization of vectors during indexing & queries.

Our aim with the new RavenDB 7.0 release is to make it possible for you to find meaning: to leverage vectors, embeddings, large language models, and AI without fuss or trouble. I’m really proud to say that we have exceeded all my expectations.

There are a lot more exciting announcements about RavenDB & AI integration in the pipeline, and I’m excited to share them with you in the near future. There are other features in RavenDB 7.0 as well, but AI is eating the world, so we’ll cover them in a separate blog post instead of giving them a tertiary spot in this one.
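If you want to issue that same query from application code rather than the Studio, here is a minimal sketch using the RavenDB Node.js client. The server URL, database name, and function name are assumptions for illustration:

const { DocumentStore } = require("ravendb");

// Assumed connection details; point this at your own server or the public instance.
const store = new DocumentStore("http://localhost:8080", "SampleDB");
store.initialize();

async function findProducts(intent) {
    const session = store.openSession();

    // Same RQL as above: the embedding for Name is generated by the built-in model
    // during auto-indexing, and the query text is embedded at query time.
    return await session.advanced
        .rawQuery("from Products where vector.search(embedding.text(Name), $query)")
        .addParameter("query", intent)
        .all();
}

findProducts("I feel like italian today")
    .then(products => console.log(products.map(p => p.Name)));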