Relatively General .NET

Negative feature response

by Oren Eini

posted on: October 20, 2021

Following my previous post, which mentioned that you can save significantly on disk space if you store a plain text attachment using gzip, we got a feature request:

Perhaps in future attachments could have built-in compression as well?

The answer to that is no, but I thought that it was worth a post to explain why not. Let's consider the typical types of attachments that you'll store in RavenDB. Based on experience, we usually see:

- PDF files
- Word / Excel / PowerPoint
- Images (JPEG, PNG, GIF, etc.)
- Videos
- Designs (floor plans, CAD / DWG, etc.)
- Text files

Aside from the text files, pretty much all the data you'll store as an attachment is already compressed. In fact, you'll be hard pressed today to find any file format that does not already have built-in compression. Compressing already compressed data is… suboptimal. It will not usually lead to significant space savings and can actually make the file larger. It also burns CPU cycles unnecessarily. It is better to leave this responsibility with the users, since they have a lot more information about what they actually put into RavenDB and won't have to guess.
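To make that concrete, here is a minimal C# sketch (my own illustration, not RavenDB code) that compresses a buffer twice with GZipStream; the second pass yields no savings and typically produces slightly larger output:

using System;
using System.IO;
using System.IO.Compression;

static byte[] Gzip(byte[] input)
{
    using var output = new MemoryStream();
    using (var gzip = new GZipStream(output, CompressionLevel.Optimal))
        gzip.Write(input, 0, input.Length);
    return output.ToArray(); // the compressed bytes are fully flushed once the GZipStream is disposed
}

var text = new byte[1024 * 1024]; // 1 MB of highly compressible data
Array.Fill(text, (byte)'a');

var once = Gzip(text);   // big savings on plain text
var twice = Gzip(once);  // re-compressing the compressed output

Console.WriteLine($"original: {text.Length:N0} bytes");
Console.WriteLine($"gzip x1:  {once.Length:N0} bytes");
Console.WriteLine($"gzip x2:  {twice.Length:N0} bytes"); // not smaller, plus wasted CPU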

When the error is byzantine

by Oren Eini

posted on: October 19, 2021

In distributed systems, the term Byzantine fault tolerance refers to working in an environment where the other nodes in the system are going to violate the invariants held by the system. Sometimes that is because of a bug, sometimes because of a hardware issue, and sometimes it is a malicious action.

A user called us to let us know about a serious issue: they had 100 documents in their database, but the index reported 105 documents indexed. That was… puzzling. None of the avenues of investigation we tried helped us. There wasn't any fanout on the index, for example. That was… strange.

We looked at the index in more detail and noticed something really strange. There were document ids in the index that weren't in the database, and they were all at the end. So we had something like:

users/1 – users/100 – in the index and in the database
users/101 – users/105 – in the index, but not in the database

What was even stranger was that the values for the documents that were already in the index didn't match the values from the documents. Neither did the last indexed etag, which is how RavenDB knows which documents it still needs to index. Overall, this was a really strange situation, and none of it was expected to happen. We asked the user for more details, and it turned out they didn't have many. The error was reported from the field, but as they described their deployment scenario, we were able to figure out what was going on.

In certain cases, their end users want to "reset" the system. The way they do that is to shut down the application and then delete the RavenDB folder. Since all their state is in RavenDB, that brings them back up in a clean state. Everything works, and this is a documented manner in which they are operating. However, the instruction given to the end user is simply "delete the database files", and the actual deletion is done by hand. A RavenDB database is internally composed of a few directories, for data and for indexes. What happens if you delete just the database files but keep the index files? In that case, RavenDB will accept the state as-is (soft delete is a thing, so we have to accept that) and open the index files. Those index files, however, came from a different database, so a lot of invariants are broken. I'm actually surprised that it took this long to discover that this is problematic, to be honest.

We explained the issue, and then we spent some time ensuring that this will never be an invisible error. Instead, we will validate that the index is coming from the right source, or error explicitly. This is one bug that we won't have to hunt ever again.
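The post doesn't show RavenDB's actual fix, but the shape of the guard is straightforward. Here is a minimal sketch, with invented names, of the idea: stamp every index with the identity of the database that created it, and refuse to open anything that doesn't match:

using System;

public class IndexHeader
{
    // written when the index is created, read back on every open
    public Guid SourceDatabaseId { get; set; }
}

public static class IndexLoader
{
    public static void ValidateSource(IndexHeader header, Guid currentDatabaseId)
    {
        if (header.SourceDatabaseId != currentDatabaseId)
            throw new InvalidOperationException(
                $"Index belongs to database {header.SourceDatabaseId}, " +
                $"but this database is {currentDatabaseId}. Refusing to open a foreign index.");
    }
}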

Finding a bug with code that isn’t there

by Oren Eini

posted on: October 18, 2021

A user called us with a strange bug report. He said that the SQL ETL process inside of RavenDB was behaving badly. It would write the data from the RavenDB server to the MySQL database, but then it would immediately delete it. From the MySQL logs, the user showed:

2021-10-12 13:04:18 UTC:20.52.47.2(65396):root@ravendb:[1304]:LOG: execute <unnamed>: INSERT INTO orders ("id") VALUES ('Tab/57708-A')
2021-10-12 13:04:19 UTC:20.52.47.2(65396):root@ravendb:[1304]:LOG: execute <unnamed>: DELETE FROM orders WHERE "id" IN ('Tab/57708-A')

As you can imagine, that isn't an ideal scenario for ETL processes. It is also something that should absolutely not happen. RavenDB issues the delete before the insert, obviously. In fact, for your viewing pleasure, here is the relevant piece of code:

It doesn't get any clearer than that, right? We issue any deletes we have, then we send the inserts. But the user insisted that they were seeing this behavior, and I tend to trust them. We couldn't reproduce the issue, however, and the code in question dates to 2017, so I was pretty certain it was correct. Then I noticed the user's configuration. To define a SQL ETL process in RavenDB, you need to define which tables we'll be writing to, as well as decide what data to send to those tables. Here is what this looked like:

And here is the script:

Do you see the problem? It might be better to use an overload, which makes it clearer: in the table listing, we had an "orders" table, but the script sent the data to the "Orders" table. RavenDB is a case insensitive database, but in this case, the code behind the scenes used a case sensitive dictionary to keep track of the tables we were working with. That meant that what we did was roughly:

for doc in docs:
    for table in tables:
        changes_by_table.delete(table, doc.id)

    results = run_script(doc)
    for result in results:
        changes_by_table.insert(result.table, result.value, doc.id)

for table in changes_by_table:
    for delete in table.deletes:
        # do actual delete...
    for insert in table.inserts:
        # do actual insert...

Basically, the changes_by_table dictionary had two tables in it: one from the table definitions and one from the script. When we validate that the tables from the script are fine, we do that using a case insensitive comparison, so that check passed properly. To make it worse, the order of items in a dictionary is not predictable. If the iteration order had been the other way around, everything would have appeared to work just fine. We fixed the bug, but I found it really interesting that three separate people (all very experienced with the codebase) had a look and couldn't figure out how this bug could happen. It wasn't in the code, it was in what wasn't there.
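The root cause is easy to reproduce in isolation. Here is a hedged C# sketch of the comparer difference (illustrative, not the actual RavenDB code):

using System;
using System.Collections.Generic;

// default comparer: "orders" and "Orders" are two different tables
var caseSensitive = new Dictionary<string, string>
{
    ["orders"] = "from the table definitions",
    ["Orders"] = "from the script",
};
Console.WriteLine(caseSensitive.Count); // 2 - deletes and inserts land in different buckets

// matching the case insensitive validation fixes the mismatch
var caseInsensitive = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
{
    ["orders"] = "from the table definitions",
};
caseInsensitive["Orders"] = "from the script"; // overwrites the same entry
Console.WriteLine(caseInsensitive.Count); // 1 - a single table, as intended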

Finding Duplicate Documents in MongoDB

by Gérald Barré

posted on: October 18, 2021

Recently I needed to create a new unique index on a MongoDB collection. However, there was some duplicate data, so I got the following error:

E11000 duplicate key error collection: db.collection index: index_name dup key: { key: "duplicate value" }

The error message lists the first duplicated value.
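The post is cut off here, but the standard way to locate such duplicates is a $group / $match aggregation. A hedged sketch using the C# driver, where "db", "collection", and the "key" field name are placeholders:

using System;
using MongoDB.Bson;
using MongoDB.Driver;

var client = new MongoClient("mongodb://localhost:27017");
var collection = client.GetDatabase("db").GetCollection<BsonDocument>("collection");

// group on the would-be unique key and keep only groups with more than one document
PipelineDefinition<BsonDocument, BsonDocument> pipeline = new[]
{
    new BsonDocument("$group", new BsonDocument
    {
        { "_id", "$key" },                            // the field the unique index will cover
        { "count", new BsonDocument("$sum", 1) },
        { "ids", new BsonDocument("$push", "$_id") }, // the documents to review or remove
    }),
    new BsonDocument("$match", new BsonDocument("count", new BsonDocument("$gt", 1))),
};

foreach (var duplicate in collection.Aggregate(pipeline).ToList())
    Console.WriteLine(duplicate);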

When you want to store, index and search MBs of text inside of RavenDB

by Oren Eini

posted on: October 15, 2021

A scenario came up from a user that was quite interesting to explore. Let's assume that we want to put the Gutenberg Project inside of RavenDB. An initial attempt at doing that would look like this: I'm skipping a lot of the details, but the most important field here is the Content field. That contains the actual text of the book.

At last count, the size of the book was around 708KB. When storing that as a single field inside of RavenDB, RavenDB will notice that this is a long field and compress it. Here is what this looks like: the 738.55 KB is the size of the actual JSON, the 674.11 KB is the size after a quick compression cycle, and the 312KB is the actual size this takes on disk. RavenDB is actively trying to help us.

But let's take the next step: we want to be able to query, using full text search, on the content of the book. Here is what this will look like:

Everything works, which is great. But what is going on behind the scenes? Even a single text field that is large (100s of KB or many MB) puts a unique strain on RavenDB. We need to manage it as a single unit, it significantly bloats the size of the parent document, and it makes every interaction with the document more expensive. This is interesting because usually we don't actually work all that much with the field in question. In the case of the Pride and Prejudice book, the content is immutable and not really relevant for the day to day work with the document. We are better off moving it elsewhere.

An attachment is a natural way to handle this. We can move the content of the book to an attachment. The text is retained, we can still work with it and process it, but it is sitting on the side, not making our life harder on each interaction with the document. Here is what this looks like; note that the size of the document is now tiny. Operations on a document of that size are much faster than on a multi MB one.

Of course, there is a disadvantage here: how can we index the book's contents now? We still want that. RavenDB supports that scenario explicitly. Let's define an index to do just that:

You can see that I'm loading the content attachment and then accessing its content as a string, using UTF8 as the encoding. I tell RavenDB to use full text search on this field, and I'm off to the races.

We could stop here, but why? We can do even better. When working with large text fields, an index such as the one above forces us to materialize the entire field as a single value. For very large values, that can create a lot of memory pressure. But RavenDB supports more than that. Instead of processing a very large string in one shot, we can do it incrementally, avoiding big value materialization and the memory pressure associated with it. Here is what you can write:

That tells RavenDB to process the field in a streaming fashion. The original post includes a chart comparing the memory used by the materialized value vs. the streaming value. When we are working on a document that has a < 1 MB attachment, it probably doesn't matter all that much (although using 25% of the memory is nice), but it matters a lot more when you are working with larger texts.

We can take this one step further still! Instead of storing the attachment text as-is, we can compress it, like so: And then in the index, we'll decompress on the fly: Note that throughout all of this, the queries you send are exactly the same; we are just using 20% of the disk space and 25% of the memory that we used to.
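The screenshots with the index definitions didn't survive here, so as a sketch, the attachment-based full text index can look roughly like the following. The class and attachment names are my own; LoadAttachment and GetContentAsString follow RavenDB's documented attachment-indexing API, but verify the details against your RavenDB version:

using System.Linq;
using System.Text;
using Raven.Client.Documents.Indexes;

public class Book
{
    public string Id { get; set; }
    public string Title { get; set; }
}

public class Books_ByContent : AbstractIndexCreationTask<Book>
{
    public Books_ByContent()
    {
        // load the book's text from the attachment instead of the document itself
        Map = books => from b in books
                       let attachment = LoadAttachment(b, "content.txt")
                       select new
                       {
                           Content = attachment.GetContentAsString(Encoding.UTF8)
                       };

        // full text search over the attachment's content
        Index("Content", FieldIndexing.Search);
    }
}

The streaming and gzip variants from the post keep the same shape, swapping GetContentAsString for a reader over GetContentAsStream (wrapped in a GZipStream for a compressed attachment); check the RavenDB docs for the exact streaming API.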

A PKI-less secure communication channel

by Oren Eini

posted on: October 12, 2021

After spending so much time building my own protocol, I decided to circle back a bit and go back to TLS itself, to see whether I can get the same thing from it that I built on my own. As a reminder, here is what we achieved: trust is established between nodes in the system via a back channel, not a Public Key Infrastructure (PKI). For example, I can have:

[
  {
    "Name": "orders.app",
    "PublicKey": "3xPJBNRzybdD2XhxCkO9e9L7cVrjAocPc00MwB2eyv8=",
    "Access": ["Orders", "Leads"]
  },
  {
    "Name": "shipping.app",
    "PublicKey": "UYnz8EeyEv9m2zs3w3X9Xifw4fMalv8YfgE/q4fo1Yc=",
    "Access": ["Shipping", "Products", "Customers"]
  }
]

On the client side, I can define something like this:

Server=northwind.database.local:9222;Database=Orders;Server Key=6HvG2FFNFIifEjaAfryurGtr+ucaNgHfSSfgQUi5MHM=;Client Secret Key=daZBu+vbufb6qF+RcfqpXaYwMoVajbzHic4L0ruIrcw=

Can we achieve this using TLS? At first glance, that doesn't seem to be possible. After all, TLS requires certificates. But we don't have to give up just yet. One of the (new) options for certificates is Ed25519, a key pair scheme that uses 256 bit keys. That is also similar to what I used in my previous posts, behind the covers. So the plan is to do the following:

- Generate key pairs using Ed25519, as before.
- Distribute the knowledge of the public keys, as before.
- Generate a certificate using those keys.
- During the TLS handshake, trust only the keys we were explicitly told to trust, disabling any PKI checks.

That sounds reasonable, right? Except that I failed. To be more exact, I couldn't generate a valid X509 certificate from an Ed25519 key pair. Using .NET, you can use the CertificateRequest class to generate certificates, but it only supports RSA and ECDsa keys. Safe sizes for those types are roughly:

- RSA – 4096 bits (2048 bits might also be acceptable) – key size on disk: 2,348 bytes.
- ECDsa – 521 bits – key size on disk: 223 bytes.

The difference between those and the 32 byte key for Ed25519 is pretty big. It isn't much in the grand scheme of things, for sure, but it matters. The key issue (pun intended) is that those keys are large enough to make it awkward to use the values directly. Consider the connection string I listed above. The keys we use there are small enough that we can just write them inline (the simplest and most obvious thing to do). The keys for either of the more commonly used RSA and ECDsa are too big for that.
Here is an ECDsa key, for example:

MIHcAgEBBEIBuF5HGV5342+1zk1/Xus4GjDx+FR
rbOPrC0Q+ou5r5hz/49w9rg4l6cvz0srmlS4/Ysg
H/6xa0PYKnpit02assuGgBwYFK4EEACOhgYkDgYY
ABABaxs8Ur5xcIHKMuIA7oedANhY/UpHc3KX+SKc
K+NIFue8WZ3YRvh1TufrUB27rzgBR6RZrEtv6yuj
2T2PtQa93ygF761r82woUKai7koACQZYzuJaGYbG
dL+DQQApory0agJ140T3kbT4LJPRaUrkaZDZnpLA
oNdMkUIYTG2EYmsjkTg==

And here is an RSA key:

MIIJKAIBAAKCAgEApkGWJc+Ir0Pxpk6affFIrcrRZgI8hL6yjXJyFNORJUrgnQUw
i/6jAZc1UrAp690H5PLZxoq+HdHVN0/fIY5asBnj0QCV6A9LRtd3OgPNWvJtgEKw
GCa0QFofKk/MTjPimUKiVHT+XgZTnTclzBP3aSZdsROUpmHs2h4eS9cRNoEnrC1u
YUzaGK4OeQNLCNi1LyB6I33697+dNLVPoMJgfDnoDBV12KtpB6/pLjigYgIMwFx/
Qyx9DhnREXYst/CLQs8S/dmF+opvghhdhiUUOUwqGA/mIIbwtnhMQFKWCQXEk7km
5hNg/fyv/qwqvTkqQTZkJdj0/syPNhqnZ9RurFPkiOwPzde8I/QwOkEoOXVMboh4
Ji3Y6wwEkWSwY/9rzUK2799lzTmZlvUu2ZxNZfKxQ84vmPUCvP288KXOCU4FxIUX
lujBu7aXUORtQE9oZxBSxqCSqmCEb7jGwR3JOpFlUZymK7W0jbY4rmfZL8vcDYdG
r0msuXD+ggVjYzpHI7EH5MtQXYJZ2aKan5ZpSL/Lb0HsjkDLrsvMi+72FcwXH+5P
Q1E30uxs5y9xOTSqff9T9x6KPAOwIpmrv4Bc3J0NgEgWiKxG9nM1+f8FkKlCRino
rrF9ZrC+/l/vc67xye+Pr1tLvEFT5ARu/nR1JH/Lv/CsAU9y51wOPqD6dQUCAwEA
AQKCAgBJseTWWcnitqFU8J62mM94ieCL8Q3WYZlP7Zz38lfySeCKeZRtWa/zsozm
XEQY0t7+807pHPLs0OhMHlFv1GQKj09Wg4XvWWgqvLOSucC7QZ6cLfNUoUNhCxGp
dbnAKGuXN9wwx7NBBljl5V4Ruf//UgxRw7YuklWk0ZjoUSrGGDX3siOtaZ17Nxwf
NAB8qWKWwzSgquUmEH+kr4HeZorSRfC/+ntEUaa6y5T28g7Vosb4NYgLxJqiN3te
3B0yY6O3N4bZkyQ6TEblSdua7LCsPUCjbdi6LlZg664RDQqIcVATkwzVC14A95Mj
tjkzqzU5ttxpkmP21cHdX6847QcpERgQ7NzAbjrU5UH8aBOsetaZo/1yDr5U13ah
YcAq9XX6tLeAA0rUsnXKAWBQswtWIU0jXBuRRSE7xDXv+82SWEoPqZMSAv77p+uc
AeogN+zzZPPet/AOERKLcGC9WoC/HT7q/H3zFAsRPoKY6qMfLFntdosc0lmRxvHv
b9NXBzKdDuOiUXhdRMhL5Yld8ivvHuwRnPfcZycplSFrA9E5xo/S3RQj+Re9L0yR
8tNzjl+lcgtk8Q0CSJl6eW2Fjja5ZrvDD8qL97+WFqHR7LTTqZ7TmiT7u1MXW1Il
wTuccWCQ85BzxpRbyzPXLdsxMgPCmjicX/23+srOXAk2z42bOQKCAQEAzM1Mocnd
w0uoETHZH0VX29WaKVqUAecGtrj+YNujzmjLy2FPW10njgBfZgkVQETjFxUS9LBZ
xv/p6fCio3NgXh3q7O/kWLxojuR8JB7n4vxoKGBwinwzi1DHp37gzjp/gGdr4mG9
8b7UeFJY8ZPz0EoXcPr3TL+69vOoLieti/Ou9W7HbpDHXYLKclFkJ0d/0AtDNaM7
kCNvI7HgC5JvCCOdGatmbB09kniQjtvE4Wh4vOg/TtH1KoKGXbC8JnjHNRjJtgqU
1mhbq36Eru8iOVME9jyHAkSPqphqeayEUdeP3C1Bc2xmrlxCQALZrAfH37ZWcf44
UuOO5TMnf5HTLwKCAQEAz9F59/xlVDHaaFpHK6ZRTQQWh6AVBDKUG2KDqRFAGQik
6YqQwJFGSo1Z+FjXzidGEHkqH6KyGtSxS6dTgqqfTC96P1rdrBab5vgdXpfSa/0S
Qke2sH3eZ1vWJe95AD7AuVfsN/6IXIBHP5fWjXthGuo6U3vkkNjdjJGNxjfuMuug
SbxjjVV6kZI6gwX2gfTQDKUT+yRjEnqGAyCcFeZXwWGryF1IseOFaNB2ATVKSqn9
oXI7AaI3ZRX3SyfOfyo3TaZEXabS1tfEg4JwIGNpx8WvRxb/X7WZi46be8u0ya4L
BDJ6ZIOBf7lpvaI1Dr3dzCuPqjGQ3V/xPwGy5D8+CwKCAQEAuFVUUw6pjn0LIabX
QQEd6hzgq7X+H5Q8A7yQIMewMTk7rKvCTH6U+oe1VdZ5DSazqvPp4tjThXyTol9X
U3ymUS/mYiotQf0asvpODgjPOAttCGJ9CPhvQEaN3WEioBwg5IaxoMnOt8bF4CJm
MdG0ElaNsMACVE8BzgJS7nACEURcxkVWNVsURkNRSgGd/oipLqzkamOoWby67MrN
2DyNuSqs3QzbnBXZdHsVya9fDm8EtSroyF3Lp95hZ/SJ9KqiylSsQTBW9IBrefjf
HcDY8fWaMrMZ5V2mXarfsvInCq7VqhwFnAkGhos9ifXGy8MZEG9CcUmakmiFFiCr
vXOYOwKCAQADL9Yr/F3dbapIwWGoBLPod3CVAdpwpwnoZZlZRV9zQtOslShlG5U1
XXeMvGgKzEVhyUnhFFCg4rQZUeaQ8Wbh9zRrtkwB8JLRduqUYcWjTE00YP8nM7bu
ZNUi3cpAO7Ye4X9I2Ilkyb7N9dkfcE3r6L2ePB8kLX8wQacn7AGmHEDoAJCSQUZQ
5yooijXehk+OchWdW1B9nw1hDOX33AFqgMHun6eWusN3+QJmQFf0TykJicPn4YHx
9eVF7MVY49/XO/5+ZSmEi+iCj8SCaqPboWdvsqWV5SYGotg1jMkn8phOpyuDURTy
TXiWpN8la7n0AJMCbCIpkugTLEZ/A41DAoIBAAr73RhOZWDi40D6g+Z2KLHMtLdn
xHMEkT0bzRZYlr0WGQpP/GPKJummDHuv/fRq2qXhML7yh7JK8JFxYU94fW2Ya1tx
lYa5xtcboQpBLfDvvvI4T4H1FE4kXeOoO46AtZ6dFZyg3hgKlaJkR+pFPLr5Aeak
w9+6UCK8v72esoKzCMxQzt3L2euYRt4zTKL3NnrgS7i5w56h2UvP1rDo3P0RVoqc
knS1ToamVL2JaPnf/g+gUUVZyya9pyu9RP8MIcd1cvnxZec8JaN89WWnsA2JJbPw
stYBnWMvLFabPtPXVcsLrWMEmLFI2yn+fU4YTviwRSs/SrprXDdsqZO2xd8=
Note that in both cases, we are looking at the private key only. As you can imagine, this isn't really viable. We would need to store the key separately, load it from a file, etc.

I tried generating Ed25519 keys using the built-in .NET API as well as the Bouncy Castle one. Bouncy Castle is a well known and very useful cryptographic library, and it supports Ed25519. I spent quite some time trying to get it to work. You can see the code here. Unfortunately, while I'm able to generate a certificate, it doesn't appear to be valid. Here is what this looks like:

Using RSA, however, did generate viable certificates, and didn't take a lot of code at all:

private static X509Certificate2 GenerateCertificate(string keyFile, string name)
{
    var rsa = RSA.Create(4096);
    var key = File.ReadAllBytes(keyFile);
    rsa.ImportRSAPrivateKey(key, out _);
    var req = new CertificateRequest("cn=" + name, rsa,
        HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
    var serverCertificate = req.CreateSelfSigned(
        DateTime.Today.AddDays(-1), DateTime.Today.AddYears(3));
    // yuck: https://github.com/dotnet/runtime/issues/23749
    return new X509Certificate2(
        serverCertificate.Export(X509ContentType.Pkcs12, (string)null),
        (string)null);
}

We store the actual key in a file, and we generate a self signed certificate on the fly. Great. I did try the ECDsa option, which generates a much smaller key, but I ran into severe issues there. I could generate the key, but I couldn't use the certificate; I ran into a host of permission issues, somehow. You can try to dig out more details from this issue; what I took away from it is that in order to use ECDsa on Windows, I would need to jump through hoops, and I don't know if Ed25519 would even work, or how to make it work. As an aside, I posted the code to generate the Ed25519 certificates; if you can show me how to make it work, that would be great.

So we are left with using RSA, with the largest practical key. That isn't fun, but we can make it work. Let's take a look at the connection string again. What if we change it so it looks like this?

Server=northwind.database.local:9222;Database=Orders;Server Key Hash=6HvG2FFNFIifEjaAfryurGtr+ucaNgHfSSfgQUi5MHM=;Client Key=client.key

I marked the pieces that changed. The key observation here is that I don't need to hold the actual public key, I just need to be able to recognize it. That I can do by simply storing the SHA256 hash of the public key, which ensures that I always deal with the same length, regardless of which key type I'm using. For that matter, I think this is something I want to do regardless, because if I do manage to fix the other key types, I can still use the same approach. All SHA256 hashes have the same length, obviously.

After all of that, what do we have? We generate a key pair and store it, and we let the other side know the hash of the public key as the identifier. Then we dynamically generate a certificate from the stored key, say, once per startup. That certificate is going to be different on each run, but we don't actually care; we can safely authenticate the other side using the (persistent) key pair by validating the public key hash.
Here is what this will look like in code, from the client perspective:

var clientCert = GenerateCertificate("client.key", "client");
// the expected server public key hash
var certCertPublicKeyHash = "6HvG2FFNFIifEjaAfryurGtr+ucaNgHfSSfgQUi5MHM=";
var tcpClient = new TcpClient();
tcpClient.Connect(IPAddress.Loopback, 9222);
var clientSsl = new SslStream(tcpClient.GetStream(), false, (sender, certificate, chain, errors) =>
{
    // trust the server by its public key hash, not by any certificate chain
    var publicKeyHash = SHA256.HashData(certificate.GetPublicKey());
    return certCertPublicKeyHash == Convert.ToBase64String(publicKeyHash);
});
// the target host name is ignored here; we already validated the public key hash above
clientSsl.AuthenticateAsClient("foobar", new X509Certificate2Collection(clientCert), SslProtocols.Tls12, false);
var reader = new StreamReader(clientSsl);
Console.WriteLine(reader.ReadLine());

And here is what the server is doing:

private static async Task<Stream> HandleConnection(
    TcpClient client,
    X509Certificate2 serverCertificate,
    HashSet<string> allowedClientKeys)
{
    // accept any client certificate at the TLS level; authorization happens below, by key hash
    var serverSsl = new SslStream(client.GetStream(), false, (_, _, _, _) => true);
    await serverSsl.AuthenticateAsServerAsync(serverCertificate,
        clientCertificateRequired: true, SslProtocols.Tls12, false);
    var clientPublicKeyHash = Convert.ToBase64String(
        SHA256.HashData(serverSsl.RemoteCertificate?.GetPublicKey() ?? Array.Empty<byte>()));
    // leaveOpen, so disposing the writer does not close the stream we hand back
    await using var writer = new StreamWriter(serverSsl, Encoding.UTF8, bufferSize: 1024, leaveOpen: true);
    if (allowedClientKeys.Contains(clientPublicKeyHash) == false)
    {
        // failed to authenticate, send a proper error message
        await using var _ = serverSsl;
        await writer.WriteLineAsync("ERROR Client is not authorized: " + clientPublicKeyHash);
        await writer.FlushAsync();
        return null; // client failure
    }
    await writer.WriteLineAsync("OK");
    return serverSsl;
}

As you can see, this is very similar to what I ended up with in my own secured protocol, but it utilizes TLS and all the weight behind it to achieve the same goal. A really important aspect is that we can connect to the server using something like openssl s_client -connect, which is really nice for debugging purposes. However, the weight of TLS is also an issue. I failed to create Ed25519 certificates, which was my original goal. I couldn't get ECDsa certificates to work, and I had to use RSA ones with the biggest keys. It is obvious that many of those issues exist because we are running on a particular operating system, which means that this protocol is still subject to the whims of the environment. I have also not done everything that is required to ensure that there will not be any remote calls as part of the TLS handshake; that can actually be quite complex to ensure, to be honest.
Given that these are self signed (and pretty bare-bones) certificates, there shouldn't be any, but you know what they say about assumptions. The end goal is that we are now able to get roughly the same experience using TLS as the underlying communication mechanism, without dealing with certificates directly. We can use standard tooling to access the server, which is great. Note that this doesn't address something like browser access, which will not be trusted, obviously. For that, we have to go back to Let's Encrypt or some other trusted CA, and we are back in PKI land.
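As a closing usage sketch, here is roughly how the server side might host HandleConnection from above. This is my own illustration, not from the original post; the key file name, port, and the allowed client key hash are placeholder assumptions:

using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;

var serverCert = GenerateCertificate("server.key", "server");
var allowedClientKeys = new HashSet<string>
{
    "<base64 SHA256 hash of a trusted client public key>",
};

var listener = new TcpListener(IPAddress.Any, 9222);
listener.Start();
while (true)
{
    var client = await listener.AcceptTcpClientAsync();
    var stream = await HandleConnection(client, serverCert, allowedClientKeys);
    if (stream == null)
        continue; // rejected: HandleConnection already reported the error and closed the stream
    // ... hand the authenticated stream over to the application protocol ...
}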

Design Patterns Overview

by Ardalis

posted on: October 12, 2021

Design patterns provide reusable approaches to common problems and allow for higher level discussions of software design. Learn the basics… Keep Reading →

Performance architecture talk

by Oren Eini

posted on: October 11, 2021

I spoke at the Dotnetos conference about how to design your system for high performance, using RavenDB's story as the backdrop. I think it went great.