Relatively General .NET

Recording

by Oren Eini

posted on: March 02, 2022

Dejan is talking about RavenDB here:

Using the .NET JIT to reduce abstraction overhead

by Oren Eini

posted on: March 01, 2022

I ran into this recently and I thought that this technique would make a great post. We use it extensively inside RavenDB to reduce the overhead of abstractions without limiting our capabilities. It is probably best to start with an example. We need to perform some action, which has to be specialized by the caller. For example, let's imagine that we want to aggregate the results of calling multiple services for a certain task. Consider the following code:

OrchestratorV1.cs:

    using System.Buffers;
    using System.Threading.Tasks;
    using System.Net.Http;

    public class Orchestrator
    {
        string[] urls;
        HttpClient client = new();

        public Orchestrator(string[] urls)
        {
            this.urls = urls;
        }

        private static readonly Task<string> CompletedTask = Task.FromResult(string.Empty);

        public async Task<T> Execute<T>(Func<string, HttpRequestMessage> factory, Func<Memory<string>, T> combine)
        {
            var tasks = ArrayPool<Task<string>>.Shared.Rent(urls.Length);
            for (int i = 0; i < urls.Length; i++)
            {
                tasks[i] = client.SendAsync(factory(urls[i]))
                    .ContinueWith(t => t.Result.Content.ReadAsStringAsync())
                    .Unwrap();
            }
            for (int i = urls.Length; i < tasks.Length; i++)
            {
                tasks[i] = CompletedTask;
            }
            await Task.WhenAll(tasks);
            var results = ArrayPool<string>.Shared.Rent(urls.Length);
            for (int i = 0; i < urls.Length; i++)
            {
                results[i] = tasks[i].Result;
            }
            ArrayPool<Task<string>>.Shared.Return(tasks);
            var result = combine(new Memory<string>(results, 0, urls.Length));
            ArrayPool<string>.Shared.Return(results);
            return result;
        }
    }

As you can see, the code above sends a single request to multiple locations and aggregates the results. The point is that we can separate the creation of the request (and all that this entails) from the actual logic for aggregating the results. Here is a typical usage for this sort of code:

Usage.cs:

    var urls = new[] { "https://google.com", "https://bing.com" };
    var orch = new Orchestrator(urls);
    var term = "fun";
    var t = await orch.Execute<string>(
        url => new HttpRequestMessage(HttpMethod.Get, url + "/?q=" + term),
        results =>
        {
            var sb = new StringBuilder();
            var span = results.Span;
            for (var i = 0; i < span.Length; i++)
            {
                sb.AppendLine(span[i]);
            }
            return sb.ToString();
        });

You can see that the code is fairly simple, and uses lambdas for injecting the specialized behavior into the process. That leads to a bunch of problems:

- Delegate / lambda invocation is more expensive than a direct call.
- Lambdas need to be allocated.
- They capture state (and may capture more state, for a lot longer, than you would expect).

In short, when I look at this, I see performance issues down the road. But it turns out that I can write very similar code, without any of those issues, like this:

OrchestratorV2.cs:

    using System.Buffers;
    using System.Threading.Tasks;
    using System.Net.Http;

    public interface IMergedOperation<T>
    {
        T Combine(Memory<string> results);
        HttpRequestMessage Create(string url);
    }

    public class Orchestrator
    {
        string[] urls;
        HttpClient client = new();

        public Orchestrator(string[] urls)
        {
            this.urls = urls;
        }

        private static readonly Task<string> CompletedTask = Task.FromResult(string.Empty);

        public async Task<TResult> Execute<TMerger, TResult>(TMerger merger)
            where TMerger : struct, IMergedOperation<TResult>
        {
            var tasks = ArrayPool<Task<string>>.Shared.Rent(urls.Length);
            for (int i = 0; i < urls.Length; i++)
            {
                tasks[i] = client.SendAsync(merger.Create(urls[i]))
                    .ContinueWith(t => t.Result.Content.ReadAsStringAsync())
                    .Unwrap();
            }
            for (int i = urls.Length; i < tasks.Length; i++)
            {
                tasks[i] = CompletedTask;
            }
            await Task.WhenAll(tasks);
            var results = ArrayPool<string>.Shared.Rent(urls.Length);
            for (int i = 0; i < urls.Length; i++)
            {
                results[i] = tasks[i].Result;
            }
            ArrayPool<Task<string>>.Shared.Return(tasks);
            var result = merger.Combine(new Memory<string>(results, 0, urls.Length));
            ArrayPool<string>.Shared.Return(results);
            return result;
        }
    }

Here, instead of passing lambdas, we pass an interface. That has exactly the same cost as a lambda, in fact. However, in this case we also specify that this interface must be implemented by a struct (value type). That leads to really interesting behavior: at JIT time, the system knows that there is no abstraction here, so it can apply optimizations such as inlining or calling the method directly (with no abstraction overhead). It also means that any state we capture is captured explicitly (and we won't drag along state captured by other lambdas in the same method). We still have a good separation between the process we run and the way we specialize it, but without any runtime overhead. The code itself is a bit more verbose, but not too onerous.
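For completeness, here is what calling the struct-based version might look like. This is my own sketch (the ConcatSearchMerger name and its Term field are not from the post); the point is that the search term the lambda version captured implicitly is now an explicit field on the struct:

    using System.Net.Http;
    using System.Text;

    // Hypothetical caller-side code for OrchestratorV2: the specialization lives
    // in a struct, so the JIT can generate a specialized Execute for it.
    public struct ConcatSearchMerger : IMergedOperation<string>
    {
        public string Term;

        public HttpRequestMessage Create(string url) =>
            new HttpRequestMessage(HttpMethod.Get, url + "/?q=" + Term);

        public string Combine(Memory<string> results)
        {
            var sb = new StringBuilder();
            var span = results.Span;
            for (var i = 0; i < span.Length; i++)
            {
                sb.AppendLine(span[i]);
            }
            return sb.ToString();
        }
    }

    // Both generic arguments are spelled out because TResult cannot be inferred:
    var result = await orch.Execute<ConcatSearchMerger, string>(
        new ConcatSearchMerger { Term = "fun" });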

Performance optimizations in production

by Oren Eini

posted on: February 28, 2022

Date range queries can be quite expensive for RavenDB. Consider the following query:

    from index 'Users/Search'
    where search(DisplayName, "Oren")
      and CreationDate between "2008-10-13T07:18:01.623" and "2018-10-13T07:18:01.623"
    include timings()

The root issue is that we have a compound query here: we use full text search on the left, but then need to match it against the date range on the right. The way Lucene works, we have to compute the set of all the documents that match the date range. If we have a lot of documents in that range, we have to scan through a lot of values. We spent a lot of time and effort optimizing date queries in RavenDB. Such issues also heavily impacted the design of our next-gen indexing capabilities (but more on that when it matures enough to discuss). One of the primary design principles of RavenDB is that it learns from previous usage, and we realized that date ranges in queries are likely to repeat often. So we take advantage of that. The details are a bit complex and require that you understand how Lucene stores its data in immutable segments. We are able to analyze queries on repeating date ranges and remember them, so the next time we see the same type of date range, we already have the set of matching documents ready. That feature was deployed to address a specific customer scenario, where they run a lot of wide date range queries, and it had a big impact there. Last week we ran into some funny metrics for a completely different customer, with a very different scenario. You can probably tell at what point they moved to the updated version of RavenDB and were able to take advantage of this feature. The really nice thing about this, from my perspective, is that none of us even considered the impact that feature would have for this scenario. They upgraded to the latest version to get access to the new features, and this is just sitting in the background, pushing their CPU utilization to near zero. That's the kind of news that I love to get.
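To make the idea concrete, here is a minimal sketch of that kind of caching. This is my own illustration, not RavenDB's actual code, and the names and shapes are hypothetical. The property it leans on is that Lucene segments are immutable, so a cached match set for a (segment, date range) pair never goes stale:

    using System;
    using System.Collections;
    using System.Collections.Concurrent;

    // Cache the set of matching documents per (immutable segment, date range),
    // so a repeating range pays the scan cost only once per segment.
    public class DateRangeMatchCache
    {
        private readonly ConcurrentDictionary<(string SegmentId, DateTime From, DateTime To), BitArray> _cache = new();

        public BitArray GetMatches(string segmentId, DateTime from, DateTime to,
                                   Func<DateTime, DateTime, BitArray> scanSegment)
        {
            // The cached result is simply discarded when the segment is merged away.
            return _cache.GetOrAdd((segmentId, from, to), _ => scanSegment(from, to));
        }
    }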

Executing GitHub Actions jobs or steps only when specific files change

by Gérald Barré

posted on: February 28, 2022

Sometimes you want to execute a step or a job only when some files are modified. For instance, you may want to lint and deploy the documentation only when a file under docs is modified. This scenario is not directly supported by GitHub Actions. However, you can do it manually by using git, PowerShell…

RavenDB

by Oren Eini

posted on: February 25, 2022

Dejan Miličić is talking with André Baltieri about data modeling and data persistence.

Badly implementing encryption

by Oren Eini

posted on: February 24, 2022

This series has been going on quite a bit longer than I intended it to. Barring any new developments or questions, I think that this will be the last post I'll write on the topic. In a previous post, I implemented authenticated encryption, ensuring that we'll be able to clearly detect if the ciphertext we got has been modified along the way. That is relevant because we have to think about malicious actors, but we also have to consider things like bit flips, random hardware errors, etc. In most crypto systems, we also have to pass some metadata about the encrypted messages. Right now, this is strictly outside our scope, but it turns out that there is a compelling reason to consider that plain text data as well. For example, let's say that I'm sending a number of messages. I have to include each message's length and its position in the set of messages I sent in the clear. Otherwise, the receiver might not be able to make sense of them. When we need to decrypt a message, we want to include that additional (unencrypted) information as well. The reason is simple: we want to ensure that the other data hasn't been modified, using the same cryptographic tools we already have. It turns out that this is quite simple, check out the code:

aead.zig:

    pub fn encryptSivWithNonce(key: [32]u8, nonceKey: []u8, additionalData: []u8, plainText: []const u8, cipherText: []u8) void {
        var nonce: [HmacMd5.mac_length]u8 = undefined;
        HmacMd5.create(&nonce, plainText, nonceKey);
        var mca = MyCryptoAlgo.initWithNonce(key, nonce);
        mca.encryptBlock(plainText, cipherText);
        // additional data hashing
        mca.macGen.update(additionalData);
        mca.finalize(cipherText[plainText.len..]);
    }

    pub fn decrypt(key: [32]u8, additionalData: []u8, cipherText: []u8, plainText: []u8) !void {
        if (cipherText.len < ExtraLen)
            return error.CihpherTextBufferTooSmall;
        if (cipherText.len - ExtraLen > plainText.len)
            return error.PlainTextBufferTooSmall;
        var nonce: [HmacMd5.mac_length]u8 = undefined;
        std.mem.copy(u8, &nonce, cipherText[cipherText.len - ExtraLen .. cipherText.len - NonceLen]);
        var dec = MyCryptoAlgo.initWithNonce(key, nonce);
        var msgText = cipherText[0 .. cipherText.len - ExtraLen];
        var expectedMac: [16]u8 = undefined;
        std.mem.copy(u8, &expectedMac, cipherText[cipherText.len - NonceLen .. cipherText.len]);
        var myMac: [16]u8 = undefined;
        dec.macGen.update(msgText);
        dec.macGen.update(additionalData);
        dec.macGen.final(&myMac);
        if (std.crypto.utils.timingSafeEql([16]u8, expectedMac, myMac) == false) {
            return error.InvalidMac;
        }
        dec.xorWithKeyStream(msgText, plainText);
    }

We added a parameter to the encryptSivWithNonce() and decrypt() functions that carries the buffer of all the associated data for this message. All we need to do is add that to the MAC computation as well. On decrypt(), we do the exact same thing: we compute the hash from the encrypted text and the additional data and abort if they aren't an exact match. And with this in place, we have implemented a (probably very bad) encryption system from a single primitive (MD5) and brought it to roughly modern standards of AEAD (Authenticated Encryption with Additional Data).
I want to emphasize that this entire series is meant primarily to go over the details of how you build and use an encryption system, not to actually build a real one. I didn't do any analysis of how secure such a system would be, and I wouldn't trust it with anything beyond toying around. If you have any references on similar systems, I would be very happy to learn about them; I doubt that I'm the first person who has tried to build a stream cipher from MD5, after all.
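As a point of comparison (my own aside, not part of the series' code), this is what the same "authenticated encryption with additional data" shape looks like with a vetted .NET primitive, AES-GCM. The additional data travels in the clear but is covered by the authentication tag, exactly like the extra MAC update above:

    using System.Security.Cryptography;
    using System.Text;

    byte[] key = RandomNumberGenerator.GetBytes(32);
    byte[] nonce = RandomNumberGenerator.GetBytes(12);                    // AES-GCM nonce size
    byte[] plaintext = Encoding.UTF8.GetBytes("attack at dawn!");
    byte[] additionalData = Encoding.UTF8.GetBytes("msg 3 of 7, len 15"); // sent in the clear
    byte[] ciphertext = new byte[plaintext.Length];
    byte[] tag = new byte[16];

    using var aes = new AesGcm(key);
    aes.Encrypt(nonce, plaintext, ciphertext, tag, additionalData);

    // Decrypt throws a CryptographicException if the ciphertext, the tag,
    // or the additional data were modified in any way.
    byte[] decrypted = new byte[plaintext.Length];
    aes.Decrypt(nonce, ciphertext, tag, decrypted, additionalData);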

Badly implementing encryption

by Oren Eini

posted on: February 23, 2022

I mentioned in a previous post that nonce reuse is a serious problem. It is enough to reuse a nonce even once and you can have catastrophic results on your hands. The problem occurs, interestingly enough, when we are able to capture two separate messages generated with the same key & nonce. If we capture the same message twice, that is not an issue (XOR the two values and you get all zeroes). The question is whether there is something that can be done about this. The answer is yes, we can create a construction that is safe from nonce reuse. This is called SIV mode (Synthetic IV). The way to do that is to make the nonce itself depend on the value that we are encrypting. Take a look at the following code:

encryptSiv.zig:

    pub fn encryptSiv(key: [32]u8, plainText: []const u8, cipherText: []u8) void {
        var nonceKey: [HmacMd5.mac_length]u8 = undefined;
        std.crypto.random.bytes(&nonceKey);
        encryptSivWithNonce(key, &nonceKey, plainText, cipherText);
    }

    pub fn encryptSivWithNonce(key: [32]u8, nonceKey: []u8, plainText: []const u8, cipherText: []u8) void {
        var nonce: [HmacMd5.mac_length]u8 = undefined;
        HmacMd5.create(&nonce, plainText, nonceKey);
        var mca = MyCryptoAlgo.initWithNonce(key, nonce);
        mca.encryptBlock(plainText, cipherText);
        mca.finalize(cipherText[plainText.len..]);
    }

The idea is that we get a nonce, as usual, but instead of relying on it directly, we compute a hash of the plain text using the nonce as a key. Aside from that, the rest of the system behaves as usual. In particular, there are no changes to the decryption. Let's see what the results are, shall we? With a separate nonce per use:

    attack at dawn! 838E1CE1A64D97E114237DE161A544DA5030FC5ECAB1C20D34AF838634D1C591AE208FC0AEE706690669E9F56F45C1
    attack at dusk! EEA7DE8A51A06FE6CA9374CDDEC1053249F8B1F0BF1995A0EEEE7D6EBF68868ECAE7CBEFD6EE23017480ACD494D634

Now, what happens when we use the same nonce?

    attack at dusk! 0442EFA977919327C92B47C7F6A0CD617AE4FD3138DF07D45994EBC2C4B709ACDE1130422924B7206354D03569FDAA
    attack at dawn! 324A996C22F7FFDE62596C0E9EE37D7EE1F89569A10A1188BA4A03EE7B8C47DF347A20D1B73EB4523D3511F2F46FF2

As you can see, the values are completely different, even though we used the same key and nonce and the plain text is mostly the same. Because we generate the nonce we actually use from a hash of the input, reusing the nonce with different data results in a wildly different effective nonce. We are safe from the catastrophe that is nonce reuse. With SIV mode, we pay for this with an additional hashing pass over the plain text data. In return, we get the following guarantees:

- If the nonce isn't reused, we have nothing to worry about.
- If the nonce is reused, an attacker will not be able to learn anything about the content of the messages.
- However, if the nonce is reused, an attacker may be able to detect that the same message is being sent twice.

Given the cost of nonce reuse without SIV, it is gratifying that the worst case scenario for SIV with nonce reuse is that an adversary can detect duplicate messages. I'm not sure how up to date this is, but this report shows that SIV adds about 1.5 – 2.2 cycles to the overall cost of encryption.
Note that this figure is for actual viable implementations, not for what I'm doing here, which is, by design, not a good idea.
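For illustration only (my own sketch, not from the post, and not a vetted construction), the core SIV trick translates to a few lines of C#: derive the effective nonce from the plain text with a keyed hash, so reusing the nonce key with a different message still produces a different nonce on the wire.

    using System.Security.Cryptography;

    static byte[] DeriveSivNonce(byte[] nonceKey, byte[] plainText)
    {
        // Keyed hash of the plain text; the nonce key stays secret, so the
        // derived nonce leaks nothing about the message content.
        using var hmac = new HMACSHA256(nonceKey);
        byte[] hash = hmac.ComputeHash(plainText);
        return hash[..12]; // truncate to a 96-bit nonce, e.g. for AES-GCM
    }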

Badly implementing encryption

by Oren Eini

posted on: February 22, 2022

In the previous post, I showed some code that compared two MAC values (binary buffers), and I mentioned that the manner in which I did that was bad. Here is the code in question:

options.zig:

    // bad code
    if (std.mem.eql(u8, mac, &myMac) == false) {
        return error.InvalidMac;
    }

    // good code
    if (std.crypto.utils.timingSafeEql(u8, mac, &myMac) == false) {
        return error.InvalidMac;
    }

When you are looking at code that is used in a cryptographic context, you should be aware that any call that compares buffers (or strings) must not short circuit. What do I mean by that? Let's look at the implementation of those two functions:

impl.zig:

    pub fn timingSafeEql(a: []u8, b: []u8) bool {
        var acc: u8 = 0;
        for (a) |x, i| {
            acc |= x ^ b[i];
        }
        return acc == 0;
    }

    pub fn eql(a: []const u8, b: []const u8) bool {
        if (a.len != b.len) return false;
        if (a.ptr == b.ptr) return true;
        for (a) |item, index| {
            if (b[index] != item) return false;
        }
        return true;
    }

Those two functions do the same thing, but in a very different manner. The issue with eql() is that it will stop at the first mismatched byte, while timingSafeEql() always scans through both buffers in full and only then returns the result. Why do we need that? Well, the issue is that if I can time the duration of a call like that (and you can, even over the network), I'll be able to test various values until I match whatever secret value the code is comparing against. In this case, I don't believe that the use of eql() is an actual problem. We used it on the output of the HMAC operation vs. the expected value. The caller has no way to control the HMAC computation and already knows what we are comparing against, so I can't think of any way this would be exploitable. However, I'm not a cryptographer, and any buffer comparison in crypto-related code should use a constant-time method. For that matter, side channels are a huge worry in cryptography. AES, for example, is nearly impossible to implement safely in software at this point, because it is vulnerable to timing side channels and requires hardware support. Other side channels include watching caches, power signatures and more. I don't actually have much to say about this, except that when working with cryptography, even something as simple as multiplication is suspect, because it may not complete in constant time. As a good example of the problem, see this page.
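As a .NET-flavored aside (mine, not the post's): the framework ships the same kind of constant-time comparison as timingSafeEql(), namely CryptographicOperations.FixedTimeEquals(). A small self-contained sketch of verifying a MAC with it:

    using System.Security.Cryptography;
    using System.Text;

    byte[] key = RandomNumberGenerator.GetBytes(32);
    byte[] message = Encoding.UTF8.GetBytes("attack at dawn!");

    byte[] expectedMac = HMACSHA256.HashData(key, message); // what the sender attached
    byte[] computedMac = HMACSHA256.HashData(key, message); // what the receiver recomputes

    // Scans both buffers to the end regardless of where they first differ.
    if (!CryptographicOperations.FixedTimeEquals(expectedMac, computedMac))
        throw new CryptographicException("Invalid MAC");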

Badly implementing encryption

by Oren Eini

posted on: February 21, 2022

In the previous post I showed how we can mess around with the encrypted text, resulting in a valid (but not the original) plain text. We can use that for many nefarious purposes, as you can imagine. Luckily, there is a straightforward solution for this issue. We can implement a MAC (message authentication code) to ensure that the encrypted data wasn't tampered with. That is pretty easy to do, actually, since we already have HMAC, which is meant for exactly this purpose. The interesting question is: what shall we sign? Here are the options:

- Sign the plain text of the message, using a keyed hash function (HMAC-MD5, in our case), and append the signature to the output of the encryption as plain text. Because we are using the secret key to compute the hash, just looking at the hashed value will not tell us anything about the plain text (for example, if we were using plain MD5, we could use rainbow tables to figure out what the plain text was from the hash), so there is no security issue with making the signature public. At least, I don't think there is. I'll remind you again that I'm not a cryptographer by trade.
- Sign the plain text of the message (using a keyed hash function or a regular cryptographic hash function) and append the hash (encrypted) to the output message.
- Sign the encrypted value of the message, and append that hash to the output message.

A visualization might make it easier to understand. If you want to read more, there is a great presentation here. The first two options are bad. Using those methods will leak your data in various ways. There is something that is apparently called the Cryptographic Doom Principle, which is very relevant here. The idea is simple: we don't trust the encrypted value, since it may have been modified by an adversary. The first two options require us to first take an action (decrypting the data) before we authenticate the message, and an attacker can then use various tricks to rip apart the whole scheme. That leaves the third option, which means that the very first thing we do is verify that the encrypted text we were handed was indeed signed by a trusted party (one that has the secret key). If you look closely at the image above, you can see that I'm using two keys here, instead of one: key1 and key2. What is that about? In cryptography, there is a strong reluctance to reuse the same key in different contexts. The issue is that if we use a single key in multiple scenarios (such as encryption and authentication), a weakness in one of them can be exploited in the other. Remember, cryptography is just math, and the fear is that given two values that were computed with the same key, but using different algorithms, you can do something with that. That has led to practical attacks in the past, so the general practice is to avoid reusing keys. The good thing is that given a single cryptographic key, it is easy to generate new keys using a key derivation function. I'm still going to limit myself to HMAC-MD5 only (remember, none of this code is meant to actually be used), so I can derive a new key from an existing one using the following mechanism:
deriveKey.zig:

    fn deriveKey(key: *const [32]u8, nonce: []const u8, domain: []const u8, newKey: *[32]u8) void {
        var buf: [4]u8 = undefined;
        std.mem.writeIntNative(u32, &buf, 1);
        var mac: [16]u8 = undefined;
        var hmacMd5 = HmacMd5.init(key);
        hmacMd5.update(domain);
        hmacMd5.update(nonce);
        hmacMd5.update(&buf);
        hmacMd5.final(&mac);
        std.mem.copy(u8, newKey, &mac);

        std.mem.writeIntNative(u32, &buf, 2);
        hmacMd5 = HmacMd5.init(key);
        hmacMd5.update(nonce);
        hmacMd5.update(domain);
        hmacMd5.update(&buf);
        hmacMd5.final(&mac);
        std.mem.copy(u8, newKey[16..], &mac);
    }

The idea is that we use the HMAC and the static domain string we get to generate the new key. In this case, we actually use it twice, with the nonce being used to inject even more entropy into the mix. Since HMAC-MD5 outputs 16 bytes and I need a 32-byte key, I'm doing that twice, with a different counter value each time. I also swap the order of the (nonce, domain) and (domain, nonce) fields between the two hashing passes to make it more interesting. A reminder: I didn't spend any time trying to figure out what kind of security this sort of system brings. It looks very much like what Sodium does for key derivation, but I wouldn't trust it with anything. (For a comparison with a standard key derivation function in .NET, see the short aside at the end of this post.) With that in place, here is the new code for encryption:

badEncryption.zig:

    const HmacMd5 = std.crypto.auth.hmac.HmacMd5;

    const MyCryptoAlgo = struct {
        state: [HmacMd5.mac_length]u8 = undefined,
        nonce: [HmacMd5.mac_length]u8 = undefined,
        keyEnc: [HmacMd5.key_length]u8 = undefined,
        keyMac: [HmacMd5.key_length]u8 = undefined,
        macGen: HmacMd5 = undefined,
        pos: u8 = 0,
        count: [1]u64 = undefined,

        pub const NonceLen = HmacMd5.mac_length;
        pub const MacLen = HmacMd5.mac_length;
        pub const ExtraLen = MacLen + NonceLen;

        pub fn initWithNonce(key: [HmacMd5.key_length]u8, nonce: [HmacMd5.mac_length]u8) MyCryptoAlgo {
            var self = MyCryptoAlgo{};
            self.count[0] = 0;
            std.mem.copy(u8, &self.nonce, &nonce);
            deriveKey(&key, "encryption", &nonce, &self.keyEnc);
            deriveKey(&key, "authentication", &nonce, &self.keyMac);
            var md5 = HmacMd5.init(&self.keyEnc);
            md5.update(&self.nonce);
            md5.update(std.mem.sliceAsBytes(&self.count));
            md5.final(&self.state);
            self.macGen = HmacMd5.init(&self.keyMac);
            self.macGen.update(&self.nonce);
            return self;
        }

        pub fn init(key: [HmacMd5.key_length]u8) MyCryptoAlgo {
            var nonce: [HmacMd5.mac_length]u8 = undefined;
            std.crypto.random.bytes(&nonce);
            return initWithNonce(key, nonce);
        }

        pub fn encrypt(key: [32]u8, plainText: []const u8, cipherText: []u8) void {
            var mca = MyCryptoAlgo.init(key);
            mca.encryptBlock(plainText, cipherText);
            mca.finalize(cipherText[plainText.len..]);
        }

        pub fn finalize(self: *MyCryptoAlgo, data: []u8) void {
            std.mem.copy(u8, data, &self.nonce);
            var mac: [16]u8 = undefined;
            self.macGen.final(&mac);
            std.mem.copy(u8, data[NonceLen..], &mac);
        }

        pub fn encryptBlock(self: *MyCryptoAlgo, input: []const u8, output: []u8) void {
            self.xorWithKeyStream(input, output);
            self.macGen.update(output[0..input.len]); // process the encrypted stream
        }

        fn xorWithKeyStream(self: *MyCryptoAlgo, input: []const u8, output: []u8) void {
            for (input) |c, i| {
                if (self.pos == self.state.len) {
                    self.genKeyStreamBlock();
                }
                output[i] = self.state[self.pos] ^ c;
                self.pos += 1;
            }
        }

        fn genKeyStreamBlock(self: *MyCryptoAlgo) void {
            var md5 = HmacMd5.init(&self.keyEnc);
            md5.update(&self.nonce);
            self.count[0] += 1;
            md5.update(std.mem.sliceAsBytes(&self.count));
            md5.final(&self.state);
            self.pos = 0;
        }
    };

We have a lot going on here. In the initWithNonce() function, we generate the derived keys for the two domains. Then we generate the first block of key stream, as we did previously. The last stage in initWithNonce() is initializing the MAC computation. Note that in addition to using a derived key for the MAC, I'm also adding the nonce as the first thing that we hash. That should have no effect on security, but it ties the output hash even closer to this specific encryption. In the xorWithKeyStream() function, you'll note that I'm now passing both an input and an output buffer; aside from that, this is exactly the same as before (with the actual key stream generation moved to genKeyStreamBlock()). Things get interesting in the encryptBlock() function. There we XOR the value that we encrypt with the key stream and push that to the output. We also add the encrypted value to the MAC that we generate. The idea with encryptBlock() is to allow you to build an encrypted message in a piecemeal fashion. Once you are done with the data you want to encrypt, you need to call finalize(). That copies the nonce to the output and completes the computation of the MAC over the encrypted portion. The encrypt() function is provided to make things easier when you want to encrypt a single buffer. (And yes, I'm not doing any explicit bounds checks here; I'm relying on Zig to panic if we go out of bounds. I did mention that this isn't production-level code, right?) For encryption, we can pass a single buffer to encrypt, or we can pass it in pieces. For decryption, on the other hand, the situation isn't as simple. To decrypt the data properly, we first need to verify that it wasn't modified. That means that to decrypt the data, we need all of it. The API reflects this behavior:

decrypt.zig:

    pub fn decrypt(key: [32]u8, cipherText: []u8, plainText: []u8) !void {
        if (cipherText.len < ExtraLen)
            return error.CihpherTextBufferTooSmall;
        if (cipherText.len - ExtraLen > plainText.len)
            return error.PlainTextBufferTooSmall;
        var nonce: [HmacMd5.mac_length]u8 = undefined;
        std.mem.copy(u8, &nonce, cipherText[cipherText.len - ExtraLen .. cipherText.len - NonceLen]);
        var mac = cipherText[cipherText.len - NonceLen .. cipherText.len];
        var dec = MyCryptoAlgo.initWithNonce(key, nonce);
        var msgText = cipherText[0 .. cipherText.len - ExtraLen];
        var myMac: [16]u8 = undefined;
        dec.macGen.update(msgText);
        dec.macGen.final(&myMac);
        if (std.mem.eql(u8, mac, &myMac) == false) {
            return error.InvalidMac;
        }
        dec.xorWithKeyStream(msgText, plainText);
    }

The decrypt() function does do some checks. We are dealing here with input that is expected to be malicious. As such, the first thing that we do is extract the MAC and the nonce from the cipher text buffer.
I decided it would be simpler to require those as part of a single buffer (although, as you can imagine, it would be very simple to change the API to accept them as independent values). Once we have the nonce, we can initialize the struct with the key and nonce (which will also derive the keys and set up the macGen properly). The next step is to compute the hash over the encrypted text and verify that it matches our expectation. Yes, I'm using eql() here for the comparison. This is a short-circuiting operation, and I'm doing that intentionally so I can talk about it in a future post. If the MAC that I compute matches the MAC that was provided, we know that the message hasn't been tampered with. At that point we can simply XOR the encrypted text with the key stream to get the original value back. A single bit out of place anywhere in this model will ensure that the decryption fails. What is more, note that we don't do anything with the decryption until we have validated the provided MAC and cipher text. To do anything else would invite cryptographic doom, so it is nice that we were able to avoid it. In the next post, I'm going to cover timing attacks.
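As the aside promised earlier (my own comparison, not part of the post's code): the deriveKey() routine above is a hand-rolled stand-in for a standard key derivation function. The same "one master key, separate per-domain keys" shape, expressed with .NET's HKDF, looks roughly like this:

    using System.Security.Cryptography;
    using System.Text;

    byte[] masterKey = RandomNumberGenerator.GetBytes(32);
    byte[] nonce = RandomNumberGenerator.GetBytes(16);

    // The domain string goes into the "info" parameter and the nonce into the
    // salt, so each domain gets an independent key from the same master key.
    byte[] encryptionKey = HKDF.DeriveKey(HashAlgorithmName.SHA256, masterKey,
        outputLength: 32, salt: nonce, info: Encoding.UTF8.GetBytes("encryption"));
    byte[] authenticationKey = HKDF.DeriveKey(HashAlgorithmName.SHA256, masterKey,
        outputLength: 32, salt: nonce, info: Encoding.UTF8.GetBytes("authentication"));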