Relatively General .NET

Domain Modeling - Encapsulation

by Ardalis

posted on: May 18, 2022

Domain models should encapsulate logic operations so that there is only one way to perform a given logical operation. That means avoiding…
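As a rough illustration of that principle (hypothetical names, not code from the post), an entity can expose a single method for an operation and keep its collection read-only, so there is exactly one way to add an item and the invariant lives in one place:

using System;
using System.Collections.Generic;

public class Order
{
    private readonly List<OrderItem> _items = new();
    public IReadOnlyList<OrderItem> Items => _items;
    public decimal Total { get; private set; }

    // The only way to add an item; validation and the running total
    // cannot be bypassed by callers.
    public void AddItem(string productId, int quantity, decimal unitPrice)
    {
        if (quantity <= 0)
            throw new ArgumentOutOfRangeException(nameof(quantity));
        _items.Add(new OrderItem(productId, quantity, unitPrice));
        Total += quantity * unitPrice;
    }
}

public record OrderItem(string ProductId, int Quantity, decimal UnitPrice);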

Copying a collection: ToList vs ToArray

by Gérald Barré

posted on: May 16, 2022

It's common to use ToList() or ToArray() to copy a collection to a new collection. I've seen many comments on the web about which one is the most performant without any proof. So, it was time to run a benchmark.
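The excerpt cuts off at the start of the benchmark class. The following is only a rough reconstruction of what such a BenchmarkDotNet comparison might look like; the [MemoryDiagnoser] attribute and the CloneCollectionBenchmark name come from the excerpt, everything else is assumed and is not the post's actual code:

using System;
using System.Collections.Generic;
using System.Linq;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

[MemoryDiagnoser]
public class CloneCollectionBenchmark
{
    private byte[] _array = Array.Empty<byte>();

    // Assumed sizes; the post may use different parameters.
    [Params(10, 10_000)]
    public int Size { get; set; }

    [GlobalSetup]
    public void Setup() => _array = new byte[Size];

    [Benchmark]
    public List<byte> ToList() => _array.ToList();

    [Benchmark]
    public byte[] ToArray() => _array.ToArray();
}

public class Program
{
    public static void Main() => BenchmarkRunner.Run<CloneCollectionBenchmark>();
}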

Domain Modeling - Anemic Models

by Ardalis

posted on: May 11, 2022

When building a domain model, proper object-oriented design and encapsulation should be applied as much as possible. Some teams choose to…
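For contrast with the encapsulated Order sketched above, this is a rough, hypothetical example (not code from the post) of the anemic shape such posts usually caution against: a data-only class whose rules end up scattered across whatever services mutate it.

using System.Collections.Generic;

// Anemic model: nothing stops callers from putting the object into an
// invalid state, and every service that touches it has to remember to
// keep Total consistent with the items and validate quantities itself.
public class AnemicOrder
{
    public List<string> ProductIds { get; set; } = new();
    public decimal Total { get; set; }
}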

Who can give a refund?

by Oren Eini

posted on: May 10, 2022

Consider an eCommerce system where customers can buy stuff. Part of handling commerce is handling faults. Those range from “I bought the wrong thing” to “my kid just bought a Ferrari”. Any such system will need some mechanism to handle fixing those faults. The simplest option we have is the notion of refunds: “You bought it by mistake, we can undo that”.

In many systems, the question is then “how do we manage the process of refunds?” You can do something like this: a customer requests a refund, it is processed by the Help Desk and sent for approval to Finance, which then consults Fraud and gets sign-off from the vice-CFO. There are about 12 refunds a quarter, however. Just the task of writing down the rules for processing refunds costs more than that.

Instead, a refund policy can state that anyone can request a refund within a certain time frame. At which point, the act of processing a refund becomes:

Is there a potential for abuse? Probably, but it is going to be caught naturally as we see the number of refunds spike over historical levels. We don’t need to do anything. In fact, the whole idea relies on two important assumptions: there is a human in the loop, and they are qualified to make decisions and relied upon to try to do the right thing.

Trying to create a process to handle this is a bad idea if the number of refunds is negligible. It costs too much, and making refunds easy is actually a goal (since that increases trust in the company as a whole).

Enabling IntelliSense for GitHub Actions workflows in VS Code

by Gérald Barré

posted on: May 09, 2022

If you edit a GitHub Actions workflow in GitHub, you can use auto-completion to quickly know what is possible and detect errors. I don't often use the GitHub editor. Instead, I prefer to use VS Code to edit my files. By default, VS Code doesn't support IntelliSense for GitHub Actions workflows. But,

Challenge

by Oren Eini

posted on: May 06, 2022

In my previous post, I asked why this change would result in a better performing system, since the total amount of work that is done is the same: The answer is quite simple. The amount of work that our code is doing is the same, sure, but that isn’t all the code that runs. In the first version, we would allocate the string, and then we’ll start a bunch of async operations. Those operations are likely to take some time and involve I/O (otherwise, they wouldn’t be async). It is very likely that in the meantime, we’ll get a GC run. At that point, the string pointed to by the ids variable will be promoted (since it survived a GC). That means that it would be collected much later. Using the new code, the scope of the ids string is far shorter. That means that the GC is more likely to catch it very early and significantly reduce the cost of releasing the memory.
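As a quick, hypothetical illustration of the promotion behavior described here (not code from the post): an object that survives a collection moves to an older generation, where it is only reclaimed by the rarer, more expensive collections.

using System;

class PromotionDemo
{
    static void Main()
    {
        var ids = new string('x', 1_000); // stand-in for the joined id string
        Console.WriteLine(GC.GetGeneration(ids)); // 0: freshly allocated

        // Simulate a GC happening while the async I/O is still in flight.
        GC.Collect();
        Console.WriteLine(GC.GetGeneration(ids)); // typically 1 after surviving one collection

        GC.Collect();
        Console.WriteLine(GC.GetGeneration(ids)); // typically 2 after surviving another
    }
}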

Challenge

by Oren Eini

posted on: May 05, 2022

Take a look at the following code:

public async Task<ComputeResult> Execute(List<Item> items)
{
    var sw = Stopwatch.StartNew();
    var ids = string.Join(", ", items.Select(x => x.Id));
    foreach (var item in items)
    {
        await Write(item);
    }
    await FlushToClient();
    var result = await ReadResult();
    log.Info($"Executed computation for '{ids}' in {sw.Elapsed}");
    return result;
}

If we move line 4 to line 11, we can improve the performance of this code significantly. Here is what this looks like:

The question is, why? The exact same amount of work is being done in both cases, after all. How can this cause a big difference?
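The improved version did not survive in the excerpt above, but based on the description (line 4, the string.Join call, moved down to line 11, just before the logging call), it presumably looks roughly like this sketch:

public async Task<ComputeResult> Execute(List<Item> items)
{
    var sw = Stopwatch.StartNew();
    foreach (var item in items)
    {
        await Write(item);
    }
    await FlushToClient();
    var result = await ReadResult();
    // The joined string is now allocated only here, so it no longer has to
    // survive the async I/O above and get promoted by an intervening GC.
    var ids = string.Join(", ", items.Select(x => x.Id));
    log.Info($"Executed computation for '{ids}' in {sw.Elapsed}");
    return result;
}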