Downloading artifacts from Azure DevOps using .NET
by Andrew Lock
posted on: August 31, 2021
In this post I show how to use the Azure DevOps REST API to view the results of builds from Pipelines and how to download the artifacts…
by Gérald Barré
posted on: August 30, 2021
We often use NuGet through Visual Studio or the dotnet CLI, but you can also use it from your own application. In this post, I'm going to show multiple examples of using the .NET client library for NuGet. To start using the NuGet client library, you need to add a reference to the NuGet packages…
by Ben Foster
posted on: August 28, 2021
ASP.NET 6.0 introduces an alternative way to build HTTP APIs, using the aptly named “Minimal APIs”. This post provides a step-by-step guide on how to translate traditional MVC concepts to this new approach.
by Oren Eini
posted on: August 27, 2021
I recently ran into a bit of code that made me go: Stop! Don't you dare go down that path! The reason I had such a strong reaction is that I have seen where code like this leads you, and that is not anywhere good. The code in question?

```csharp
public static List<T> LoadData<T>(string connectionName, string sqlQuery,
    Dictionary<string, object> parameters, string failedMessage)
{
    var dynamicParameters = new DynamicParameters(parameters);
    List<T> results = null;
    int waitTime = Int32.Parse(System.Configuration.ConfigurationManager.AppSettings["WaitTime"].ToString());
    int retry = 0;
    while (retry < 5)
    {
        try
        {
            using (var connection = new SqlConnection(ConnectionString(connectionName)))
            {
                results = connection.Query<T>(sqlQuery, dynamicParameters, commandTimeout: 15 * 60).ToList<T>();
            }
            break;
        }
        catch (Exception ex)
        {
            retry++;
            Logger.Log.Error($"{failedMessage}, failed on retry {retry}", ex);
            System.Threading.Thread.Sleep(waitTime * retry);
        }
    }
    return results;
}
```

This is a pretty horrible thing to do to your system. Let's count the ways:

- Queries happen fairly deep in your system, which means you are putting this sort of behavior in a place where it is generally invisible to the rest of the code.
- What happens if the calling code also has something similar? Now we get retries on retries. What happens if the code that you are calling has something similar? Now we get retries on retries on retries.
- You can absolutely rely on the code you are calling to do retries already, if only because that is how TCP behaves, but also because there are usually resiliency measures implemented at lower levels.
- What happens if the error actually matters?
- No exception is ever thrown; the only trace of a failure is important information written to a log that no one ever reads.
- There is no distinction between the types of errors where a retry may help and those where it won't.
- What if the query has side effects? For example, you may be calling a stored procedure, but now multiple times.
- What happens when you run out of retries? The code returns null, which means the calling code will likely fail with a NullReferenceException.

What is worse, by the way, is that this piece of code is attempting to fix a very specific issue: being unable to reach the relevant database. For example, if you are writing a service, you may run into this on reboot; your service may have started before the database, so you need to retry a few times to let the database load. A better option would be to specify the load order of the services. Or maybe there was some network hiccup that you had to deal with? That is probably the one case where this would sort of work. But TCP already handles that by resending packets; you are adding the same thing again, and it builds up into a nasty case. When there is an error, your application is going to sulk, throw strange errors, and refuse to tell you what is going on. There are going to be a lot of symptoms that are hard to diagnose and debug. To quote Release It!:

"Connection timeouts vary from one operating system to another, but they're usually measured in minutes! The calling application's thread could be blocked waiting for the remote server to respond for ten minutes!"

You added a retry on top of that, and then the system just… stops. Let's take a look at the usage pattern, shall we?
```csharp
foreach (var item in Query<Orders>(conStr,
    "SELECT * FROM Orders WHERE UserId = @userId",
    new Dictionary<string, object> { ["UserId"] = userID },
    "load queries"))
{
    // do something here.
}
```

That will fail pretty badly (and then cause a null reference exception). Let's say that this is service code, which is called from a client that uses a similar pattern for "resiliency". Question: what do you think will happen the first time there is an error? Cascading failures galore. In general, unknown errors shouldn't be handled locally; you don't have a way to handle them properly here. You should raise them up as far as possible. And yes, showing the error to the user is generally better than just spinning in place without giving the user any feedback whatsoever.
by Oren Eini
posted on: August 26, 2021
I ran into a task I needed to do in Go: given a PFX file, get a tls.X509KeyPair from it. However, Go doesn't have support for PFX. RavenDB makes extensive use of PFX in general, so that made things hard for us. I looked into all sorts of options, but I couldn't find any way to manage it properly. The nearest find was the pkcs12 package, but that supports only some DER forms and cannot handle common PFX files. That was a problem.

Luckily, I know how to use OpenSSL. But while there are countless examples of using OpenSSL to convert PFX to PEM and the other way around, all of them assume that you are working from the command line, which isn't what we want. It took me a bit of time, but I cobbled together one-off code that does the work. The code has a strange shape, I'm aware, because I wrote it to interface with Go, but it does the job.

```c
#include <string.h>
#include <openssl/pkcs12.h>
#include <openssl/pem.h>
#include <openssl/bio.h>
#include <openssl/err.h>
#include <openssl/x509.h>

void init_errors() {
    ERR_load_crypto_strings();
}

int get_pem_size(void *pem) {
    char *buf;
    int len = BIO_get_mem_data(pem, &buf);
    return len;
}

void copy_pem_to(void *pem, void *dst, int size) {
    char *buf;
    int len = BIO_get_mem_data(pem, &buf);
    memcpy(dst, buf, len > size ? size : len);
}

void free_pem(void *pem) {
    BIO_free(pem);
}

char *pfx_to_pem(void *data, long size, char *pwd, void **key, void **crt) {
    char *rc = NULL;
    BIO *bio = NULL;
    PKCS12 *p12 = NULL;
    EVP_PKEY *pkey = NULL;
    X509 *cert = NULL;
    STACK_OF(X509) *ca = NULL;
    BIO *key_bio = NULL;
    BIO *crt_bio = NULL;

    bio = BIO_new_mem_buf(data, size);
    if (!bio) { rc = "Unable to allocate memory buffer"; goto cleanup; }

    p12 = d2i_PKCS12_bio(bio, NULL);
    if (!p12) { rc = "Unable to read certificate"; goto cleanup; }

    if (!PKCS12_parse(p12, pwd, &pkey, &cert, &ca)) {
        rc = "Unable to parse certificate";
        goto cleanup;
    }

    key_bio = BIO_new(BIO_s_mem());
    if (!key_bio) { rc = "Out of memory, cannot create mem BIO for key"; goto cleanup; }
    if (!PEM_write_bio_PrivateKey(key_bio, pkey, NULL, NULL, 0, NULL, NULL)) {
        rc = "Failed to write PEM key output";
        goto cleanup;
    }
    *key = key_bio;

    crt_bio = BIO_new(BIO_s_mem());
    if (!crt_bio) { rc = "Out of memory, cannot create mem BIO for cert"; goto cleanup; }
    if (!PEM_write_bio_X509(crt_bio, cert)) {
        rc = "Failed to write PEM crt output";
        goto cleanup;
    }
    *crt = crt_bio;
    goto success;

cleanup:
    if (key_bio) BIO_free(key_bio);
    if (crt_bio) BIO_free(crt_bio);
success:
    if (bio) BIO_free(bio);
    if (p12) PKCS12_free(p12);
    if (pkey) EVP_PKEY_free(pkey);
    if (cert) X509_free(cert);
    return rc;
}
```

Now, from Go, I can run the following:
```go
// #cgo CFLAGS: "-IC:/Program Files/OpenSSL-Win64/include"
// #cgo LDFLAGS: "-LC:/Program Files/OpenSSL-Win64/lib" -llibcrypto
// #include "pfx.c"
import "C"

var initialized bool

// from:
// https://github.com/spacemonkeygo/openssl/blob/c2dcc5cca94ac8f7f3f0c20e20050d4cce9d9730/init.go
func errorFromErrorQueue() string {
	if !initialized {
		initialized = true
		C.init_errors()
	}
	var errs []string
	for {
		err := C.ERR_get_error()
		if err == 0 {
			break
		}
		errs = append(errs, fmt.Sprintf("%s:%s:%s",
			C.GoString(C.ERR_lib_error_string(err)),
			C.GoString(C.ERR_func_error_string(err)),
			C.GoString(C.ERR_reason_error_string(err))))
	}
	return fmt.Sprintf("SSL errors: %s", strings.Join(errs, "\n"))
}

func pfx_to_pem(pfx []byte) (key_buf []byte, crt_buf []byte, err error) {
	var key *C.void
	var crt *C.void
	rc := C.pfx_to_pem(unsafe.Pointer(&pfx[0]), C.long(len(pfx)), nil,
		(*unsafe.Pointer)(unsafe.Pointer(&key)),
		(*unsafe.Pointer)(unsafe.Pointer(&crt)))
	if rc != nil {
		err = errors.New(C.GoString(rc) + "\n" + errorFromErrorQueue())
		return
	}
	defer C.free_pem(unsafe.Pointer(key))
	defer C.free_pem(unsafe.Pointer(crt))

	// copy the private key PEM out of the C-side memory BIO
	size := C.get_pem_size(unsafe.Pointer(key))
	key_buf = make([]byte, int(size))
	C.copy_pem_to(unsafe.Pointer(key), unsafe.Pointer(&key_buf[0]), size)

	// copy the certificate PEM out of its own BIO
	size = C.get_pem_size(unsafe.Pointer(crt))
	crt_buf = make([]byte, int(size))
	C.copy_pem_to(unsafe.Pointer(crt), unsafe.Pointer(&crt_buf[0]), size)
	return
}
```

As you can see, most of the code is there to manage error handling. But you can now convert a PFX to PEM and then pass that to tls.X509KeyPair easily. That said, this seems just utterly ridiculous to me. There has got to be a better way to do it, surely.
by Ardalis
posted on: August 24, 2021
In distributed software applications, different services or processes or apps frequently need to communicate with one another. Modern…
by Andrew Lock
posted on: August 24, 2021
Using strongly-typed entity IDs to avoid primitive obsession - Part 7
by Gérald Barré
posted on: August 23, 2021
Reproducible builds are important when building NuGet packages from public sources. Indeed, they give your consumers confidence in your packages by allowing them to validate that a package was actually built from the public sources. To be able to reproduce a build, you need the source files, the
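For context, reproducible .NET builds are typically enabled with a handful of standard MSBuild and SourceLink properties. The following is a generic sketch rather than an excerpt from the article; the `CI` variable and the SourceLink package version are assumptions about your build environment:

```xml
<PropertyGroup>
  <!-- Deterministic compilation is on by default; normalize paths on CI -->
  <ContinuousIntegrationBuild Condition="'$(CI)' == 'true'">true</ContinuousIntegrationBuild>
  <!-- Link the package back to the exact public sources -->
  <PublishRepositoryUrl>true</PublishRepositoryUrl>
  <EmbedUntrackedSources>true</EmbedUntrackedSources>
  <IncludeSymbols>true</IncludeSymbols>
  <SymbolPackageFormat>snupkg</SymbolPackageFormat>
</PropertyGroup>

<ItemGroup>
  <PackageReference Include="Microsoft.SourceLink.GitHub" Version="1.0.0" PrivateAssets="All" />
</ItemGroup>
```

With these in place, tools such as NuGet Package Explorer can check whether a package is deterministically buildable from the linked sources.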
by Oren Eini
posted on: August 20, 2021
I needed to pay one of our suppliers. That supplier happens to be in Europe, while Hibernating Rhinos is headquartered in Israel. That means I have to send an international money transfer to get them paid. So far, that isn't an issue; this is literally something we have to do multiple times a week and have been doing for the past decade and a half. This time, however… we ran into a problem. We initiated the payment round as usual and let the suppliers know that the money was transferred. And then I forgot about it; anything from this point on is on the accounting department. A few days later, we started getting calls telling us that the money we sent didn't arrive. I called the bank and they checked; it appears that some of the transfers we made hit an internal bank limit. Basically, there are rules in place for how much money you can send out of the country before you have to get the IRS involved. I believe the concern is about moving money to offshore accounts to avoid paying taxes on it, but that isn't relevant for the story at hand. What is relevant is that the bank didn't process some of the payments in the run (those that hit the limit for the tax block). Some payments went through, but some didn't. The issue with such things isn't so much the block itself (I can understand why it is there, although I wish there had been some earlier notice). The major issue was that we try to keep a good payment schedule for our suppliers, meaning that we pay most invoices within a short amount of time. When something like this happens, it means we have to wade into the bureaucracy of the (international) tax system. That takes time, and in that time, the suppliers aren't getting paid.

Technically, we are okay; we usually pay far ahead of the invoice due date, but I strongly dislike this. We used a different source for the funds and paid all the suppliers immediately, then set out to clear the tax hurdles in the usual manner in which we pay. I also paid to expedite the transfers, and the money arrived faster than normal. In all, I would estimate this meant a delay of just a few days over when the money would normally arrive. But that isn't where the story ends. For most of the suppliers, the original transfers never happened, because of the tax issue. For two of them, however, the money was gone from our account. One of them confirmed that they received the money, so I expected the second one to get it as well. They didn't. But the money was gone from our account. Talking to the bank, the money was sitting in the bank's own account, waiting for the tax permit before going ahead. I asked the bank to cancel the order, then transferred the money using the alternative route. Except… that supplier called me, confused. The money had appeared in their account twice. Checking with the bank, it was indeed gone from both the original and alternative accounts. Well, that wasn't expected. Luckily, this is a supplier I do regular business with, so we decided the simplest option was to treat the extra payment as credit for future charges. The supplier sent me a(n already marked as) paid invoice, and aside from shaking my head in disbelief, the issue was done. Except… that supplier called me again, even more confused. Their bank had called them, saying that the originating bank had cancelled the transfer and they needed to send the money back. The key here was that their bank wanted them to transfer the money back to us. I had a very negative reaction to that, because this pinged all the hallmarks of a common scam: the overpayment scam.

I asked the supplier to do nothing with that, since if the bank needs to move the money back, it can do so directly. The fear was that the supplier would send the money back and then the bank would also refund it, leaving the supplier with no money. I talked to my bank and cancelled the cancellation, so hopefully things will stabilize now. This is an ongoing event, and I don't know if we have hit peak Kafkaesqueness yet. As for what happened, I suspect that when I asked to cancel the original transfer, another logic branch was followed. Since the money had already left my account, they had to record that as a cancellation, but that apparently was sent to the destination bank, along with the money, I guess? At this point, I don't even want to know.
by Oren Eini
posted on: August 18, 2021
I wrote a post a couple of weeks ago called Architecture foresight: Put a queue on that. I got an interesting comment on it from Mike Tomaras that deserves its own post in reply:

"Even though the benefits of an async queue are indisputable, I will respectfully point out that you brush over or ignore the drawbacks. … redacted, see the real comment for details … I think we agree that your sync code example is much easier to reason about than your async one. 'Well, it is a bit more complex to manage in the user interface', 'And you can play games on the front end' hides a lot of complexity in the FE to accommodate async patterns. Your 'At more advanced levels' section presents no benefits really, doing these things in a sync pattern is exactly the same as in async, the complexity is moved to the infrastructure instead of the code."

This is a great discussion, and I agree with Mike that there are additional costs to the async option compared to the synchronous one. There is a really good reason why pretty much all modern languages have something like async/await, after all. And anyone who did any work with Node.js and promises before that knows exactly what the cost of keeping track of the system's state through multiple levels of callbacks is. It is important, however, that my recommendation had nothing to do with async directly, although that is the end result. It had a lot more to do with breaking apart the behavior of the system, so you aren't expected to give immediate replies to the user. Consider this: ⏱. When you are processing a user's request, there is a timer inherent to the operation. That timer can be a real one (how long until the request times out) or a mental one (how long until the user gets bored). That means you have a very short SLA in which to run the actual request. What is the impact of that on your system?

You have to provision enough capacity to handle the spikes within the small SLA you have to work with. That is tough. Let's assume you are running a website that accepts comments, and you need to run spam detection on a comment before actually posting it. This seems like a pretty standard scenario, right? It doesn't require anything specialized. However, the service you use has a rate limit of 10 comments/sec. That is also pretty common and reasonable. How would you handle something like that if a post suddenly gets a lot of comments? Well, you'll have something that ensures you don't exceed the limit, but then the user is sitting there, waiting, and thinking that the request timed out. On the other hand, if you accept the request and place it into a queue, you can show it in the UI as accepted immediately and then process it at leisure. Yes, this is more complex than just making the call inline, but it also ensures that you have proper separation in your system. The front end submits messages to the backend, which replies when it is done. By having this separation upfront, as part of your overall design, you get options: you can quickly change how the backend processes things, and your front end feels fast (which is usually much more important than actually being fast, mind you). As for the rate limits and the SLA? In the case of a spam API or similar services, sure, the limit is obvious. But there are usually a lot of implicit SLAs like that. Your database disk can only serve so many writes a second, for example. That isn't usually surfaced to you as an X writes/sec limit, but it is true nevertheless. And a queue will smooth over any such issues easily.

When making the request directly, you have to ensure that you have enough capacity to handle spikes, and that is usually far more expensive. What is more interesting, in my opinion, is that the queue gives you options you wouldn't have otherwise: tracing of all operations (great for audits), retries if needed, an easy model for scale-out, smoothing out of spikes, etc. You cannot actually put everything into a queue, of course. The typical example is a login page; you cannot really "let the user log in immediately and process in the background". Another example where you don't want asynchronous processing is when you are making a query. There are patterns for async query completion, but they are pretty horrible to work with. In general, the idea is that whenever there is a write operation in the system, you throw it onto a queue. Reads and certain key operations are things that you'll need to run directly.
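Returning to the comment/spam-detection example, the accept-then-process split can be sketched as follows (in Go, with hypothetical names; the in-process channel here stands in for whatever broker your backend actually uses):

```go
package main

import (
	"fmt"
	"sync"
)

// comment represents a request the front end accepts immediately.
type comment struct{ text string }

// commentQueue decouples accepting a request from processing it:
// Submit returns as soon as the item is enqueued, and a background
// worker drains the queue at whatever pace the backend can sustain.
type commentQueue struct {
	ch        chan comment
	wg        sync.WaitGroup
	mu        sync.Mutex
	processed []string
}

func newCommentQueue(buffer int) *commentQueue {
	q := &commentQueue{ch: make(chan comment, buffer)}
	q.wg.Add(1)
	go func() {
		defer q.wg.Done()
		for c := range q.ch {
			// here you would call the rate-limited spam API, write to
			// the database, etc., without the user waiting on it
			q.mu.Lock()
			q.processed = append(q.processed, c.text)
			q.mu.Unlock()
		}
	}()
	return q
}

// Submit enqueues the comment and immediately tells the caller "accepted".
func (q *commentQueue) Submit(text string) string {
	q.ch <- comment{text: text}
	return "accepted"
}

// Close stops accepting work and waits for the worker to drain the queue.
func (q *commentQueue) Close() {
	close(q.ch)
	q.wg.Wait()
}

func main() {
	q := newCommentQueue(16)
	fmt.Println(q.Submit("first"))  // the user sees "accepted" right away
	fmt.Println(q.Submit("second")) // prints: accepted
	q.Close()
	fmt.Println(len(q.processed)) // prints: 2
}
```

A bounded buffer also gives you backpressure for free: if the backend cannot keep up, Submit eventually blocks instead of overwhelming it, which is exactly the smoothing-out-of-spikes behavior described above.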