I needed to bind an <input type=number> in a Blazor application. I wanted to get the value as soon as the user changed it. There is no problem for integer values; however, this is not as simple for decimal values.

First attempt: <input type=number>

First, I started with a basic solution.
I ask candidates to answer the following question. Sometimes at home, sometimes during an interview with a whiteboard. You need to create an executable that would manage a phone book. The commands you need to support are:

phone-book.exe /path/to/file add [name] [phone]
phone-book.exe /path/to/file list [skip], [limit]

The output of the list operation must be the phone book records in lexical order. You may not sort the data during the list operation, however. All such work must be done in the add operation. You may keep any state you'd like in the file system, but there are separate invocations of the program for each step. This program needs to support adding 10 million records. Feel free to constrain the problem in any other way that would make it easier for you to implement it. We'll rate the solution on how much it costs in terms of I/O. A reminder: we are a database company, so this sort of question is incredibly relevant to the things that we do daily.

I give this question to candidates with no experience, fresh graduates, etc. How would you rate its difficulty?
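To give a sense of the shape of an answer, here is a rough C# sketch of one segment-based direction a candidate could take. This is my own illustration, not the expected solution: the directory layout, the "name,phone" record format, and the SegmentSize threshold are all assumptions, and it deliberately punts on durability, duplicate names, and smarter seeking during list.

// A rough sketch of one possible direction, not a model answer. The directory
// layout, the "name,phone" record format, and the SegmentSize threshold are
// assumptions, and names are assumed not to contain commas or newlines.
// add keeps a small sorted "current" segment; once it grows past SegmentSize
// records it is sealed into an immutable sorted segment file and never touched
// again. list streams a k-way merge over the already sorted segments.
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

const int SegmentSize = 4096;

var dir = args[0];
Directory.CreateDirectory(dir);
var current = Path.Combine(dir, "current.txt");

if (args[1] == "add")
{
    var line = args[2] + "," + args[3];
    var records = File.Exists(current)
        ? File.ReadAllLines(current).ToList()
        : new List<string>();
    var at = records.BinarySearch(line, StringComparer.Ordinal);
    records.Insert(at < 0 ? ~at : at, line);
    if (records.Count >= SegmentSize)
    {
        // seal the segment under a unique name; it is never rewritten again
        File.WriteAllLines(Path.Combine(dir, $"seg-{Guid.NewGuid():N}.txt"), records);
        File.Delete(current);
    }
    else
    {
        File.WriteAllLines(current, records);
    }
}
else if (args[1] == "list")
{
    var skip = int.Parse(args[2].TrimEnd(',')); // tolerate the "[skip]," form
    var limit = int.Parse(args[3]);
    var segments = Directory.GetFiles(dir, "seg-*.txt").Append(current)
        .Where(File.Exists)
        .Select(File.ReadLines);
    var merged = segments.Aggregate(Enumerable.Empty<string>(), MergeSorted);
    foreach (var record in merged.Skip(skip).Take(limit))
        Console.WriteLine(record);
}

// streaming merge of two already sorted sequences
static IEnumerable<string> MergeSorted(IEnumerable<string> a, IEnumerable<string> b)
{
    using var ea = a.GetEnumerator();
    using var eb = b.GetEnumerator();
    bool ha = ea.MoveNext(), hb = eb.MoveNext();
    while (ha && hb)
    {
        if (string.CompareOrdinal(ea.Current, eb.Current) <= 0)
        {
            yield return ea.Current;
            ha = ea.MoveNext();
        }
        else
        {
            yield return eb.Current;
            hb = eb.MoveNext();
        }
    }
    while (ha) { yield return ea.Current; ha = ea.MoveNext(); }
    while (hb) { yield return eb.Current; hb = eb.MoveNext(); }
}

The property worth noticing is that add rewrites only the small current segment, no matter how many records already exist, and list never sorts anything; it only merges streams that are already in order. Even this much surfaces the interesting discussion points: how large a segment should be, what happens when there are thousands of sealed segments, and how list could skip into a segment without reading it from the start.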
A client I'm working with wanted a set of initial decisions and questions their team should address as they begin a new project. I did a bit…
I’m talking a lot about candidates and the hiring process we go through right now. I thought it would only be fair to share a story about an interview task that I failed.

That was close to 20 years ago, and I was looking for my first job. Absolutely no professional experience, and painfully aware of that. I did have a few years of working on Open Source projects, so I was confident that I had a good way to show my abilities. The question was simple: write the code to turn the contents of this table into a hierarchical XML file. In other words, they wanted:
<posts>
  <post id="1" user-id="2" at="2001-03-04T09:30:21">
    <subject>Hello</subject>
    <body>How are you doing?</body>
    <posts>
      <post id="3" user-id="4" at="2001-03-04T10:22:01">
        <subject>Hi</subject>
        <body>I'm cool, and you?</body>
        <posts>
          <post id="4" user-id="2" at="2001-03-05T08:12:11">
            <subject>Can't complain</subject>
            <body>See you over the weekend?</body>
            <posts></posts>
          </post>
        </posts>
      </post>
    </posts>
  </post>
</posts>
To answer the question, I was given pen and paper, by the way. That made my implementation choices quite hard, since I had to write it all in longhand. I tried to reproduce this from memory, and it looks like this:
void WriteHierarchy(XmlWriter writer)
{
    // sentinel entry for the root; Dictionary can't take a null key, so NULL
    // parents are mapped to -1 (assuming no real post has that id), and the
    // nested <posts> element collects the top-level posts
    var nodes = new Dictionary<int, XElement>
    {
        [-1] = new XElement("posts", new XElement("posts"))
    };
    // ORDER BY ParentPostId puts the NULL parents first, so a parent is
    // always created before any of its children (see the discussion below)
    foreach (var entry in Query("SELECT * FROM [Posts] ORDER BY ParentPostId"))
    {
        var element = new XElement("post",
            new XAttribute("id", entry.Id),
            new XAttribute("user-id", entry.UserId),
            new XAttribute("at", entry.PostedAt),
            new XElement("subject", entry.Subject),
            new XElement("body", entry.Body),
            new XElement("posts")
        );
        nodes[entry.Id] = element;
        nodes[entry.ParentPostId ?? -1].Element("posts").Add(element);
    }
    nodes[-1].Element("posts").WriteTo(writer);
}
This is notepad code, and I wrote it using modern APIs. At the time, I was using ADO.NET and the XmlDocument. The idea is the same, however, and it will spare you going through a mass of really uninteresting details.

I got so many challenges to this answer, though. I relied on NULL being sorted first on SQL Server, and then on the fact that a parent must exist before its children. Aside from these assumptions, which I feel are fairly safe to make, I couldn’t figure out what the big deal was. Eventually it turned out that the interviewers were trying to guide me toward a recursive solution. It never even occurred to me, since I was doing it with a single query, and a recursive solution would need many such queries.
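Just to make the contrast concrete, this is roughly the recursive shape they were apparently steering me toward, in the same notepad-code style and with the same made-up Query helper as above (and please parameterize the SQL in anything real):

// one query per node, instead of a single pass over the table
XElement BuildPosts(int? parentId)
{
    var posts = new XElement("posts");
    var filter = parentId == null ? "IS NULL" : "= " + parentId;
    foreach (var entry in Query("SELECT * FROM [Posts] WHERE ParentPostId " + filter))
    {
        posts.Add(new XElement("post",
            new XAttribute("id", entry.Id),
            new XAttribute("user-id", entry.UserId),
            new XAttribute("at", entry.PostedAt),
            new XElement("subject", entry.Subject),
            new XElement("body", entry.Body),
            BuildPosts(entry.Id)));
    }
    return posts;
}

The caller would then just do BuildPosts(null).WriteTo(writer), at the cost of one round trip to the database per post.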
I recently got my hands on the Raspberry PI 400 (the one that comes in a keyboard form factor). That is an amazing idea and it makes the Raspberry a lot more approachable for consumer cases. At any rate, one of my first actions was to put RavenDB on it and see how well it performs. You can see the results in the image below. In this case, we are running 1,500 queries per second on the system. It has 4 GB of RAM and the database we are using has 450 GB (!) worth of data. I actually just took the nearest external disk I had available and plugged that into the PI. This is a generic hard disk and I can get a maximum of about 30 MB/sec from it.

This is important because my queries are covering more data than can fit in memory. Each query asks for a random (different) document, so there is little chance for optimizations by having a hot working set. We are going to see some I/O to the (pretty poor) disk impacting the outcome. Here are the results:

You can see that for 95% of the queries, we got a result in under 125 milliseconds, and that for 99% of the requests, RavenDB on a Raspberry PI is able to answer in about half a second. Even with some of the requests having to hit the disk, the maximum time to wait for a request is just above a second. All of that while we are facing 1,500 queries per second, which is respectable even for big applications running on much more massive hardware.

Of particular interest to me is the state of the server when we are running this benchmark. You can see that both in terms of CPU utilization and in the number of queries processed, we are nearly absolutely flat. There aren’t any hiccups in the load, there hasn’t been a GC pause that stopped the world, and the system just runs at top speed for as long as we’ll let it. In this case, the benchmark lasted over 5 minutes, so more than enough time to run through all the usual suspects.

Note also the number of documents involved here. We are looking at 882 million documents, and we are requesting close to half a million of them. I ran the benchmark long enough to ensure that we would cover more documents than can fit into memory, so we are seeing I/O work here (on a fairly poor disk, I might add, but that is what I had available at the moment).

The actual size on disk is a bit of a cheat; I’m using document compression here to pack the data more tightly. The actual data size, without RavenDB’s data compression, is around 750 GB. That also helps a lot with the amount of I/O we have to deal with, but it increases the CPU consumption. Given the difference in relative costs, that is a trade that is paying dividends in spades.

I also decided to see what we would get when running a query that touches just a small part of the documents. Instead of working through nearly half a million, I chose to run it on about 100,000 documents. That is small enough that it should mostly all fit in memory. It also represents a far more likely scenario, to be frank. And here we can see that, at 1,500 queries per second on a Raspberry PI, all requests complete in under 150 ms, with the 99.999% (!!) percentile at about 50 milliseconds.

And that makes me very happy, because it shows the result of all the work we put into optimizing RavenDB.
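As an aside, if you want to throw this kind of load at your own instance, a minimal sketch with the RavenDB .NET client could look like the code below. The server URL, database name, document id range, concurrency level, and duration are placeholders of mine, not the actual benchmark harness; the only property that matters is that each request loads a random document by id.

// Placeholders throughout: URL, database, id range, concurrency, duration.
using System;
using System.Linq;
using System.Threading.Tasks;
using Raven.Client.Documents;

using var store = new DocumentStore
{
    Urls = new[] { "http://raspberry-pi:8080" }, // placeholder address
    Database = "benchmark"                       // placeholder database name
};
store.Initialize();

var until = DateTime.UtcNow.AddMinutes(5);
var clients = Enumerable.Range(0, 32).Select(async _ =>
{
    while (DateTime.UtcNow < until)
    {
        // each request asks for a random (different) document by id
        var id = "items/" + Random.Shared.Next(1, 500_000);
        using var session = store.OpenAsyncSession();
        await session.LoadAsync<object>(id);
    }
}).ToArray();

await Task.WhenAll(clients);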
If you are building a web application using ASP.NET Core (MVC, Pages, Blazor), you may rely on npm to minify and bundle your css and js files. This means that to build your application, you need to run an npm script such as npm run build before executing dotnet build. That's not convenient!
Following a phone screen, we typically ask candidates to complete some coding tasks. The idea is that we want to see their code, and asking a candidate to program during an interview… does not go well. I had a candidate some years ago who was provided with a machine, an IDE, and an internet connection, and walked out after failing for 30 minutes to reverse a string. Given that his CV said he had 8 years of experience, I consider myself very lucky.

Back to the candidate that prompted this post. He sent us answers to the coding tasks. In Node.js and C++. Okay, weird flex, but I can manage. I don’t actually care what language a candidate knows, especially for the junior positions.

Given that we are hiring for junior positions, we’ll usually get solutions that bend the question’s restrictions. For example, they would do a linear scan of a file even when they were asked not to. For the most part, we can ignore those details and focus on what the candidate is showing us. Sometimes we ask them to fix a particular issue, but usually we’ll just get them to the interview and ask them about their code there. I like asking candidates about their code, because I presume that they spent some time thinking about it and can discuss the topic in some detail. At one memorable interview, I had a candidate tell me: “I didn’t write this code, I have no idea what is going on here”. I had triple-checked that this was indeed the code they sent, and followed up by sending the candidate home, sans offer. We can usually talk with the candidate about what drove them to certain decisions, what impact a particular constraint would have on their code, etc.

In this case, however, the code was bad enough that I called it. I sent the candidate a notification about the issues we found in their code, detailing the 20+ critical failures that we found in the space of a few minutes of looking at it. The breaking point for me was that the tasks did not actually work. In fact, they couldn’t work. I’m not sure if they compiled (I didn’t check), but they certainly were never even eyeballed.

For example, we asked the candidate to build a server that would translate messages to Morse code and cause the server speaker to beep in Morse code. Nothing particularly fancy, I think. But we got a particular implementation for that. For example, here is the relevant code that plays the Morse code:

The Node.js version that I’m using doesn’t come with the relevant machine learning model to make that actually happen, I’m afraid. The real killer for me was this part:
function makeDict(translate)
{
    translate = {
        "-----": "0",
        // refacted
        "--..": "Z"
    };
}

function doWork(msg)
{
    let translate;
    makeDict(translate);
    let mosre = translateToMorseCode(msg, translate);
    // other stuff
}
You might want to read this code a few times. They pass a variable to a function, set it to a new value, and expect to see that new value outside. Basically, they wanted to use an out parameter here, which isn’t valid in JavaScript. That is a fairly fundamental gap in understanding how code flows through a program, and it is something that would never have worked. I’m okay with getting suboptimal solutions; I’m not fine with it never having been actually looked at.
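For contrast, the pattern the candidate seemed to be reaching for does exist in C#, where it has to be spelled out explicitly at both the declaration and the call site. This is just a minimal illustration of the concept, unrelated to the submitted code:

using System;
using System.Collections.Generic;

DoWork();

void MakeDict(out Dictionary<string, string> translate)
{
    // the compiler forces this method to assign the out parameter before returning
    translate = new Dictionary<string, string>
    {
        ["-----"] = "0",
        ["--.."] = "Z",
    };
}

void DoWork()
{
    MakeDict(out var translate); // the caller sees the dictionary assigned above
    Console.WriteLine(translate["--.."]); // prints Z
}

In JavaScript, reassigning a parameter only rebinds the local name; the straightforward fix is to return the dictionary from makeDict, or to mutate an object that was passed in.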
Windows Subsystem for Linux (WSL) allows running one or multiple Linux distributions on Windows. Like any operating system, you must install security updates on it. Instead of doing it manually, let's automate it using the Windows Scheduler!
I recently got an email from a customer. It was a very strange interaction. The email basically said:

I wanted to let you know that I recently had to set up a new server for an existing application of mine. I had to find an old version of RavenDB, and I was able to get it from the site. This is the first time in quite some time (years) that I had to touch this. I thought you would want to know that.

I do want to know that. We spend an inordinate amount of time trying to make sure that Things Work. The problem with that approach is that if we do things properly, you won’t even know that there is a challenge here that we overcame. Our usual interaction with users is when they run into some kind of a problem. Hearing about the quiet mode, where RavenDB just worked and no one paid attention to it for a few years, is a breath of fresh air for me and the team in general.