weinzierl 4 hours ago

Is your code really fast if you haven't measured it properly? I'd say measuring is hard but a prerequisite for writing fast code, so truly fast code is harder.

The number one mistake I see people make is measuring one time and taking the results at face value. If you do nothing else, measure three times and you will at least have a feeling for the variability of your data. If you want to compare two versions of your code with confidence there is usually no way around proper statistical analysis.

Which brings me to the second mistake. When measuring runtime, taking the mean is not a good idea. Runtime measurements usually skew heavily towards a theoretical minimum, which is a hard lower bound. The distribution is heavily lopsided with a long tail. If your objective is to compare two versions of some code, the minimum is a much better measure than the mean.
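
A minimal sketch of what that looks like in practice (Python purely for illustration; the same idea carries over to any language): repeat the measurement, then report the minimum alongside the mean so the skew is visible.

    import statistics
    import time

    def bench(fn, repeats=30):
        # Run fn repeatedly and report min, median, and mean wall-clock time.
        samples = []
        for _ in range(repeats):
            start = time.perf_counter()
            fn()
            samples.append(time.perf_counter() - start)
        return {
            "min": min(samples),                 # closest to the hard lower bound
            "median": statistics.median(samples),
            "mean": statistics.mean(samples),    # dragged upward by the long tail
        }

    print(bench(lambda: sum(range(100_000))))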

  • bostik 22 minutes ago

    > The distribution is heavily lopsided with a long tail.

    You'll see this in any properly active online system. Back at my previous job we had to drill into teams that mean() was never an acceptable latency measurement. For that reason the telemetry agent we used provided out-of-the-box p50 (median), p90, p95, p99 and max values for every timer measurement window.

    The difference between p99 and max was an incredibly useful indicator of poor tail latency cases. After all, every one of those max figures was an occurrence of someone or something experiencing the long wait.

    These days, if I had the pleasure of dealing with systems where individual nodes handled thousands of messages per second, I'd add p999 to the mix.
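
    For illustration, a rough sketch of that kind of per-window summary (Python; the toy data and function name are made up, not the actual telemetry agent):

        import random
        import statistics

        def latency_summary(samples_ms):
            # One timer window: percentiles plus max.
            qs = statistics.quantiles(samples_ms, n=100)  # qs[k-1] is roughly the k-th percentile
            return {"p50": qs[49], "p90": qs[89], "p95": qs[94], "p99": qs[98],
                    "max": max(samples_ms)}

        # Toy data: mostly fast responses with occasional slow outliers.
        window = [random.expovariate(1 / 2.0) for _ in range(10_000)]
        s = latency_summary(window)
        print(s, "p99-to-max gap:", s["max"] - s["p99"])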

  • Leszek 3 hours ago

    Fast code isn't a quantum effect; it doesn't wait for a measurement to collapse its wavefunction into being fast. The _assertion_ that a certain piece of code is fast probably requires a measurement (maybe you can get away with reasoning, e.g. algorithmic complexity or counting instructions; each has its flaws, but so does measurement).

tombert 7 hours ago

I remember in 2017, I was trying to benchmark some highly concurrent code in F# using the async monad.

I was using timers, and I was getting insanely different times for the same code, going anywhere from 0ms to 20ms without any obvious changes to the environment or anything.

I was banging my head against it for hours, until I realized that async code is weird. Async code isn’t directly “run”, it’s “scheduled”, and the calling thread can yield until we get the result. By trying to do microbenchmarks, I wasn’t really testing “my code”; I was testing the .NET scheduler.
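
The same trap is easy to reproduce in other runtimes. A rough Python asyncio analogue (not the original F#, just an illustration): the work is trivial, so the measured time is dominated by task creation and event-loop scheduling rather than the code itself.

    import asyncio
    import time

    async def work():
        return 1 + 1  # trivially cheap "work"

    async def main():
        # Naive microbenchmark: this mostly measures task creation and
        # event-loop scheduling, not work() itself.
        start = time.perf_counter()
        await asyncio.gather(*(work() for _ in range(10_000)))
        print("elapsed:", time.perf_counter() - start)

    asyncio.run(main())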

It was my first glimpse into seeing why benchmarking is deceptively hard. I think about it all the time whenever I have to write performance tests.

am17an 6 hours ago

Typically you want to measure both things - the time it takes to send an order and the time it takes to calculate the decision to send one. Both are important choke points: one for latency and the other for throughput (in busy markets, you can spend a lot of time deciding to send an order, creating backpressure).
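
As a toy sketch of keeping the two measurements separate (Python; strategy.decide and gateway.send are hypothetical placeholders, not any real API):

    import time

    def handle_market_update(update, strategy, gateway):
        # Record decision latency and send latency separately for each update.
        t0 = time.perf_counter_ns()
        order = strategy.decide(update)      # throughput-critical in busy markets
        t1 = time.perf_counter_ns()
        if order is not None:
            gateway.send(order)              # latency-critical path to the exchange
        t2 = time.perf_counter_ns()
        return {"decide_ns": t1 - t0, "send_ns": t2 - t1}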

The other thing is that L1/L2 switches provide this functionality of capturing switch timestamps and marking packets with them, which is the true test of e2e latency, without any clock drift etc.

Also, fast code is actually really, really hard; you just have to create the right test harness once.

  • auc 6 hours ago

    Yeah definitely. Don’t want to have an algo that makes money when times are slow but then blows up/does nothing when market volume is 10x

Attummm 27 minutes ago

The title is clickbait, unfortunately.

The article states the opposite.

> Writing fast algorithmic trading system code is hard. Measuring it properly is even harder.

omgtehlion an hour ago

In an HFT context (as in the article) measurement is quite easy: you tap the incoming and outgoing network fibers and measure the time between the two. Also, you can do this in production, as this kind of measurement does not impact latency at all.

nine_k 7 hours ago

Fast code is easy. But slow code is equally easy, unless you keep an eye on it and measure.

And measuring is hard. This is why consistently fast code is hard.

In any case, adding some crude performance testing into your CI/CD suite, and signaling a problem if a test ran for much longer than it used to, is very helpful for quickly detecting bad performance regressions.
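
A crude version of such a check might look like this (Python; the workload, baseline file, and threshold are all arbitrary placeholders):

    import json
    import sys
    import time

    BASELINE_FILE = "perf_baseline.json"  # committed alongside the tests
    THRESHOLD = 1.5                       # flag if more than 50% slower than baseline

    def workload():
        # Stand-in for the code path under test.
        return sum(i * i for i in range(200_000))

    def measure(repeats=5):
        best = float("inf")
        for _ in range(repeats):
            start = time.perf_counter()
            workload()
            best = min(best, time.perf_counter() - start)
        return best  # min of several runs to cut scheduling noise

    if __name__ == "__main__":
        elapsed = measure()
        try:
            baseline = json.load(open(BASELINE_FILE))["workload"]
        except FileNotFoundError:
            json.dump({"workload": elapsed}, open(BASELINE_FILE, "w"))
            sys.exit(0)  # first run just records the baseline
        if elapsed > baseline * THRESHOLD:
            print(f"perf regression: {elapsed:.4f}s vs baseline {baseline:.4f}s")
            sys.exit(1)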

  • mattigames 6 hours ago

    Exactly, another instance where perfect can be the enemy of good. Many times you are better off deploying something to prod with a fairly good logging system, and whenever you see a spike in slowness you try to replicate the conditions that made it slow and debug from there, instead of expecting the impossible perfect measuring system that can detect even missing atoms in networking cables.

    • auc 6 hours ago

      Agreed, not worth making a huge effort toward an advanced system for measuring an ATS until you’ve really built out at scale

iammabd 5 hours ago

Yeah, most people write for the happy path... Few obsess over runtime behavior under stress.