An aphorism that resonates with me more every day is “slow is smooth, and smooth is fast,” and this post about the beginning of FoundationDB’s engineering journey captures it well.

Before we even started writing the database, we first wrote a fully-deterministic event-based network simulation that our database could plug into. This system let us simulate an entire cluster of interacting database processes […] driven by the same random number generator. We could run this virtual cluster, inject network faults, kill machines, simulate whatever crazy behavior we wanted, and see how it reacted.

Best of all, if one particular simulation run found a bug in our application logic, we could run it over and over again with the same random seed, and the exact same series of events would happen in the exact same order. That meant that even for the weirdest and rarest bugs, we got infinity tries at figuring it out.

— Is Something Bugging You? (emphasis mine)

It is harder to debug a piece of code than to write it, so writing code at the edge of your ability leaves you up a creek when it comes time to debug. By first building a testing framework that could reproduce bugs deterministically, giving engineers unlimited attempts against a reproducer, the FoundationDB team cut out one source of the drag that teams experience as they grow. Growing teams specialize, and individual engineers build better intuition for certain parts of the system over time.
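The core trick the quote describes is funneling all nondeterminism through a single seeded random number generator, so the whole schedule of events can be replayed exactly. A minimal sketch in Python (the event names and `simulate` function are illustrative assumptions, not FoundationDB’s actual simulator):

```python
import random

def simulate(seed, steps=10):
    """Toy sketch of a seeded, deterministic event loop.

    All randomness flows through one random.Random(seed), so running
    with the same seed replays the exact same event order -- the
    property that gives you unlimited tries at a rare bug.
    """
    rng = random.Random(seed)
    trace = []
    for step in range(steps):
        # The "events" here stand in for delivering a message,
        # dropping a packet, or killing a simulated node.
        event = rng.choice(["deliver", "drop_packet", "kill_node"])
        trace.append((step, event))
    return trace

# Same seed, same trace: a failing run can be replayed forever.
assert simulate(42) == simulate(42)
```

The discipline this requires is that no code under test ever reaches for a wall clock, a real socket, or an unseeded RNG; every source of nondeterminism has to be routed through the simulator.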

The testing framework is a way to make sure that the team’s intuition is not the only thing that can find bugs.