September 3rd, 2019
There has been a wonderfully interconnected run of writing about fast software lately.
We talk a lot about performance on the web. We can make things a little faster here and there. We see rises in success metrics alongside rises in performance. I find those types of charts very satisfying. But perhaps even more interesting is to think about the individual people that speed affects. It can be the difference between “I love this software” and “Screw this, I’m out.”
Craig Mod, in “Fast Software, the Best Software”, totally bailed on Google Maps:
Google Maps has gotten so slow, that I did the unthinkable: I reinstalled Apple Maps on my iPhone. Apple Maps in contrast, today, is downright zippy and responsive. The data still isn’t as good as Google Maps, but this is a good example of where slowness pushed me to reinstall an app I had all but written off. I’ll give Apple Maps more of a chance going forward.
And puts a point on it:
But why is slow bad? Fast software is not always good software, but slow software is rarely able to rise to greatness. Fast software gives the user a chance to “meld” with its toolset. That is, not break flow.
Sometimes it’s even life and death! Hillel Wayne, in “Performance Matters,” says emergency workers in an ambulance don’t use the built-in digital “Patient Care Report” (PCR) system, instead opting for paper and pencil, simply because the PCR is a little slow:
The ambulance I shadowed had an ePCR. Nobody used it. I talked to the EMTs about this, and they said nobody they knew used it either. Lack of training? No, we all got trained. Crippling bugs? No, it worked fine. Paper was good enough? No, the ePCR was much better than paper PCRs in almost every way. It just had one problem: it was too slow.
It wasn’t even that slow. Something like a quarter-second lag when you opened a dropdown or clicked a button. But it made things so unpleasant that nobody wanted to touch it. Paper was slow and annoying and easy to screw up, but at least it wasn’t that.
Monica Dinculescu created a Typing delay experiment that simulates this input delay. The “we’re done here” setting of 200ms is absolutely well-named. I’d never use software that felt like that. Jay Peters over at The Verge agreed, noting that anything higher feels exponentially worse.
Extra interesting: random delays are worse than consistently large ones, and random delay is probably the more likely scenario on our own sites.
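If you want a feel for how an experiment like this works, here’s a minimal sketch in plain JavaScript. To be clear, this is my own illustration, not Monica’s actual code: the function names, the uniform-random sampling, and the `<textarea>` wiring are all assumptions made for the sketch.

```javascript
// Pick the lag for the next keystroke: either a fixed delay (the
// consistent case) or a random one drawn uniformly from [0, maxMs)
// (the "random delay" case that feels even worse).
function nextDelay(maxMs, random) {
  return random ? Math.random() * maxMs : maxMs;
}

// Wire it to a <textarea>: swallow the real keystroke, then re-insert
// the typed character only after the sampled delay has passed.
// (Browser-only; the element and parameter names are made up here.)
function attachLag(textarea, maxMs, random = false) {
  textarea.addEventListener("keypress", (event) => {
    event.preventDefault();
    setTimeout(() => {
      textarea.value += event.key;
    }, nextDelay(maxMs, random));
  });
}
```

Try `attachLag(textarea, 200)` versus `attachLag(textarea, 200, true)` on a test page and the difference is immediately obvious: a steady 200ms lag is bad, but an unpredictable one is maddening.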