Stephen Davis’ Post


Helping test professionals get best-of-breed software testing tools that increase productivity, consistency & coverage

People aren’t as patient as they once were. Back in the day, waiting 10 seconds for a website to load wasn’t particularly unusual. Nowadays, though, anything less than an instant response may push users—and I include both customers and employees in that category—to rage-quit your solution.

So, applications must handle whatever users throw at them, especially peak loads. After all, by definition, this is the point at which your system is exposed to the most users, and you have the most to lose.

Often, when non-techies think about peak periods, it’s with a retail focus. However, as we all know, it’s not just ticket sellers or Black Friday retailers who need to worry about usage spikes:

- Operational surges (e.g. submitting timesheets or cashing up)
- High-volume periods (e.g. university results or clearing processes)
- Simultaneous login events (e.g. start of the day or after service interruptions)

In fact, most businesses experience periods when their solutions see higher-than-average volumes, and a failure—or even a slowdown—at this time is bad news for sales, customer experience or employee productivity.

Obviously, this isn’t exactly breaking news; every business knows when its peak volumes are coming and roughly what its systems need to handle. So, what can they do to help scalability?

A common approach is to throw more tin (or cloud) at your solution. The high-level process goes something like this… do some math, factor in some scalability, beef up your system where required, and keep your fingers crossed. And you know what? This can work. But it leaves a hell of a lot to chance, and no amount of computing power can remedy defective code. It may have worked before, but that does not mean it will work every time, because you can bet that something will have changed, and the tiniest something might prove to be the limiting factor. It’s a bit like playing Russian roulette: you might get away with it a few times, but eventually something will go bang, and it won’t end well.

Plus, I have seen applications degrade significantly at 40% CPU utilisation—CPU is not the only hardware constraint. And how do you know what will happen when extra hardware is thrown at the problem without testing? To handle the extra load, do you need the high-performance engine of an Aston Martin to drive the application faster, or the brute force of a JCB?

Read this week's insight to find out more – you can find a link to the full article in the comments.
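To make the "do some math, add some headroom, cross your fingers" approach concrete, here is a minimal sketch of that back-of-the-envelope capacity calculation. All figures (user counts, request rates, per-server throughput, headroom factor) are illustrative assumptions, not numbers from the article—and, as the post argues, this arithmetic tells you nothing about defective code or non-CPU bottlenecks; only load testing does.

```python
import math

# Hypothetical back-of-the-envelope capacity estimate: the "do some math,
# factor in some scalability, beef up your system" approach. Every input
# here is an assumed, illustrative value.

def servers_needed(peak_users: int,
                   requests_per_user_per_min: float,
                   server_capacity_rps: float,
                   headroom: float = 1.5) -> int:
    """Estimate how many servers cover a peak load, with a safety margin."""
    peak_rps = peak_users * requests_per_user_per_min / 60  # aggregate req/s
    return math.ceil(peak_rps * headroom / server_capacity_rps)

# Example: 50,000 concurrent users at peak, 6 requests per user per minute,
# 200 req/s per server, 50% headroom.
print(servers_needed(50_000, 6, 200))  # -> 38
```

Note what the headroom factor is really doing: it papers over everything the math can't see—a slow query, a lock under contention, a connection-pool limit—which is exactly why "beef it up and hope" can still go bang at 40% CPU.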


