Want to know how much website downtime costs, and the impact it can have on your business?
Find out everything you need to know in our new uptime monitoring whitepaper 2021



Today we’re happy to announce some big changes to our test servers. They bring a range of small but noticeable improvements for both our free and paid users, and, to take advantage of them, improved charting.
We’ve rolled out an additional 10 geographically distributed servers in the last week, and they are already up and running. With these additions we now have over 80 servers in operation, querying sites at a rate of around 25,000 per minute. Each server is now doing less work, which helps ensure there are no delays in testing.
Until today we’ve always run free and paid tests in batches of around 250: each server ran 250 tests at any point in time, all contending for the available bandwidth. Thanks to the additional servers, each server now runs only 20 tests at a time, each with a dedicated port speed of 250KB/s. This means you can better detect when performance is peaking and dropping, as the charts will more closely reflect the real load time.
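To make the change concrete, here is a minimal sketch of the pattern described above: capping the number of checks a server runs at once so they stop contending with each other. This is purely illustrative (the names `run_check`, `run_batch`, and the use of Python are our assumptions for the example), not StatusCake’s actual implementation.

```python
# Illustrative sketch only, not StatusCake's real code: cap the number
# of uptime checks running concurrently on one server.
from concurrent.futures import ThreadPoolExecutor

MAX_CONCURRENT_TESTS = 20  # was ~250 per server before the change

def run_check(url: str) -> str:
    # Placeholder for a real HTTP check, which would record the
    # status code and response time for the charts.
    return f"checked {url}"

def run_batch(urls):
    # The executor guarantees at most 20 checks run at any moment,
    # so each check gets a predictable share of bandwidth.
    with ThreadPoolExecutor(max_workers=MAX_CONCURRENT_TESTS) as pool:
        return list(pool.map(run_check, urls))
```

With fewer checks per machine, a slow response in the chart reflects the tested site, not a busy test server.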
Our performance charts have always worked quite well, but as the volume of data grew some users started to notice issues or slow performance. This is because we used to store performance data in a MySQL table (a relic of the first version of StatusCake). That worked well when we had a few hundred tests running, but with over 250 million test results now being added each week it was clear we needed to move the data, and that is exactly what we’ve done. Our performance data is now stored in a faster NoSQL database, which means you’ll notice far fewer issues and much more speed! Now that we can query the data much faster, we will also be introducing export functionality.
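Why does the new layout help? In a single large relational table, drawing one test’s chart means filtering hundreds of millions of rows; a key-value (NoSQL-style) layout keeps each test’s results together, so a chart query becomes a single lookup. The sketch below shows that idea only. The class and key scheme (`PerformanceStore`, a `(test_id, day)` key) are hypothetical examples, not StatusCake’s real schema.

```python
# Hypothetical sketch of the storage change, not StatusCake's schema.
# An in-memory dict stands in for a NoSQL store keyed by test and day.
from collections import defaultdict

class PerformanceStore:
    def __init__(self):
        # key: (test_id, day) -> list of (timestamp, load_time_ms)
        self._buckets = defaultdict(list)

    def record(self, test_id, day, timestamp, load_time_ms):
        # Results for one test and day are appended to one bucket.
        self._buckets[(test_id, day)].append((timestamp, load_time_ms))

    def daily_series(self, test_id, day):
        # One key lookup returns a whole day's chart data for a test,
        # instead of scanning every result in a shared table.
        return self._buckets[(test_id, day)]
```

The same access pattern is what makes fast export functionality feasible: each test’s history can be read out bucket by bucket without touching anyone else’s data.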
As always stay tuned for more updates coming your way!