Want to know how much website downtime costs, and the impact it can have on your business?
Find out everything you need to know in our new uptime monitoring whitepaper 2021



StatusCake is unique in its offering of New Zealand website monitoring. We have test locations around the globe, and one of these is Auckland. You can select Auckland on any of our paid plans, which start from as little as $24.99 – that’s around 37 NZD.
If you are a New Zealand based business, does it really matter to you that your website is up in the Netherlands if it’s down in your home country? There are many reasons why your site might be accessible from elsewhere on the globe but not to your next door neighbour, including DNS issues, poor cable routing and so on.
Even if your site isn’t down, you’re not going to be able to understand how well it’s performing if you’re relying on website monitoring from thousands of miles away. When you use our Auckland test location, you get an accurate picture of how quickly your website loads for other New Zealanders.
The Auckland datacentre we use for testing is designed and engineered to the highest standards. We are located in the Vocus datacentre, so you can be certain of a quality testing service.
For 37 NZD a month you can rest assured that you will know about downtime before your users do. This means you can respond much more quickly to any issues, reducing the amount of time you are down for. Even if the downtime is outside of your control, you can still take to social networks to let your customers know you are aware of the issue – ensuring a professional service. We’d take a bet that your company’s reputation is worth more than 37 NZD!