Want to know how much website downtime costs, and the impact it can have on your business?
Find out everything you need to know in our new uptime monitoring whitepaper 2021



Today we’ve made some changes to our Domain and SSL features to make your monitoring setups more effective and cover a wider range of use cases.
Hostname field added to SSL. You can now add a custom hostname in the settings of your SSL test. This is handy when you need to test with a unique hostname or a specific IP, for example to bypass a proxy or to work around a load balancer. Existing tests will need to be updated with this setting enabled in order to take advantage of the feature.
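Under the hood, testing a certificate at a specific IP with a custom hostname amounts to setting the SNI value on the TLS handshake. Here is a minimal Python sketch of the idea, not StatusCake's implementation; the function names and timeout are our own:

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after: str) -> int:
    """Parse a certificate's 'notAfter' field (e.g. 'Jun  1 12:00:00 2025 GMT')
    and return whole days until expiry (negative if already expired)."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    if expires.tzinfo is None:
        expires = expires.replace(tzinfo=timezone.utc)
    return (expires - datetime.now(timezone.utc)).days

def check_cert(ip: str, hostname: str, port: int = 443) -> int:
    """Connect to a specific IP while sending `hostname` via SNI, so the
    certificate for that hostname is validated even when the IP belongs to
    a load balancer or proxy. Returns days until the certificate expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((ip, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    return days_until_expiry(cert["notAfter"])
```

For example, `check_cert("203.0.113.10", "example.com")` would validate example.com's certificate against a backend at that (documentation-range) IP.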
We’ve also added the ability to define a custom user agent for SSL tests. You can set this to whatever you want for development or security purposes; if you choose to leave it blank, our default StatusCake user agent will be used.
API support for this and the hostname feature will be coming soon.
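For context, overriding the user agent simply changes the User-Agent header sent with the check. A sketch using Python's standard library; the default agent string here is a made-up placeholder, not StatusCake's actual default:

```python
import urllib.request

# Placeholder agent string -- StatusCake's real default differs.
DEFAULT_AGENT = "my-monitoring-probe/1.0"

def build_request(url: str, user_agent: str = DEFAULT_AGENT) -> urllib.request.Request:
    """Build an HTTPS request carrying a custom User-Agent header."""
    return urllib.request.Request(url, headers={"User-Agent": user_agent})

def fetch_status(url: str, user_agent: str = DEFAULT_AGENT) -> int:
    """Perform the request and return the HTTP status code."""
    with urllib.request.urlopen(build_request(url, user_agent), timeout=10) as resp:
        return resp.status
```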
New TLDs added to our list of supported domains.
Adding one of these domains will give you the full set of data, including expiry and extended WHOIS info. If there’s a domain you’d like to see supported that isn’t yet, please get in touch with our friendly team via live chat, or send us an email at [email protected].
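The expiry and WHOIS data above come from the TLD's registry over the plain-text WHOIS protocol (RFC 3912). A rough sketch of such a lookup; the server shown handles .com/.net only, and the "Expiry Date" field name is a common registry convention rather than a guarantee:

```python
import socket

def whois_query(domain: str, server: str = "whois.verisign-grs.com", port: int = 43) -> str:
    """Raw WHOIS lookup: send the domain plus CRLF, then read until the
    server closes the connection. whois.verisign-grs.com serves .com/.net;
    other TLDs have their own registry servers."""
    with socket.create_connection((server, port), timeout=10) as sock:
        sock.sendall(domain.encode("ascii") + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

def expiry_date(whois_text: str):
    """Pull the value of the first 'Expiry Date:' line, if present."""
    for line in whois_text.splitlines():
        if "Expiry Date:" in line:
            return line.split(":", 1)[1].strip()
    return None
```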