At StatusCake, a large part of what we do is reacting to current downtime with alerts and logging historical downtime within reports. Today we’d like to take a look at how you can get the most out of customizing both.
On our Business plan, all alerts and reports sent via email can be fully customized to better represent your brand and to ensure the information your team needs to address issues quickly and efficiently is always front and centre.
The image below gives you an idea of the extent to which this is possible – every email we send out can be modified as you see fit:
You can modify your alerts for Virus, Uptime, SSL, Domain, Page Speed, and Server type notifications in a range of different ways. Each template is managed individually and applies to all notifications of that type – you can also set up test-specific changes for any tests where you’d rather not use the blanket default settings. For changes across all tests, head to the User Details section; for changes at the individual test level, it’s just a case of editing the test in question and working with the three fields shown below:
When editing the emails in the User Details section you’ll be able to change colors, logos, and even the base HTML/CSS of the email. We’ve included a range of tags that can be dropped into the code to pull in test data, making these emails more useful and better tailored to your team.
| Tag | Usage |
|---|---|
| ||TITLE|| | Displays the test name |
| ||SITE|| | Displays the website URL |
| ||TYPE|| | Displays the test type (HTTP/PING/DNS etc.) |
| ||QUOTE|| | Displays the TestID and alert number, e.g. (12345 – 1) |
| ||REASON|| | Displays the cause of the downtime |
| ||TIME|| | Displays the total downtime length for the test |
| ||HTTPCODE|| | Displays the error status code |
| ||TESTID|| | Displays the TestID on its own |
| ||CHECKRATE|| | Displays how often the test is checked |
| ||HOST|| | Displays the hosting provider, if present |
| ||CONFIRMEDTOTAL|| | Displays the number of confirmations for the downtime |
| ||MESSAGE|| | Displays custom content set per test |
| ||TAGS|| | Includes tags attributed to the test |
| ||VALID_FROM|| | Displays the date from which the SSL certificate is valid (SSL only) |
| ||VALID_UNTIL|| | Displays the date on which the SSL certificate expires (SSL only) |
So, for example, you could populate the subject field of the alert email with these tags – something like “Your Site: ||TITLE|| Is Currently Down ||REASON||”.
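To make that more concrete, here’s a minimal sketch of how those tags might sit inside the body of a customized alert email. Only the ||TAG|| placeholders come from the table above – the markup, styling, logo URL, and wording are illustrative assumptions rather than StatusCake defaults, and inline styles are used because many mail clients strip out `<style>` blocks.

```html
<!-- Illustrative alert template sketch: the ||TAG|| placeholders are the only
     StatusCake-specific parts; everything else is hypothetical branding. -->
<div style="font-family: Arial, sans-serif; max-width: 600px; margin: 0 auto;">
  <img src="https://example.com/your-logo.png" alt="Your Company" width="160">
  <h1 style="color: #d9534f;">||TITLE|| is currently down</h1>
  <p>
    Our ||TYPE|| check of <a href="||SITE||">||SITE||</a> has failed.
    Downtime has been confirmed ||CONFIRMEDTOTAL|| time(s).
  </p>
  <ul>
    <li><strong>Reason:</strong> ||REASON||</li>
    <li><strong>Status code:</strong> ||HTTPCODE||</li>
    <li><strong>Downtime so far:</strong> ||TIME||</li>
    <li><strong>Check rate:</strong> ||CHECKRATE||</li>
    <li><strong>Host:</strong> ||HOST||</li>
  </ul>
  <p>||MESSAGE||</p>
  <p style="color: #999999; font-size: 12px;">Test ||TESTID|| &middot; Tags: ||TAGS||</p>
</div>
```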
Reports work slightly differently in that they show a historic downtime record for one or more tests. These can also be fully edited in terms of appearance, title, sender address, and tags, and any changes will apply to both automatically and manually generated reports.
Reports use a different set of tags, shown in the table below. Bear in mind that these are report-specific and will not work for alert notifications – there’s a worked example of a report template after the table.
| Tag | Usage |
|---|---|
| ||CU|| | Displays the total uptime percentage for the tests in the report |
| ||WD|| | Displays the number of tests with downtime |
| ||WOD|| | Displays the number of tests with no downtime |
| ||TABLE|| | Displays a table of the tests with downtime |
| ||UPTABLE|| | Displays a table of the tests with no downtime |
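As a rough illustration, a report template might use these tags along the following lines. Again, only the ||TAG|| placeholders are StatusCake’s – the headings, copy, and layout are hypothetical, and this assumes ||TABLE|| and ||UPTABLE|| expand into ready-made tables when the report is generated.

```html
<!-- Illustrative report template sketch: only the ||TAG|| placeholders come from
     the table above; the surrounding markup and wording are assumptions. -->
<div style="font-family: Arial, sans-serif; max-width: 600px; margin: 0 auto;">
  <h1>Uptime report</h1>
  <p>
    Overall uptime across the tests in this report: <strong>||CU||</strong>.
    ||WD|| test(s) recorded downtime during this period, while ||WOD|| test(s)
    stayed up the whole time.
  </p>
  <h2>Tests with downtime</h2>
  ||TABLE||
  <h2>Tests with no downtime</h2>
  ||UPTABLE||
</div>
```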
If you’ve got questions about any of this or would like to know more, feel free to get in touch with our friendly support team, who will gladly help!