


With StatusCake you can use a variety of methods to test basic “transactions”, including forms that handle login, data protection, and other tasks.
First of all, you should assess which tools you need and where the testing should be targeted. If you are dealing with an HTML-based login form, you should submit Form POST data, and your target should be the URL of that form rather than the main page URL.
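As a rough illustration of what that means in practice (the URL, field names, and credentials below are made up), the request the test performs is conceptually similar to this:

```python
import requests

# Hypothetical example: the login form on https://mysite.com posts to /login.php,
# so the test targets that action URL rather than the homepage itself.
FORM_URL = "https://mysite.com/login.php"

# The field names and values correspond to the Form POST data entered on the test.
payload = {"username": "monitor@example.com", "password": "example-password"}

response = requests.post(FORM_URL, data=payload, timeout=10)
print(response.status_code, response.url)
```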
If it’s a basic authentication job, your URL target should be that of the main page, and you should use the basic auth fields on the test on our end to gain access.
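By way of comparison, a basic auth check is equivalent to sending the credentials along with the request for the main page itself (again, the URL and credentials here are hypothetical):

```python
import requests

# Hypothetical page protected by HTTP basic authentication.
PAGE_URL = "https://mysite.com/"

# With basic auth the target stays the main page URL; the credentials are sent
# with the request itself rather than being submitted through a form.
response = requests.get(PAGE_URL, auth=("monitor-user", "example-password"), timeout=10)
print(response.status_code)
```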
For other types of HTML form, which could serve a wide range of uses, you just need to grab the field submission names from the source code. These can again be entered in the Form POST field in valid JSON format, along with your desired values. This way you can use the feature to test almost any type of entry form.
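For example, if the page’s source contained the (made-up) form shown in the comment below, the input `name` attributes are what you would enter as JSON in the Form POST field:

```python
import json

# Hypothetical form source you might find when viewing the page:
#   <form action="/contact.php" method="post">
#     <input type="text" name="email">
#     <input type="text" name="message">
#   </form>
#
# The input "name" attributes become the keys of the Form POST data.
form_post_data = {"email": "monitor@example.com", "message": "uptime check"}

# This is the JSON you would paste into the Form POST field:
print(json.dumps(form_post_data))
# -> {"email": "monitor@example.com", "message": "uptime check"}
```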
Validating the Results
Once your form or login dialogue is being actioned, it’s time to set up validation of the process. This can be done in two ways.
String Match – Using the String Match field on the test, you can confirm the presence of one or more strings in the source of the resulting page after the process has been carried out. You can optionally be alerted when these strings are found or not found.
Final Location – With this, you can verify that the final URL in the process is the one you expect. For example, if you are expecting http://mysite.com/allgood.php but the URL reached is http://mysite.com/notgreat.php, you will receive an alert for the test.
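A rough sketch of both checks, using made-up URLs, credentials, and strings, to show what is being compared once the form or login step completes:

```python
import requests

EXPECTED_STRING = "Welcome back"                       # String Match check
EXPECTED_FINAL_URL = "http://mysite.com/allgood.php"   # Final Location check

# Hypothetical login form submission; redirects are followed automatically.
response = requests.post(
    "http://mysite.com/login.php",
    data={"username": "monitor@example.com", "password": "example-password"},
    timeout=10,
)

# String Match: alert if the expected text is missing from the resulting page.
if EXPECTED_STRING not in response.text:
    print("ALERT: expected string not found on the resulting page")

# Final Location: alert if the process ended on an unexpected URL,
# e.g. http://mysite.com/notgreat.php instead of the success page.
if response.url != EXPECTED_FINAL_URL:
    print(f"ALERT: reached {response.url} instead of {EXPECTED_FINAL_URL}")
```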