At StatusCake we’ve got a powerful and ever-growing API which allows you to automate many tasks, and also to take advantage of functions that are not available in-app.
You can communicate with our API from most common coding languages. In this article we'll provide code examples in terminal bash, PHP, and Python, as well as an example of how to run the requests manually through a common tool for such tasks: Postman.
The first thing to bear in mind when using the API is that all of your calls will need to be authenticated using your Username and API key; both of these details can be found in the user details section of your account.
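These are sent as two HTTP headers on every request, which you'll see repeated in each of the examples below (the values here are just placeholders for your own details):
[blockquote align="left" reverse="off"]
API: YOUR_API_KEY
Username: YOUR_USERNAME
[/blockquote]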
For the full list of settings that can be added along with your new test, please check this page.
When using the PHP cURL method, you'll want something very similar to the code shown below; just replace the values with your desired parameters. You can see a full list of the available parameters here.
[blockquote align="left" reverse="off"]
// Enter your personal API key and Username to use for authentication, and also the data to insert
$API = "l6OxVJilcD2cETMoNRvn";
$Username = "StatusCake";
$InsertData = array("WebsiteName" => "My new test", "Paused" => 0, "WebsiteURL" => "https://www.statuscake.com", "CheckRate" => "60", "TestType" => "HTTP");
// Create the cURL handle and set the options; remember to use the PUT request type and add the Username and API values to a header array
$ch = curl_init("https://app.statuscake.com/API/Tests/Update");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, "PUT");
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($InsertData));
curl_setopt($ch, CURLOPT_HTTPHEADER, array(
    "API: " . $API,
    "Username: " . $Username
));
// Execute the request and decode the JSON response
$Response = curl_exec($ch);
curl_close($ch);
$Response = json_decode($Response);
// Check to see if everything has worked, and print a message indicating whether this was the case
if ($Response->Success == 1) {
    echo 'Inserted Test!';
} else {
    echo 'Something went wrong<br>';
    echo $Response->Message;
}
[/blockquote]
With the command line curl method we're looking at significantly less typing. Once again, it's very important to ensure that this is always sent as a PUT request. The command should be run from a Linux-based system, accessed through SSH or something similar.
[blockquote align="left" reverse="off"]
curl -H "API: APIKEY" -H "Username: USERNAME" -d "WebsiteName=MyNewSite&WebsiteURL=https://www.statuscake.com&CheckRate=60&TestType=HTTP" -X PUT https://app.statuscake.com/API/Tests/Update
[/blockquote]
To send the HTTP PUT request required for this functionality with Python, we need to import and use the "requests" package.
[blockquote align="left" reverse="off"]
import requests
payload = {'WebsiteName': 'MySite', 'TestType': 'HTTP', 'CheckRate': 60, 'WebsiteURL': 'https://www.statuscake.com'}
headers = {'API': 'KEYHERE', 'Username': 'USERNAMEHERE'}
url = 'https://app.statuscake.com/API/Tests/Update'
r = requests.put(url, headers=headers, data=payload)
print(r.text)
[/blockquote]
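As with the PHP example, the response is JSON containing Success and Message fields. As a quick sketch (using the field names from the PHP example above), you could decode the response and check the outcome in the same way:
[blockquote align="left" reverse="off"]
# Decode the JSON response and check whether the test was inserted
result = r.json()
if result.get('Success'):
    print('Inserted Test!')
else:
    print('Something went wrong:', result.get('Message'))
[/blockquote]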
Using Postman can be a good manual method, and a way of testing your settings before automating them. Below we've included an image showing an example of how to enter the details into the Postman software; the API key and Username should be entered separately in the "Headers" section.
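For reference, the details entered into Postman correspond roughly to the following (mirroring the parameters used in the code examples above):
[blockquote align="left" reverse="off"]
Method:  PUT
URL:     https://app.statuscake.com/API/Tests/Update
Headers:
    API: APIKEY
    Username: USERNAME
Body (x-www-form-urlencoded):
    WebsiteName: MyNewSite
    WebsiteURL: https://www.statuscake.com
    CheckRate: 60
    TestType: HTTP
[/blockquote]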