Page Speed monitoring feature

Over the past few weeks we’ve been working through feature requests for our Page Speed monitoring feature, and today we’ve released the next phase, which introduces a wide range of improvements and new functionality.

Tracking Options

One of the most frequent requests we’ve had is to give our customers the ability to ensure that Page Speed testing does not affect their analytics stats.

To address this we’ve added two options: the ability to skip loading code from common trackers such as Google Analytics by default, and a function that includes a DNT (Do Not Track) header with all outgoing requests. This means you no longer need to add custom filters within your analytics account to exclude this traffic.
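For illustration only, here’s roughly what a request carrying a DNT header looks like. The sketch below uses Python’s requests library and invented helper names; it isn’t our test runner, and the tracker hosts listed are just examples of the kind of domains a test might skip.

```python
# Minimal sketch (not StatusCake's implementation) of a page fetch that
# carries a DNT header and lists tracker hosts a test might skip loading.
import requests

# Illustrative examples only of hosts a tracker-blocking test might ignore.
TRACKER_HOSTS = {
    "www.google-analytics.com",
    "www.googletagmanager.com",
}

def fetch_without_tracking(url: str) -> requests.Response:
    """Fetch a page with DNT: 1 so analytics tools can ignore the visit."""
    return requests.get(url, headers={"DNT": "1"})

response = fetch_without_tracking("https://example.com")
print(response.status_code)
```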

Data Connection Simulation/Throttling

Another big request was the ability to define the connection type used for Page Speed monitoring. You can now select one of the following options in-app for your Page Speed tests, with normal (unthrottled) speeds still available as the default option; a rough sense of what each profile means is sketched just after the list:

  • 4G
  • 3G FAST (AKA 3.5G)
  • 3G SLOW
  • EDGE (AKA 2G)
  • GPRS
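To give a feel for what each option implies, the sketch below pairs each profile with approximate bandwidth and latency figures similar to common browser dev-tools presets. These numbers are illustrative assumptions, not our exact throttling values.

```python
# Illustrative connection profiles; the figures are public approximations,
# not StatusCake's exact throttling settings.
CONNECTION_PROFILES = {
    "4G":      {"download_kbps": 9000, "upload_kbps": 9000, "latency_ms": 170},
    "3G_FAST": {"download_kbps": 1600, "upload_kbps": 768,  "latency_ms": 150},
    "3G_SLOW": {"download_kbps": 780,  "upload_kbps": 330,  "latency_ms": 200},
    "EDGE":    {"download_kbps": 240,  "upload_kbps": 200,  "latency_ms": 840},
    "GPRS":    {"download_kbps": 50,   "upload_kbps": 20,   "latency_ms": 500},
}

def estimate_transfer_seconds(page_kb: float, profile: str) -> float:
    """Rough lower bound on download time for a page weighing `page_kb` KB."""
    p = CONNECTION_PROFILES[profile]
    return (page_kb * 8) / p["download_kbps"] + p["latency_ms"] / 1000

# A 2 MB page takes at least ~21 seconds on the slow-3G profile above.
print(round(estimate_transfer_seconds(2048, "3G_SLOW"), 1), "seconds")
```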

Custom Viewport Sizes

Next, we’ve added the ability to set the screen resolution and view type for the Page Speed test, which allows you to simulate tests from different devices and screen sizes. Both desktop and mobile variants are offered, and you can see the current options below:

[Image: available desktop and mobile viewport options]
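As a rough local illustration of what viewport simulation involves, here’s a minimal sketch using Playwright, which is simply our example tooling choice here rather than a description of how the tests run on our infrastructure; the 390×844 size is an arbitrary mobile-sized viewport.

```python
# Minimal local sketch of viewport/device simulation using Playwright.
# This is an illustration, not StatusCake's test runner.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    # Hypothetical mobile-sized viewport; choose whatever matches your audience.
    context = browser.new_context(
        viewport={"width": 390, "height": 844},
        is_mobile=True,
    )
    page = context.new_page()
    page.goto("https://example.com")
    print(page.title())
    browser.close()
```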

Custom User Agent

You can now set a custom user agent, which can be useful for accessing pages with certain restrictions, or if for any other reason you’d prefer not to use our default user agent: StatusCake_Pagespeed_Indev
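In practice this just replaces the User-Agent header sent with the test. The sketch below shows the idea with Python’s requests library; the user agent string is a made-up placeholder for whatever your firewall or application expects to see.

```python
import requests

# Hypothetical custom user agent string; substitute whatever your site expects.
headers = {"User-Agent": "MyCompany-SyntheticTest/1.0"}

response = requests.get("https://example.com", headers=headers)
print(response.request.headers["User-Agent"])  # confirms the UA that was sent
```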

Custom Headers

Our final update to the Page Speed monitoring feature is the ability to send custom headers with the test; again, this enables testing on sites where it was not previously possible.
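For illustration, custom headers are simply extra key/value pairs attached to the test’s requests, which is handy for things like password-protected staging sites or cache-bypass rules. The header names and token below are placeholders, not StatusCake settings.

```python
import requests

# Placeholder headers for a protected staging site; swap in whatever your
# environment actually requires.
headers = {
    "Authorization": "Bearer <token>",
    "X-Bypass-Cache": "1",
}

response = requests.get("https://staging.example.com/pricing", headers=headers)
print(response.status_code)
```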

If you’d like to learn more about Page Speed testing check out our Knowledgebase here.
