StatusCake

Page Speed monitoring feature

Over the past few weeks we’ve been looking at feature requests for our Page Speed monitoring feature, and today we’ve released the next phase, which introduces a wide range of improvements and new functionality.

Tracking Options

One of the most frequent requests we’ve had is for a way to ensure that Page Speed testing does not skew users’ analytics stats.

To address this we’ve made two changes: by default we no longer load code from common trackers such as Google Analytics, and we’ve also added an option to include a DNT (Do Not Track) header with all outgoing requests. This means users no longer need to add custom filters within their analytics accounts to exclude this traffic.
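To picture how the DNT header helps, here’s a minimal sketch of an analytics endpoint honouring it. This is our own illustration of the general mechanism, not StatusCake’s implementation:

```python
def should_count_pageview(headers: dict) -> bool:
    """Return False for requests that opt out of tracking via DNT: 1.

    Illustrative sketch only: a self-hosted analytics endpoint that
    honours DNT will automatically exclude monitoring requests that
    carry the header, with no custom filters needed.
    """
    # HTTP header names are case-insensitive, so normalise before lookup.
    normalised = {k.lower(): v.strip() for k, v in headers.items()}
    return normalised.get("dnt") != "1"


# A Page Speed test request carrying "DNT: 1" would be skipped,
# while an ordinary visitor's request would still be counted.
```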

Data Connection Simulation/Throttling

Another big request was the ability to define the connection type used for Page Speed monitoring. You can now select one of the following options in-app for your Page Speed tests; an unthrottled connection remains the default:

  • 4G
  • 3G Fast (aka 3.5G)
  • 3G Slow
  • EDGE (aka 2G)
  • GPRS
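To get a feel for what each profile means for load times, here’s a rough sketch. The downlink figures are our own assumption, loosely based on common browser throttling presets, and are not StatusCake’s exact profile values:

```python
# Illustrative approximate downlink speeds per profile, in kilobits
# per second (assumed figures, not StatusCake's actual settings).
PROFILE_KBPS = {
    "4G": 9000,
    "3G Fast": 1600,
    "3G Slow": 400,
    "EDGE": 250,
    "GPRS": 50,
}


def estimated_download_seconds(page_kb: float, profile: str) -> float:
    """Rough transfer time for a page weighing `page_kb` kilobytes."""
    kbps = PROFILE_KBPS[profile]   # kilobits per second
    return (page_kb * 8) / kbps    # kilobytes -> kilobits, then divide
```

For example, a 2 MB page that loads in under two seconds on 4G would take roughly 40 seconds on the 3G Slow profile under these assumed speeds.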

Custom Viewport Sizes

Next, we’ve added the ability to set the screen resolution and view type for Page Speed tests. This allows you to simulate tests from different devices and screen sizes; both desktop and mobile variants are offered.
[Screenshot: available viewport size options]

Custom User Agent

You can now set a custom user agent. This can be useful for accessing pages with certain restrictions, or if for any other reason you’d prefer not to use our default user agent: StatusCake_Pagespeed_Indev
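To illustrate what overriding the user agent changes, here’s a minimal sketch using Python’s standard urllib purely for demonstration; no request is actually sent, and the override string is a hypothetical example:

```python
from urllib.request import Request

DEFAULT_UA = "StatusCake_Pagespeed_Indev"

# A request carrying the default user agent string:
req = Request("https://example.com/", headers={"User-Agent": DEFAULT_UA})

# Overriding it, e.g. for a site that blocks unfamiliar agents
# (the replacement string below is just an example):
req = Request(
    "https://example.com/",
    headers={"User-Agent": "Mozilla/5.0 (compatible; MyTest/1.0)"},
)
```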

Custom Headers

Our final update to the Page Speed monitoring feature is the ability to send custom headers with each test. Again, this enables testing on sites where it was not previously possible.
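As a quick sketch of the kind of headers this unlocks, here’s an example of building a request to a token-protected staging site. The header names, URL, and token below are hypothetical placeholders, not StatusCake specifics:

```python
from urllib.request import Request

# Hypothetical custom headers for reaching a restricted page:
custom_headers = {
    "Authorization": "Bearer <your-staging-token>",  # placeholder token
    "X-Bypass-Cache": "1",                           # hypothetical header
}

req = Request("https://staging.example.com/", headers=custom_headers)
```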

If you’d like to learn more about Page Speed testing, check out our Knowledgebase.
