StatusCake

Avoiding a Website Crash – Virgin Money Giving Runs into Trouble


Virgin Money Giving's website crashed during the London Marathon, and the outage was both embarrassing and costly. In the short term, it stopped supporters from donating to marathon participants when they wanted to give. In the long term, Virgin has taken a hit to its brand reputation that may take a while to recover from.

Of course, Virgin is not the only organization to fall victim to website crashes or slowdowns. During Black Friday last year, many large online retailers suffered the same fate. Even a degradation in page load time can be as damaging as an outright crash: customers will abandon a slow site and take their business elsewhere, and search engines will downgrade the ranking of sites with a track record of frequent crashes or slow load times.

You need to be proactive to keep your site up and running. Here are four steps that you should take:

1.  Plan for the worst-case scenario

Most businesses know when they will experience peak traffic based on previous experience. If you are an online retailer, you know what volume you handled on previous peak days such as Black Friday. Use that figure as the starting point for deciding how much traffic your site should be able to handle, with headroom for a major spike.

2.  Identify potential bottlenecks

Once you determine the peak traffic flow that you wish to accommodate, identify any bottlenecks on your website that might prevent you from handling it. Then, load test each to see if any of them fail, and make appropriate changes to eliminate those bottlenecks. Be sure to do this well in advance of when you expect your peak traffic to hit.
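One way to run a basic load test against a suspected bottleneck is to fire concurrent requests at it and measure failures and response times. The sketch below uses only Python's standard library; the URL, concurrency level, and request count are placeholders you would tune to your own traffic targets, and a dedicated load-testing tool will give you far more detail in practice.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen


def hit(url, timeout=10):
    """Issue one request and return (succeeded, elapsed_seconds)."""
    start = time.perf_counter()
    try:
        with urlopen(url, timeout=timeout) as resp:
            ok = resp.status == 200
    except Exception:
        ok = False
    return ok, time.perf_counter() - start


def load_test(url, concurrency=50, total_requests=500, fetch=hit):
    """Fire total_requests at url with the given concurrency and
    summarise failures and the 95th-percentile response time."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(fetch, [url] * total_requests))
    times = sorted(elapsed for _, elapsed in results)
    failures = sum(1 for ok, _ in results if not ok)
    return {
        "requests": total_requests,
        "failures": failures,
        "p95_seconds": times[int(len(times) * 0.95) - 1],
    }


if __name__ == "__main__":
    # Always point load tests at a staging copy of your site,
    # never at production during business hours.
    print(load_test("https://staging.example.com/checkout", concurrency=50))
```

Ramp the concurrency up in stages until either the failure count or the p95 latency becomes unacceptable; that point is your bottleneck's real capacity.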

3.  Conduct a final check

After evaluating the individual potential bottlenecks, run a complete load and stress test on your site and apps at your maximum anticipated traffic plus an extra margin of safety. A professional load test will simulate peak traffic quickly and will show you exactly what failed if your site does not pass. Once your site passes this final check, you can be confident it is ready.

4.  Have a backup plan

Sometimes, circumstances beyond your control can thwart even the most comprehensive plan, and your site will still crash. Therefore, it’s best to have a plan to help mitigate the damage if your site does go down. Consider using a website monitoring service so that you will know promptly if your site does crash. Prepare a communications plan so that you can inform your visitors and customers why your site went down, what steps you are taking to get the site back online, and how long you expect it will take for you to resume normal operations.
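The core of any monitoring service is a loop that probes your site at a fixed interval and raises an alert after several consecutive failures (rather than on a single blip). Here is a minimal sketch of that idea in Python's standard library; the URL, check interval, and failure threshold are illustrative, and a hosted monitoring service adds multi-region checks, escalation, and history on top of this.

```python
import time
from urllib.request import urlopen


def check(url, timeout=10):
    """Return True if the page responds with HTTP 200 within the timeout."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except Exception:
        return False


def monitor(url, interval=60, fail_threshold=3, max_checks=None,
            probe=check, alert=print):
    """Probe url every `interval` seconds; alert once when
    `fail_threshold` consecutive checks have failed."""
    failures = 0
    done = 0
    while max_checks is None or done < max_checks:
        if probe(url):
            failures = 0  # one good response resets the streak
        else:
            failures += 1
            if failures == fail_threshold:
                alert(f"ALERT: {url} failed {fail_threshold} consecutive checks")
        done += 1
        time.sleep(interval)


if __name__ == "__main__":
    # Hypothetical target; in production, replace alert=print with a
    # call to your paging or incident-management tool.
    monitor("https://www.example.com", interval=60, fail_threshold=3)
```

Requiring consecutive failures before alerting is the design choice that keeps a momentary network hiccup from paging someone at 3 a.m.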

When your website goes down, it’s the equivalent of a brick-and-mortar store locking its front door. Taking steps to keep your website up and running during peak traffic flows is crucial in maintaining your reputation and keeping your customers from going elsewhere.
