Website Crashes Still a Major Issue in 2016

Well, 2016 has been another banner year for major website crashes. Distributed denial of service (DDoS) attacks, such as the hugely disruptive one on Dyn, were on the rise, but many of the year's crashes had nothing to do with attacks and were entirely preventable. As in past years, the main culprit was inadequate preparation for an increase in traffic that was entirely predictable.

It is understandable that many new small businesses would not plan for traffic surges. However, just as in past years, large businesses and government agencies launched major campaigns that were guaranteed to generate huge increases in traffic, then found themselves the target of unrelenting negative publicity in the press and on social media when that traffic crashed their websites. As the late American baseball player Yogi Berra would say: “It’s déjà vu all over again.” Here are just a few of the crashes that generated a lot of frustration.

Black Friday

Every major retailer knows that traffic on Black Friday will be off the charts as consumers chase the bargains retailers have spent the previous two weeks previewing, so you would think they would be prepared. Unfortunately, some were not. On Black Friday, the websites of the following companies crashed or suffered severely degraded service: Currys, PC World, Macy’s, Quidco (a cashback site), and GAME.

The negative reaction on social media was overwhelming. Frustrated customers vowed never to shop at some of these retailers again, and others helpfully posted links to competitors’ websites that were holding up just fine under the Black Friday onslaught.

Their poor performance on Black Friday hurt these firms in two ways. First, they lost an undetermined amount of sales to competitors on the day itself because customers were unable to complete their purchases. Second, their reputations suffered, costing them future sales as well. Black Friday certainly was “black” for those firms, but for the wrong reasons.

Cabinet Office website crash

In the run-up to the EU referendum in June, voters were encouraged to register online. Traffic on gov.uk climbed steadily during the week before the registration deadline, eventually exceeding 200,000 users per hour. The site could not handle the spike and crashed on deadline day, leaving many people unable to register in time to vote.

Frustrated voters took to social media to vent their anger and demand an extension. After the furor, the government agreed to push the registration deadline back by two days.

As these examples show, users get very frustrated when a website goes down. Websites can go down for many reasons, not just traffic spikes; sometimes the cause is a technical issue at your hosting company over which you have no control. In any event, you need to know promptly when your site is down so you can take corrective action and keep customer dissatisfaction to a minimum; a website monitoring service will alert you the moment an outage starts.
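For the curious, the core of such a check is simple enough to sketch. The snippet below is a minimal, illustrative Python example, not how any particular service works; the URL, polling interval, and timeout are placeholder values, and a real monitoring service layers alerting channels, multi-location checks, and outage history on top of something like this:

```python
import time
import urllib.request
import urllib.error

# Hypothetical values -- substitute the site you actually want to watch.
URL = "https://www.example.com"
CHECK_INTERVAL_SECONDS = 60   # how often to poll the site
TIMEOUT_SECONDS = 10          # a response slower than this counts as "down"


def site_is_up(url: str) -> bool:
    """Return True if the site answers with a successful HTTP status."""
    try:
        with urllib.request.urlopen(url, timeout=TIMEOUT_SECONDS) as response:
            return 200 <= response.status < 400
    except (urllib.error.URLError, TimeoutError):
        # Covers DNS failures, refused connections, HTTP 4xx/5xx errors
        # (raised as HTTPError, a URLError subclass) and timeouts.
        return False


if __name__ == "__main__":
    while True:
        if not site_is_up(URL):
            # A real monitor would page someone here (email, SMS, webhook).
            print(f"ALERT: {URL} appears to be down")
        time.sleep(CHECK_INTERVAL_SECONDS)
```

Even a script this small illustrates the key design point: the check has to run continuously and from outside your own infrastructure, because a monitor that lives on the same servers as your site goes down with it.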
