Want to know how much website downtime costs, and the impact it can have on your business?
Find out everything you need to know in our new uptime monitoring whitepaper 2021



The contentious US presidential election is over and, contrary to the expectations of most people, Donald Trump won. Canada was among the first to experience the effects of the result. About two hours after polls closed in the eastern US and it became apparent that Trump would win, Canada’s main immigration website crashed.
There had been many tongue-in-cheek articles (and a few serious ones) in US newspapers and online discussing how large numbers of Americans would move to Canada if Trump won the election. Also, a Canadian radio host impishly launched a website in February pitching Cape Breton Island, located off Canada’s east coast, as an ideal location for American refugees to make their new home. As it turned out, there was quite a bit of interest in immigration on election night.
Traffic on the Citizenship and Immigration Canada website surged from a typical 17,000 visitors to more than 200,000 at the time of the crash on election night, with about 50% of the traffic coming from the US. After the crash, visitors to the site saw a message stating, “there is a problem with the resource you are looking for, and it cannot be displayed,” and they could not access any content.
Canada’s immigration website was not the only one experiencing an increase in traffic on election night. The Telegraph reported that Google experienced a significant increase in searches for terms like “how to emigrate to Canada” and “emigrate,” and the BBC reported that Immigration New Zealand (INZ) experienced a 2,500% increase in traffic on election day.
As the recent crash of Canada’s immigration website shows, unforeseen events can cause a surge in traffic, and sometimes increased traffic does more harm than good. A brief outage usually does little harm, but it can badly tarnish your image if it happens during a time of peak traffic. Fortunately, there are a few things you can do to help prevent your site from crashing when it receives an unexpectedly large increase in traffic.
Google’s Webmaster Central Blog suggests several best practices you can implement to prepare your site for a traffic surge. Consider preparing a lightweight version of your site that you can switch to if you begin to experience a spike in traffic. Hosting forms on a third-party server can help your server cope with increased traffic loads.
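The lightweight-version idea can be as simple as switching templates once traffic passes a comfort level. Here is a minimal sketch; the threshold, counter, and template names are illustrative assumptions, not tied to any particular framework:

```python
# Sketch: serve a stripped-down page template once concurrent traffic
# passes a threshold. The threshold value and template file names are
# hypothetical, for illustration only.
def choose_template(active_requests: int, threshold: int = 1000) -> str:
    """Return the lightweight template when load reaches the threshold."""
    return "lite.html" if active_requests >= threshold else "full.html"
```

In practice the request counter would come from your server metrics, and "lite.html" would omit heavy images, third-party scripts, and non-essential widgets.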
Also, optimize your image files by compressing them and having them automatically adjust to the appropriate size for display on mobile devices. Consider using your server’s cache for static content, as this will help keep your server’s capacity available in high-traffic situations.
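One way to keep static content out of your application's critical path is to mark it cacheable via HTTP headers, so browsers and CDNs can serve it without hitting your server. A small sketch follows; the extension list and max-age values are illustrative assumptions:

```python
# Sketch: choose a Cache-Control header value per request path, so static
# assets can be served from browser/CDN caches during traffic spikes.
# The extension set and max-age values are illustrative assumptions.
STATIC_EXTENSIONS = {".css", ".js", ".png", ".jpg", ".svg", ".woff2"}

def cache_control_for(path: str) -> str:
    """Return a Cache-Control header value based on the requested path."""
    ext = path[path.rfind("."):].lower() if "." in path else ""
    if ext in STATIC_EXTENSIONS:
        return "public, max-age=86400"  # let caches keep assets for a day
    return "no-cache"                   # dynamic pages: always revalidate
```

The longer the max-age, the more traffic your caches absorb, at the cost of slower rollout of asset changes; versioned filenames are the usual workaround.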
You never know when the unexpected will happen, so consider using a website monitoring service that will promptly notify you if your site goes down. The sooner you are aware of a problem, the sooner you can act to fix it.