The Impact of Website Downtime on SEO

When your website goes offline there are several immediate priorities to consider, such as the loss of traffic, sales, and leads. However, another factor to consider is how a period of downtime will affect your standing in search engines such as Google.

Website downtime correlates directly with lost rankings in search, which in turn leads to a long-term loss of traffic for weeks and months after the initial period of downtime.

But what if your website only goes offline for a short period of time? Would you still face a decline in your hard-won search rankings? In this article, we take a look at how Google analyses and interprets downtime, and what some of the senior figures at Google have said about how downtime relates to SEO.

Website crawling and Googlebot

When your website goes offline, how will Google know there is an issue?

Google indexes websites for its search engine using Googlebot – a website crawler that collects data from across the web to add to Google's search index.

If your website is experiencing a period of downtime when Googlebot comes to crawl your site, it will be met with the same error code as an end-user, most likely a 500 Internal Server Error response.
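
To see what a crawler would receive, you can request a page and inspect the status code yourself. Below is a minimal sketch in Python using the requests library; the URL is a placeholder, and anything outside the 2xx range is treated as a problem.

```python
import requests

URL = "https://www.example.com/"  # placeholder: swap in the page you want to check

try:
    # Googlebot receives the same HTTP status code as any other client,
    # so a plain GET request shows roughly what a crawler would see.
    response = requests.get(URL, timeout=10)
    if 200 <= response.status_code < 300:
        print(f"OK: {URL} returned {response.status_code}")
    else:
        # A 5xx response here is what Googlebot would be served during downtime.
        print(f"Problem: {URL} returned {response.status_code}")
except requests.RequestException as exc:
    # No response at all (DNS failure, timeout, refused connection) is also downtime.
    print(f"Request failed: {exc}")
```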

Just as an end-user will react negatively to a website that is unavailable, so too will Googlebot. A study by SEO specialists Moz found that intermittent 500 internal server errors caused tracked keywords to plummet in search, often dropping out of the top 20 completely.

The study also found that affected pages eventually received fewer crawls per day, suggesting that Googlebot visits a page less frequently the more often it is served a server error. In other words, the damage done by downtime compounds the longer the outage lasts: fewer crawls mean it takes longer for rankings to recover once the site is back online.

Google’s user-friendly mission

Why do websites that are frequently offline decline in rankings over time?

The most obvious answer is Google’s obsessive focus on providing users of the search engine with the best possible user experience.

The Google search algorithm calculates the rank of pages for any given search term based on hundreds of ranking factors, each weighted according to the importance Google places on it. Many of the most important ranking signals, such as page speed, mobile-friendliness, and bounce rate, are factors that help to evaluate the overall experience a particular page is likely to provide to the end-user.
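
To make the idea of weighted ranking factors concrete, here is a toy sketch. The signal names, weights, and scores are entirely invented for illustration and bear no relation to Google's actual algorithm; the point is simply how a weighted combination of signals penalises a page with a poor availability score.

```python
# Toy illustration only: Google's real algorithm uses hundreds of signals
# with undisclosed weights; these names and numbers are invented.
SIGNAL_WEIGHTS = {
    "page_speed": 0.4,
    "mobile_friendliness": 0.35,
    "availability": 0.25,   # a page that is often unreachable scores poorly here
}

def toy_rank_score(signals: dict) -> float:
    """Combine per-signal scores (0.0 to 1.0) into a single weighted score."""
    return sum(SIGNAL_WEIGHTS[name] * value for name, value in signals.items())

healthy_page = {"page_speed": 0.9, "mobile_friendliness": 0.8, "availability": 1.0}
flaky_page   = {"page_speed": 0.9, "mobile_friendliness": 0.8, "availability": 0.3}

print(toy_rank_score(healthy_page))  # higher score
print(toy_rank_score(flaky_page))    # lower score, purely from poor availability
```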

As such, a site that is frequently found to be offline by Googlebot will be evaluated as serving a sub-optimal user experience, and this will eventually be reflected in its ranking in search if the issue is not addressed in time.

What Google says about website downtime

Top Google employees are often pretty tight-lipped about search engine ranking factors, so when they do speak, the SEO world listens.

Matt Cutts was one of the most senior figures in the search team at Google and provided a lot of insight into how downtime can affect your rankings. Cutts explained that if your website is down for just a day, there is unlikely to be any negative impact on your search rankings. However, an extended period of downtime, stretching over days or weeks, could result in your website losing search rankings, for the simple reason that Google does not want to send users to a website that is frequently offline.

Cutts also said that Google makes allowances for websites experiencing sporadic downtime, with Googlebot generally returning to a site that was offline 24 hours later to check whether it is back online.

Google's John Mueller offered a slightly different perspective. He has said that search rankings will see a period of flux, lasting one to three weeks, after just a single day of server downtime. As long as the downtime lasts no longer than a day, rankings will return to normal levels once that period of flux is over.

The delay in rankings returning to normal comes from Googlebot having to recrawl the site and make a judgment on its stability. This is also why pages or websites that are frequently offline decline so significantly in rankings over time: Googlebot reduces its crawl frequency and may even de-index a site it believes to be permanently offline.

In conclusion, both third-party studies such as Moz's and the words of senior Google employees show that website downtime can wreak havoc on your hard-won search rankings. This is not an absolute rule, however: allowances are clearly made for websites that experience a short period of downtime, although even then rankings may fluctuate for a few weeks before returning to normal. It is websites that experience sustained periods of downtime that are most liable to see a significant drop in search rankings, and the damage appears to increase in severity the longer a website is found to be offline.

StatusCake provides a suite of uptime monitoring tools that are easy to set up and use, and provide you with the insights you need to prevent website downtime. Our free plan includes a range of free tools, including page speed monitoring, while our paid plans include SSL Monitoring, Server Monitoring, Domain Monitoring, and Virus Scanning.
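
For a sense of what an uptime check does under the hood, here is a minimal sketch (not StatusCake's implementation): it polls a placeholder URL at a fixed interval and raises an alert only after several consecutive failed checks, so a single blip does not trigger a false alarm. The interval and failure threshold are assumptions chosen for this example.

```python
import time
import requests

URL = "https://www.example.com/"     # placeholder, not a real monitored site
CHECK_INTERVAL_SECONDS = 60          # assumed polling interval for this sketch
FAILURES_BEFORE_ALERT = 3            # assumed threshold to avoid one-off blips

def is_up(url: str) -> bool:
    """Return True if the page answers with a 2xx status code."""
    try:
        response = requests.get(url, timeout=10)
        return 200 <= response.status_code < 300
    except requests.RequestException:
        # Timeouts, DNS failures and refused connections all count as downtime.
        return False

def monitor(url: str) -> None:
    consecutive_failures = 0
    while True:
        if is_up(url):
            consecutive_failures = 0
        else:
            consecutive_failures += 1
            if consecutive_failures == FAILURES_BEFORE_ALERT:
                # In a real system this would page someone or send an email.
                print(f"ALERT: {url} has failed {consecutive_failures} checks in a row")
        time.sleep(CHECK_INTERVAL_SECONDS)

if __name__ == "__main__":
    monitor(URL)
```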
