The UK’s Information Commissioner’s Office (ICO) announced on Friday last week that it had served Google with an enforcement notice requiring the search-engine giant to delete all so-called “payload data” it collected as part of its Street View project.
The ICO, the UK’s privacy regulator set up with a remit to “uphold information rights in the public interest, [and] promoting openness by public bodies and data privacy for individuals”, gave Google just 35 days to comply with the notice.
Although Google has escaped a fine over this data breach (according to the BBC, the breach failed to “meet the level required to issue a monetary penalty”), failure to comply with the enforcement notice is a serious matter. According to Stephen Eckersley, the ICO’s Head of Enforcement, non-compliance would be considered contempt of court, a criminal offence under UK law.
This latest, and likely final, ruling over Google’s 2010 UK Street View activities comes after a protracted investigation. The initial investigation was instigated only after Google admitted on its official blog that its Street View cars had accidentally collected information from unencrypted Wi-Fi networks.
At the time the ICO believed that the payload data didn’t contain any “meaningful personal details” and that the data couldn’t be used to identify any individuals. Although the ICO decided to take no action, the US Federal Communications Commission (FCC) carried out its own investigation, ultimately fining Google $25,000 for deliberately obstructing and delaying the FCC’s inquiry. The German authorities likewise took a dim view of Google’s activities. Imposing a fine of £128,000, the maximum permissible under German data and privacy laws, the regulator described the Street View data collection as one of the “biggest known data protection violations in history.”
It is the publication of the FCC report, and in particular the concerns raised about the actions of the Google engineer who developed the Street View data collection software, that prompted the ICO to re-open its investigation.
The ICO felt that the lack of proper supervision of this engineer, including an audit of the Street View software code to determine exactly what it did, was a serious procedural and management failing. Although Google had intended only to map the location of Wi-Fi networks, a piece of code created by Google engineer Marius Milner had the unintended consequence of also collecting vast amounts of private data sent by individuals over their unsecured Wi-Fi connections.
The ICO’s investigation fell short, however, of stating, as others have, that Google has a corporate policy of collecting as much information as it can about individuals, and of dealing with the issue of privacy and the law if and when it gets caught.
In a shot across the bows of Google, the ICO said that going forward it would take a “keen interest” in the operations of Google and would “not hesitate to take action if further serious compliance issues [came] to its attention.”
The pan-European investigation by data regulators into whether Google’s privacy policy adequately and clearly explains to users how their information is being collected and used across Google products and services continues. It is understood that the UK’s ICO will shortly be writing to Google to explain its initial findings.
James Barnes, StatusCake.com