
Google Glass – The All-Seeing, Ever-Recording Eye


Gadget fiends got their first real-life glimpse of Google Glass in January this year, when Google co-founder Sergey Brin was spotted riding the New York subway wearing a pair of Google’s new glasses.

Since then the excitement about this cool new gadget has been tempered somewhat by an equally loud backlash from some technology and privacy commentators, who argue that the new level of engagement and interactivity with our “real world” that Google Glass offers is something to be concerned about, not embraced.

Google Glass – a pair of glasses with a miniaturized web camera and browser – lets you walk along the street, or indeed be wherever you are, and, like a fighter pilot, use a head-up display to surf the web and receive email alerts and social media updates, much of it voice-controlled. All very exciting, surely?

But what seems to have got everyone concerned is the ability to record everything that is going on around you. Of course, we can already record the world around us with our smartphones. Most breaking news events already rely on footage shot by members of the public – so-called citizen journalists – whose first reaction to any event tends to be to grab the smartphone first, and help later!

So surely the mere ability to record content can’t be the issue for privacy advocates? Nor does it really stack up to say that when someone records on a smartphone it’s more “obvious” – that you can spot who is filming you. I’m not sure that anyone wearing Google Glass is going to blend into the background – in the short term at least.

The excitement around Google Glass

Perhaps people are more worried about the sinister ways in which Google itself may use Google Glass data. After all, Google is a data company. It lives and breathes data – and whilst it argues much of the time that the data it collects is simply used to improve its search engine algorithms, it does seem to have form for grabbing data – whether it has permission or not – with a view to storing it, even if it hasn’t at that point decided what it wants to do with it.

Critics of Google, and of Google Glass, point to the way in which Google Books scanned thousands upon thousands of books, many of them still in copyright, without their authors’ permission. They point to the Google Street View project: after the first round of Street View got off the ground, many homeowners “removed” themselves from the project, yet Google, having just republished fresh images, appears to have ignored those opt-outs. That, combined with Google’s interception of passwords and other personal and sensitive data from Wi-Fi networks during Street View, has put Google on a collision course with regulatory authorities in countries such as Germany and, more recently, with the UK’s Information Commissioner’s Office.

So the argument seems to be that it’s not being filmed per se that people are concerned about – after all, in the UK we have almost 2 million CCTV cameras (and whilst CCTV is less common in the US, its use is growing – New York has around 3,000 cameras and Chicago 10,000) – but that CCTV is perceived to be there to protect us from crime. We hope, at least, that if we’re caught by a CCTV camera going into Tesco, we’re not suddenly going to find that information appearing on our Facebook page, or marketing from Tesco and its competitors coming through our letter box like confetti.

And that is the crux of the issue for privacy campaigners. Will vast streams of information be used to market to us? Will we suddenly find status updates about us appearing on social media, even though we’ve not posted them ourselves?

Beyond the fears of becoming targets for real-time personalized advertising, will Google Glass lead to a change in behavior? As Google Glass becomes more widespread, will people think twice about their behavior when they’re in public, for fear of being filmed? And if so, is that a bad thing? Might we see more people queuing patiently in a store, or more people giving up a seat for an elderly person on a train? That’s great – surely that makes Google Glass the nudge towards a better society!?

Well, maybe not quite that. But what’s for certain is that although there will almost certainly be headlines about individual privacy breaches involving Google Glass, individuals in Europe claiming that Google Glass breaches their right to privacy under Article 8 of the European Convention on Human Rights (otherwise known as pay-day for lawyers!), and an ongoing battle between Google and government regulators, none of that will spell the end of Google Glass. These devices are here to stay – and will be hugely popular as prices come down. And for many individuals, particularly younger ones, sharing information and using social media is a deeply ingrained part of life. They’ll embrace Google Glass, not reject it.

James Barnes, StatusCake.com
