Researchers at the Fraunhofer Institute in Germany have come up with a novel way of fighting eBook piracy with a trick that pays homage to more old-fashioned techniques for protecting copyright works.
The project “Secure Documents by Individual Marking”, or SiDim, is a type of digital rights management (“DRM”) which takes the original eBook and slightly alters the text for each copy of the book sold, so that each copy is ever so slightly different – and unique.
The project, which has secured backing from the German government as well as the media and publishing industry, doesn't use technical measures to block copies from being made. Instead, because each book is unique, with its own fingerprint or DNA, should that book later turn up on a file-sharing site, for example, the copyright owner can immediately tell who it was sold to. The theory is that people will simply not upload and share their books for fear of being caught.
SiDim changes words within the book – for instance, it may swap the word "little" for "small", or "not visible" for "invisible"; the punctuation may be slightly changed, or the grammar and construction of a sentence altered.
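The idea can be sketched in a few lines of Python. This is purely an illustration of the general technique, not the actual SiDim implementation: the substitution pairs, function names, and the idea of deriving choices from a hash of the buyer's ID are all invented for the example. Each buyer's ID deterministically selects which substitutions are applied, so a leaked copy can be matched back to the buyer (or buyers) whose copy it is.

```python
import hashlib

# Toy word pairs at which a copy may differ (invented for this example;
# real systems would choose sites far more carefully to preserve meaning).
VARIANT_SITES = [
    ("little", "small"),
    ("not visible", "invisible"),
    ("perhaps", "maybe"),
]

def fingerprint_copy(text: str, buyer_id: str) -> str:
    """Produce this buyer's unique copy: one bit of the buyer-ID hash
    decides whether each substitution site uses the variant wording."""
    digest = hashlib.sha256(buyer_id.encode()).digest()
    for i, (original, variant) in enumerate(VARIANT_SITES):
        bit = (digest[i // 8] >> (i % 8)) & 1
        if bit:
            text = text.replace(original, variant)
    return text

def identify_buyers(leaked: str, buyer_ids: list[str], original: str) -> list[str]:
    """Re-derive every buyer's copy and return those matching the leak.
    With only a few sites, distinct buyers can share a pattern, so this
    returns a list of candidates rather than a single name."""
    return [b for b in buyer_ids if fingerprint_copy(original, b) == leaked]
```

A real deployment would need many more substitution sites spread through the book, so that each sold copy's pattern is unique and survives partial copying; the hash-derived bits here just make the point that the buyer ID, not randomness, drives the variation.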
How authors will in practice react to this remains to be seen. For some authors even subtly changing the words, punctuation or construction of sentences could be a step too far. Even if it doesn’t dramatically change the book, they may feel that this is altering their artistic work.
Researchers, keenly aware that attempting to stop copying by technological means simply becomes an arms race, have realised that the better approach is simply to allow copying, but make it clear that if you do copy without permission you’re likely to get caught.
These copyright traps, or copyright Easter eggs, are similar to those often used by cartographers to "trap" anyone who copies their maps. A "trap street", for example, is a made-up street that does not exist – or is depicted on the map differently to how it should be – so that if the map is copied by a third party, it's immediately obvious where it has been copied from.
There are many examples of copyright Easter eggs, including on Google Maps – just one example being Moat Lane, Finchley, London N3 in the UK, which does not actually exist. Nor are they confined to maps – dictionary compilers also use them. The New Oxford American Dictionary includes an entry for the fictitious word "esquivalience".
So do these copyright traps work? Absolutely. The use of "esquivalience" – which, fittingly, was said to mean "the wilful avoidance of one's official responsibilities" – was enough to entrap Dictionary.com.
Main Image – The "town" of Argleton appears in the middle of a field in Lancashire, UK on Google Maps but does not exist. The clue is in the name – "Not Real G".