Researchers at the Fraunhofer Institute in Germany have come up with a novel way of fighting eBook piracy with a trick that pays homage to more old-fashioned techniques for protecting copyright works.
The project “Secure Documents by Individual Marking”, or SiDim, is a type of digital rights management (“DRM”) which takes the original eBook and slightly alters the text for each copy of the book sold, so that each copy is ever so slightly different – and unique.
The project, which has secured backing from the German government as well as the media and publishing industry, doesn’t use technical measures to try to block copies being reproduced. Instead, because each book is unique, with its own fingerprint or DNA, should a copy later turn up on a file-sharing site, the copyright owner can immediately tell who it was originally sold to. The theory is that people simply will not upload and share their books for fear of being caught.
SiDim changes words within the book – for instance, it may swap “little” for “small” or “not visible” for “invisible”; the punctuation may be slightly changed, or the grammar and construction of a sentence altered. A rough sketch of the idea follows below.
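SiDim’s actual implementation has not been published in detail, but the underlying idea – deriving a unique pattern of harmless wording choices from the buyer’s identity – is easy to illustrate. The variant sites, substitution pairs and buyer ID in the Python sketch below are invented purely for illustration; this is a minimal toy, not the project’s method.

```python
import hashlib

# Hypothetical "variant sites": places in the text where either wording reads
# naturally. Each sale picks one option per site, driven by a hash of the
# buyer's ID, so every copy carries a recoverable fingerprint.
VARIANT_SITES = [
    ("little", "small"),
    ("not visible", "invisible"),
    ("did not", "didn't"),
]


def fingerprint_bits(buyer_id: str, n: int) -> list[int]:
    """Derive n deterministic bits from the buyer's ID."""
    digest = hashlib.sha256(buyer_id.encode()).digest()
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(n)]


def watermark(text: str, buyer_id: str) -> str:
    """Produce a per-buyer variant of the text, one wording choice per site."""
    bits = fingerprint_bits(buyer_id, len(VARIANT_SITES))
    for (original, alternative), bit in zip(VARIANT_SITES, bits):
        if bit:
            text = text.replace(original, alternative)
    return text


def recover_bits(leaked_text: str) -> list[int]:
    """Read the fingerprint back out of a leaked copy."""
    return [
        1 if alt in leaked_text and orig not in leaked_text else 0
        for orig, alt in VARIANT_SITES
    ]


if __name__ == "__main__":
    source = "The mark was little and not visible, so she did not notice it."
    copy_for_alice = watermark(source, "alice@example.com")
    print(copy_for_alice)
    # The bits recovered from the leaked copy identify the original buyer.
    print(recover_bits(copy_for_alice) ==
          fingerprint_bits("alice@example.com", len(VARIANT_SITES)))
```

In a real system there would be far more variant sites than buyers’ fingerprints need, so a leaked copy could be traced even if a pirate manually rewrote some passages.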
How authors will react to this in practice remains to be seen. For some, even subtly changing the words, punctuation or construction of sentences could be a step too far: even if it doesn’t dramatically change the book, they may feel it is altering their artistic work.
Researchers, keenly aware that attempting to stop copying by technological means simply becomes an arms race, have realised that a better approach is to allow copying, but make it clear that anyone who copies without permission is likely to get caught.
These copyright traps, or copyright Easter eggs, are similar to those long used by cartographers to “trap” anyone who copies their maps. A “trap street”, for example, is a made-up street that does not exist – or is depicted on the map differently from how it should be – so that if the map is copied by a third party it’s immediately obvious where it has been copied from.
There are many examples of copyright Easter eggs, including on Google Maps – just one being Moat Lane, Finchley, London N3 in the UK, which does not actually exist. Nor are these confined to maps – dictionary compilers also use them. The New Oxford American Dictionary includes an entry for the fictitious word “esquivalience.”
So do these copyright traps work? Absolutely: the use of “esquivalience” – which, fittingly, was said to mean “the wilful avoidance of one’s official responsibilities” – was enough to entrap Dictionary.com.
Main Image – The “town” of Argleton appears in the middle of a field in Lancashire, UK on Google Maps but does not exist. The clue is in the name – an anagram of “Not real G”.