42 may be “the Answer to the Ultimate Question of Life, the Universe, and Everything”, and according to De La Soul, 3 may be the “Magic Number”, but when it comes to SaaS KPIs there’s only one number you need to keep at the front of your mind: 78.
78 is the magic number in SaaS for predicting the new MRR (monthly recurring revenue) you need to add month in, month out to reach your ARR (annual recurring revenue) goal for the next year. Simply subtract last year’s ARR from your target ARR and divide by 78. It really is that simple.
So let’s give it a go. Assume your target this year is $1m ARR and that last year you hit $610k ARR. What new MRR do you need to add each month to reach your goal?
| Figure | Amount |
| --- | --- |
| 2016 ARR goal | $1,000,000 |
| 2015 ARR | $610,000 |
| Jump in ARR for 2016 | $390,000 |
| Target new MRR | ($1,000,000 − $610,000) / 78 = $5,000 per month |
Still not convinced? Here’s how it works, with even simpler numbers. Say you sign up one new customer each month, each paying you $1 per month, and none of them ever churn. The customer who signs up in January is worth $12 to you over the year; the February customer $11; and so on, right down to December’s customer, who earns you just $1. Add it all up (12 + 11 + 10 + … + 1) and there you have it: $78!
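If you’d like to sanity-check that arithmetic yourself, here’s a quick sketch in Python (purely illustrative, names are ours) that simulates a year of signing up one $1/month customer per month with zero churn:

```python
# One new $1/month customer signs up each month and nobody churns.
# January's signup pays for 12 months, February's for 11, ... December's for 1.
monthly_revenue_per_customer = 1

total = sum(
    monthly_revenue_per_customer * months_remaining
    for months_remaining in range(12, 0, -1)  # 12, 11, ..., 1
)
print(total)  # 78 -- the "magic number"
```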
Can’t be bothered to do the math? Use our SaaS Rule of 78 Calculator below to work out your target MRR!
[Interactive calculator: enter Last Year’s ARR and Your Target ARR to get Your Target MRR.]
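And if you’d rather script it than use the widget, the whole calculator boils down to one line. A minimal sketch in Python, assuming the inputs from the worked example above (the function name is ours, not part of any library):

```python
def target_new_mrr(last_year_arr: float, target_arr: float) -> float:
    """Rule of 78: the new MRR you must add each month to close the ARR gap."""
    return (target_arr - last_year_arr) / 78

# The worked example from above: $610k ARR last year, $1m ARR target.
print(target_new_mrr(610_000, 1_000_000))  # 5000.0 -> $5,000 of new MRR per month
```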
It’s that simple. Now get selling!