How we score OKRs at FreeAgent

This content was originally published at: https://medium.com/@freeagentapp/how-we-score-okrs-at-freeagent-3c50bebdeb22
Image credit: Ellie Morag

Objectives and Key Results (OKRs) have become an invaluable planning framework at many companies, improving clarity of priorities, ensuring focus and helping alignment across teams.

We’ve been using OKRs at FreeAgent since 2013, and we’ve learned a huge amount about what to do (and what not to do) if you want to be successful with them.
This post isn’t about how OKRs work or why to use them, but rather one aspect that we’ve found to be particularly challenging as our company grows: scoring OKRs. Regularly scoring OKRs allows us to track progress, and change priorities or resourcing if we feel we’re not going to achieve an objective.
In the past, different departments and teams all had slightly different interpretations of how to set, measure and score OKRs, meaning it’s been difficult to get a consistent picture of progress and impact across the company.
To resolve this we’ve developed a framework for scoring OKRs that removes much of the subjective guesswork and makes it easier for people to score them consistently. Every company is different of course, but here’s what works for us.
If you’re familiar with the Google ethos of scoring OKRs then you’ll see this is subtly different but, after much tinkering and experimentation, it’s what works for us.

Getting to grips with the basics

So you know the drill. We have a qualitative, inspirational objective that motivates the team and clearly describes what we want to achieve, and 3–5 key results that quantitatively describe the measurements of success for that objective.
Here’s what it might look like:
Objective: Customers love our new mobile app!
Key results
  • Mobile app available in iOS and Android app stores
  • 10,000 downloads by the end of the quarter
  • Average 4 star reviews
  • 30% monthly active users
At FreeAgent, OKRs are set quarterly across the company, and each key result is scored on a scale of 0 to 10 at various points throughout the quarter, with scores recorded in a company-wide Google Sheet. A 0 means a total fail and a 10 represents an epic win, but there’s a surprising amount of nuance in scoring, as we’ll see.


Impact or task?

The first step is recognising that there are generally two kinds of key result:
  • Impact key results: key results that describe the quantitative impact of the work e.g. ’24 enterprise sales’ or ‘20% reduction in churn’.
  • Task key results: key results that describe an action or output e.g. finish project X or launch feature Y.
In the above example we see both types:
  • Mobile app available in iOS and Android app stores << Task
  • 10,000 downloads by the end of the quarter << Impact
  • Average 4 star reviews << Impact
  • 30% monthly active users << Impact
OKR purists will argue that you should only ever set impact key results, and that task key results are an anti-pattern to be avoided at all costs. This is sound guidance, but in my experience it’s not always possible to articulate every objective in terms of neat, quantifiable key results.
If you’re working on a project that spans multiple quarters, for example, it may be impossible to understand the impact of the work until after the initial quarter. You can sometimes think of alternative measurements that are indicative of future impact (positive feedback on a prototype, for instance), but this just isn’t always possible.
Ultimately it comes down to your commitment to OKR dogma, but at FreeAgent we take a slightly more relaxed approach to this, and allow task key results if measuring impact isn’t practical within the timeframe.
Ideally though, task key results should be the exception and not the rule.


Impact OKR scoring

Ideally, then, we set all of our key results in terms of the measurable impact we hope to achieve. To score these key results we use the following formula:
(Impact × Confidence) × 10 = key result score
Where impact and confidence are scored between 0 and 1 as follows:
  • Impact = the impact we think we’ll achieve at the end of the quarter, relative to the original key result (e.g. if we think we’ll achieve 5000 downloads from an initial target of 10,000 then impact = 0.5)
  • Confidence = our confidence level that we’ll achieve the above impact by the end of the quarter (e.g. if we have 80% confidence that we’ll achieve 5000 downloads by the end of the quarter then confidence = 0.8)
So if we were 80% confident in achieving those 5000 downloads our score would be:
(0.5 × 0.8) × 10 = 4
At the start of the quarter our impact score should always be 1, as we’ve only just set the target, but as the quarter progresses this may change, as we’ll see in a minute.
Our confidence score will also change throughout the quarter, but it will always end up at 1 by the end: whatever impact we actually achieved, there’s no more uncertainty left, so confidence is 1.
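To make the arithmetic concrete, here’s a minimal sketch of the formula in Python (the function name and numbers are purely illustrative, not part of our actual tooling):

def key_result_score(impact, confidence):
    # Both inputs sit between 0 and 1; the result lands on the 0-10 scale.
    return impact * confidence * 10

# The downloads example: we expect 5,000 of a 10,000 target (impact = 0.5)
# and we're 80% confident we'll get there (confidence = 0.8).
print(key_result_score(0.5, 0.8))  # => 4.0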


Task OKR scoring

When scoring task key results we use the same scoring system:
(Impact × Confidence) × 10 = key result score
The confidence score works in exactly the same way as it does for impact key results, but the impact measurement is slightly different.
You could argue task completion is binary. Either you do something or you don’t, so you would either score 1 or 0. I’ve found this to be unhelpfully reductionist.
In reality there’s a big difference between missing a project completion date by a couple of days and never even starting the project. Recognising this can help with team morale as well as progress visibility across the company.
Because of this, when it comes to task OKRs we use a sliding scale for impact, describing how much of the task (and therefore its impact) we think we’ll achieve by the end of the quarter. As before, we score impact and confidence at different points throughout the quarter.
For example, if we’re halfway through the quarter and we now have 90% confidence that we’ll manage most of the task, then we would set the impact at 0.7 and confidence at 0.9, so the score would be:
(0.7 × 0.9) × 10 = 6.3, or roughly 6
The important thing about task key results is that we score based on the likely task completion by the end of the quarter, not on the progress to date. This isn’t an exact science by any means but it allows teams to score task completion at the end of the quarter without resorting to a simple yes or no.
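As a rough illustration of task scoring under the same formula, here’s a small sketch; the completion-to-impact scale below is assumed for the example (only the 0.7 figure for ‘most of the task’ comes from the numbers above):

# Assumed mapping from likely task completion to an impact value; use
# whatever scale your team agrees on. Only 'most of the task' = 0.7 is
# taken from the worked example above.
task_impact = {
    "not started": 0.0,
    "some progress": 0.3,
    "about half": 0.5,
    "most of the task": 0.7,
    "fully complete": 1.0,
}

# Halfway through the quarter, 90% confident of managing most of the task:
impact = task_impact["most of the task"]
confidence = 0.9
print(round(impact * confidence * 10, 1))  # => 6.3, roughly 6 on the 0-10 scale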


When to score OKRs

At FreeAgent we score our key results at four points during the quarter.
The first score happens when we set the key results at the start of the quarter, so our initial impact should always be 1.
The final score happens at the end of the quarter, and as there’s nothing more that can be done at that point, the confidence should always be 1, regardless of the impact achieved.
We record scores in a Google Sheet which contains the OKRs for the entire company, and it’s useful to see how the scores change from month to month.
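As an illustration, here’s a short sketch of how a single key result’s score might move across those four check-ins; the intermediate impact and confidence figures are invented for the example, but the start and end follow the rules above (impact starts at 1, confidence ends at 1):

# (label, impact, confidence) for one key result at each check-in.
check_ins = [
    ("Start of quarter", 1.0, 0.7),   # target just set, so impact = 1
    ("First check-in",   0.8, 0.7),
    ("Second check-in",  0.6, 0.9),
    ("End of quarter",   0.5, 1.0),   # no uncertainty left, so confidence = 1
]

for label, impact, confidence in check_ins:
    print(f"{label}: {impact * confidence * 10:.1f}")
# Start of quarter: 7.0
# First check-in: 5.6
# Second check-in: 5.4
# End of quarter: 5.0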

How ambitious should your key results be?

Given that key result scores are a product of impact and confidence, it’s clear that the same score could result from different parameters, e.g.
Key result:
  • Achieve 10,000 downloads
Possible parameters:
100% confident of achieving 5000 (impact = 0.5, confidence = 1.0) > Score = 5
or
50% confident of achieving 10,000 (impact = 1.0, confidence = 0.5) > Score = 5
I’ve seen legitimate scenarios where it was clear that one of these was more appropriate than the other, but often it’s a judgement call based on our level of ambition.
I usually like to set confidence levels around the 70% mark, at least at the start of the quarter, which naturally provides guidance as to the anticipated impact. The 70% mark feels ambitious without being unrealistic, which is important if other people (or the business as a whole) are relying on you to hit your goals.
Having said that, there are definitely times when we score key results differently. Imagine a quarter in which we start Project 1 at the beginning, followed by Project 2 and then Project 3. It’s fairly obvious that we should be more confident about completing Project 1 and getting the impact we want. If Project 1 or Project 2 slips then it’s unlikely we’ll ship Project 3 within the quarter, so our confidence around any impact here should naturally be lower.
As a result, at the start of the quarter we might score the key results:
Objective: Successfully launch 3 projects
Key results:
  • Impact from project 1 (9/10)
  • Impact from project 2 (7/10)
  • Impact from project 3 (5/10)
Here it’s clear from the scoring that we’re less confident about completing project 3 and achieving the resultant impact — it’s a stretch goal and our score should reflect this.
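Because impact is always 1 when the key results are first set, those start-of-quarter scores fall straight out of the formula from the confidence levels alone; here’s a quick sketch (the 0.9, 0.7 and 0.5 confidence values are inferred from the scores above):

# At the start of the quarter impact is always 1, so the score is effectively
# confidence x 10. The confidence values are inferred from the scores above.
for project, confidence in [("Project 1", 0.9), ("Project 2", 0.7), ("Project 3", 0.5)]:
    print(f"{project}: {1.0 * confidence * 10:.0f}/10")
# Project 1: 9/10
# Project 2: 7/10
# Project 3: 5/10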


Try it yourself

As I mentioned, we record all of our OKRs in a company-wide spreadsheet. This includes tracking the impact and confidence for each key result over the four check-in points, and it automatically calculates the score based on the formula above.
If you’re interested in trying our approach to OKR scoring then you can download a copy of our OKR scoring template and give it a try for yourself.
We’re always keen to hear your feedback — drop us a comment below and let us know how you get on.

Written by Roan Lavery, Chief Product Officer, FreeAgent


This post is part of a series of blog posts going behind the scenes at FreeAgent, digging into our design and development processes, sharing the lessons we’ve learned and openly discussing the challenges we face. See more details here.

FreeAgent are happy to support The Dots’ creative community by offering an exclusive 10% discount! Go to www.freeagent.com/partners/thedots to claim your discount (and try FreeAgent completely free for 30 days - no credit card required).