Married to the Metrics.
If you work at an Internet Economy business, you’ll be hard pressed not to have encountered data-led design. Whether you’re digging into quantitative data in your analytics app of choice, or using qualitative tools like F2F user-testing sessions, the choices we make are influenced by a wealth of data points.
How we use that data to decide the success of our work is just as important as how we use it to decide what to work on. Choosing the wrong metrics has the potential to kill a good idea, and leave you knowing less than when you started.
To illustrate this, I’d like to use the example of Wells Fargo. At one point Wells Fargo was the world’s largest bank, but fell from grace when it was discovered that their employees had been creating fake accounts for customers to boost their sales figures. Over 2 million of these fake accounts were made, forcing Wells Fargo to settle for $110 million, and fire 5,300 of their staff.
For Wells, the number of new accounts was their chosen metric for success, and they pursued it aggressively. Unfortunately, there are only so many new accounts that any one branch can open, which meant that bank employees needed to find another way to hit their targets. As Dr Ian Malcolm famously said in Jurassic Park, ‘life, uh… finds a way’.
Wells’ staff were incentivised to pursue this single metric above all others: they were given bonuses for opening new accounts, and their jobs were put at risk if they didn’t meet their targets. Helping their customers was secondary to this single-minded goal.
So it is with design: we judge the success of our designs on the metrics that are relevant to our business. Conversion, email opens, basket value: they’re all important business metrics, but as the saying goes, ‘when all you have is a hammer, everything looks like a nail’.
This is not an exhortation to justify your designs with whatever metric you have to hand; rather, it’s a reminder to dig deeper into your data. If your ‘big number’ goes down, look into why. Are users interacting with something else on the page? Do they bounce, or are they getting what they need earlier and more easily?
At Skyscanner we ran an experiment that changed our search form into a single line. We assumed this would make it easier to use and improve the number of people performing searches: a nice, simple test to run. When we ran the experiment we saw a drop in searches, which (by our own metrics) would have been reason to call it a failure.
Instead, we dug into the data and found that users were interacting more with our city tiles further down the page. These were people who wanted to browse visual representations of a destination rather than fill in a form. As a result we were able to invest time and effort in making that content even better, instead of forcing our users to behave the way we wanted them to.
There’s a recurring joke around our office that if you wanted to move our main business metrics, you should just make a big button that says ‘Free Flights’. The irony is that by wedding ourselves to a single success metric at all costs, we end up working towards exactly that. Having the courage to push deeper into the results of your work will give you the confidence to make better decisions later, and help your users do the things they want to do, not the things you want them to do.