
Marketing by numbers

Even simple metrics like NPS can require forensic analysis and interpretation
Marketing metrics have a tendency to tell you how you’re doing without telling you what to do
Helen Edwards

Helen Edwards has twice been voted PPA Business Columnist of the Year. She has a PhD in marketing, an MBA from London Business School and is a partner at Passionbrand.

Marketers at US budget airline JetBlue had a conundrum. Net Promoter Scores (NPS) from passengers departing from a single gate at Philadelphia Airport were consistently well below the airline's overall average. What was going on at that gate?

NPS is a simple metric. It takes the proportion of those who would enthusiastically recommend the brand to others and subtracts from it the proportion who would definitely not – either side of a ballast of neutrals in between.
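
For readers who want to see the arithmetic, here is a minimal sketch, assuming the standard 0-10 "how likely are you to recommend us?" scale, with 9-10 counted as promoters, 7-8 as passives and 0-6 as detractors. The survey figures are invented purely for illustration.

```python
def nps(ratings):
    """Net Promoter Score from 0-10 'would you recommend?' ratings.

    Standard convention: 9-10 = promoter, 7-8 = passive, 0-6 = detractor.
    Score = % promoters minus % detractors, so it ranges from -100 to +100.
    """
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Hypothetical gate survey: mostly enthusiasts, plus a grumpy early-morning minority
print(nps([10, 9, 9, 8, 7, 6, 5, 10, 9, 3]))  # -> 20.0
```
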

It is very much a brand metric, capturing, in theory at least, all the practical and associational facets that contribute to the overall consumer experience. JetBlue's scores were generally strong – so naturally, the airline's marketers looked for reasons why the brand might be under-performing at this one departure point.

Everything that the airline offered at that Philly gate was rigorously assessed – but the team could find nothing different in the way that its staff, systems or punctuality performed compared with its other airport gates. It was all classic JetBlue.

They could have accepted the NPS finding as a statistical quirk – since variance around any average will tend to include outliers – but they didn’t. Instead they looked at the broader context surrounding their flights from that gate, and noted that it included one that took off every day at 6am.

At that time of morning, the airport's shops and cafes were yet to open. Customers couldn't buy a coffee or a muffin to pick them up after their early start. Could grumpiness stemming from a failing at the general airport level spill over into the airline's own scores?

Seemed it could. When the airline introduced a refreshments service at the gate for that bleary-eyed flight, scores picked up and normalised. They fixed a poor, brand-specific metric score caused by something outside the scope of their brand.

Complex interactions

The JetBlue story – unveiled at a US conference last month in a presentation about data analytics by SAP – illuminates two important truths about marketing metrics.

First, they have a tendency to tell you how you’re doing without telling you what to do: that bit comes down to marketing department detective work.

And second, even though the metrics themselves might be simple, what they measure exists within something infinitely more complex.

You can have a bunch of metrics, each accurately assessing a single facet, but all missing the interactions between them. And yet, as Nassim Taleb observes, in complex systems it is the interactions that really count, with the result that “the ensemble behaves in ways not predicted by its components”.

A business-to-business brand competing in a dynamic market might constitute an “ensemble”. A simple metric within that might be sales data by customer. A common finding deriving from that might be that 80% of sales come from just 20% of the customer base – the so-called Pareto principle in action.
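
A quick way to test for that skew in your own numbers is a simple concentration check along the lines sketched below; the per-customer revenue figures are invented purely for illustration.

```python
# Illustrative only: invented annual revenue per customer, largest first.
sales = sorted(
    [820, 640, 95, 70, 60, 45, 40, 35, 30, 25,
     20, 20, 15, 10, 10, 8, 7, 5, 3, 2],
    reverse=True,
)

top_20pct = sales[: max(1, len(sales) // 5)]   # the few big customers
share = sum(top_20pct) / sum(sales)            # their slice of total revenue
print(f"Top 20% of customers deliver {share:.0%} of sales")  # -> 83%
```
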

Beyond the obvious

Professor Byron Sharp has shown that this 80:20 Pareto skew is less pronounced in consumer brands, but in my experience it is alive and well in the B2B world. And, at an anecdotal level, I can include my own consultancy business in that.

The finding, whether revealed by your own observation or by an outside analyst, gives you an uncomfortable feeling that you should be doing something about it.

But what? Look for maximum efficiency by cutting off the 80% customer tail that’s bringing in just 20% of revenue? Treat those smaller customers a bit like second-class citizens so you can focus better on the big ones? Try charging smaller customers more?

The conclusion it would be dangerous to miss is that you should probably do nothing at all. And the reasons for that derive from a combination of detective work and a recognition of Taleb's dictum.

Once you stop seeing the two types of customer as distinct groups, and look for the interactions between them, you get a feel for why you have any big customers at all. I’ll use my own business as an example, since it’s just about the only one for which I have not signed an NDA.

How did the customers in the big-but-few cluster get to be there? Some started as small customers but grew large – either organically or through merger. Some had become led by individuals who had previously worked in one of our smaller customer businesses. Some were there through recommendation, including by smaller customers.

It’s not hard to see the impact on the overall business if the “small-but-many” crowd had been rejected early on for size, or been served relatively poorly. There wouldn’t be any big ones.

Viewed like this, Pareto distribution can be seen as a healthy sign, not something to be fixed, which perhaps is why it is observed so ubiquitously, not just in the commercial sphere but in nature, too – including by the eponymous Italian economist Vilfredo Pareto, who noted that 20% of his garden pea pods produced 80% of the peas.

Keeping it simple

Marketing is not short of candidate metrics and, given the data-driven bias around corporate board tables, neither is it short of reasons to employ them.

And there is no shame in veering towards the simplest possible metrics, especially when they can match the data-point accuracy of fancier models with greater elegance.

But there is a danger in extrapolating from that methodological simplicity the notion that the interpretation and implications are also simple. They rarely are. Just as painting by numbers will not get you much of a picture, so branding by numbers will not get you much of a brand.

Metrics matter. But forensic interpretation and imaginative action – or even, sometimes, considered inaction – add up to more.