A little knowledge

The uproar over the ‘failure’ of the pre-general election opinion polling offers marketers an apt reminder of the need for caution when considering research data
Far from letting us down, the pollsters did us a favour
Helen Edwards

Helen Edwards has twice been voted PPA Business Columnist of the Year. She has a PhD in marketing, an MBA from London Business School and is a partner at Passionbrand.

On 19 June, in the light-filled lecture theatre of the Royal Statistical Society, an independent inquiry got under way to explain the divergence between the pre-election polls and the eventual UK general election result.

The panel of nine statisticians, sociologists and political scientists, chaired by Professor Patrick Sturgis of the University of Southampton, is expected to take until March 2016 to report its findings on behalf of the inquiry’s sponsors – the British Polling Council and the Market Research Society.

To go to these lengths to locate “the causes of the discrepancy” is testament to the research industry’s hubris, as well as its angst. “How could we be so wrong?” might as well be the title of this inquiry, for all the academic sobriety of its proceedings. The expectation was to be right. Failure is treated like one of those one-in-a-million engineering or transportation catastrophes that are the proper subjects of inquiries.

Yet, far from letting us down on election night, the pollsters did us a favour. They reminded us of the limitations of human ability to read and understand complex systems. They reacquainted us with the virtues of humility and caution in the face of seemingly incontrovertible data.

The great thing about an election is that it is fast and unequivocal in its corroboration, or overturning, of assumptions. A single day and it’s done; we discover what it was we didn’t know, no matter how strongly we felt that we did.

In our humbler sphere of marketing, we are less fortunate. Research findings are ossified into ‘learnings’ where they remain in PowerPoint charts, unchanged and unchallenged, right through to board decisions that could be business-critical.

They thought they knew...

Consumer behaviour can sometimes confound even the most confident, research-based predictions, as these examples show:

Tesco: The troubled retailer used to boast how much intimate detail it knew about its customers, based on its analysis of 5bn Clubcard data-points every week. Yet that wealth of information failed to predict those customers falling out of love with the weekly mega-shop – leaving Tesco’s out-of-town hypermarkets vulnerable to internet shopping and the value offers of more conveniently located rivals.

New Coke: In the early ‘80s, with evidence that taste was behind a decline in market share, Coca-Cola developed a sweeter formula. In almost 200,000 blind taste tests it consistently beat both Pepsi and the original drink. But when New Coke was launched in 1985, a backlash from loyalists quickly eroded early gains. Coke Classic was hurriedly reintroduced; the two coexisted for a while, but by 2002 the unloved New Coke had been withdrawn everywhere.

General election ’92: This year’s election isn’t the first the pollsters have got wrong. In 1992, with a tired Tory party limping on after 13 years in power, almost every poll predicted a clear Labour win. Despite, or perhaps because of, John Major’s ‘soapbox’ canvassing style, the Conservatives stunned the nation by achieving a 21-seat overall majority.

Refutation, if it comes, is diffuse, and can take years to play out. By that time, if things aren’t going too well, no one sits there asking: “Hey, do you think that research we did way back to get us here was actually flawed?”

The time to ask that question, then, is at the outset, when research is being judged, or, better still, commissioned. Conduct the inquiry before you inquire, focusing on the ways in which your chosen methodology may be studded with the asterisks of doubt.

In quant, online questionnaires are now the predominant commercial route to illumination. Yet digital deceit is such a cultural norm that we take it for granted: idealised Instagramming, avatar identities, fake Twitter accounts, the artifice of the “presentation of self in everyday life”, to borrow Erving Goffman’s pre-digital, but extraordinarily prescient, book title.

Your research partner will seek to reassure you that these biases are accounted for – and perhaps the subject of your probing doesn’t warrant too much ‘idealising’ on the part of your consumers. Even so, as with any question-based methodology, you could do worse than recall Freud’s dictum that “we are largely invisible to ourselves.”

In qual, despite the wealth of methodologies to hand, focus groups still account for the lion’s share of the marketing research budget. Their known limitations read like a side-effects list on a pharmaceutical leaflet: artificiality of surroundings, anchoring, moderator bias, order effect, bound by context and time.

By far the most serious is the one documented by the behavioural economist Cass Sunstein. He showed that a group will tend to exaggerate any slight bias present at the outset. So pernicious is this effect that, by the end of the session, the overall group position can end up more extreme than that of the single most-biased member beforehand.
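To see how that dynamic can play out, here is a minimal Python sketch. The update rule and parameter values are illustrative assumptions for this column, not Sunstein’s model: each round, members drift toward the group mean, then nudge slightly further in the direction the group already leans.

    import random

    def simulate_group(opinions, rounds=10, conformity=0.3, amplification=0.05):
        # Toy model: members drift toward the group mean (conformity),
        # then push slightly further in the direction the group leans.
        ops = list(opinions)
        for _ in range(rounds):
            mean = sum(ops) / len(ops)
            lean = 1 if mean > 0 else -1
            ops = [o + conformity * (mean - o) + amplification * lean for o in ops]
        return ops

    random.seed(1)
    # Eight members with a slight initial lean in favour (positive = pro).
    start = [random.uniform(-0.2, 0.4) for _ in range(8)]
    end = simulate_group(start)
    print("most extreme member before:", round(max(start), 2))        # ~0.31
    print("group average afterwards  :", round(sum(end) / len(end), 2))  # ~0.63

Because the conformity term cancels out in the average, the group mean drifts by the amplification amount every round – so after a handful of exchanges the collective position overshoots even the most extreme starting view.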

What is the answer? It is not to abandon research altogether, but to do it less often, better. At the very least, that means challenging the pronouncements of research specialists more than marketers typically do now, and demanding to know precisely how ‘interpretations’ have been arrived at.

At best, it means embracing the academic ideal of ‘triangulation’, where different methodologies are interleaved to help identify underlying themes. If, say, co-operative enquiry, ethnography and conjoint analysis all seem to point to a common motif, you might be on to something.
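As a concrete, if hypothetical, illustration of that logic: reduce each method’s findings to a set of candidate themes, and keep only the themes that every method independently supports. A few lines of Python make the idea plain (the themes here are invented for the example):

    findings = {
        "co-operative enquiry": {"convenience", "price", "trust"},
        "ethnography": {"convenience", "trust", "habit"},
        "conjoint analysis": {"convenience", "price"},
    }

    # A theme counts as triangulated only if every method points to it.
    triangulated = set.intersection(*findings.values())
    print("themes supported by all methods:", triangulated)  # {'convenience'}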

The research industry likes to think it suffuses our decision-making with the light of understanding, but the reality is more like pinpricks of light emerging into our cave of ignorance. If there are enough of them, and if they come from different directions, then we can pick a few features out in the gloom – all the while reckoning that it could look very different tomorrow.

A little knowledge is all that’s possible. No one needs an inquiry to point that out. Thinking we understand when we don’t, acting as though tiny clues were clinching evidence – well, that is the dangerous thing.