Random number syndrome is amazingly frustrating. It's the practice of presenting information without any real frame of reference, so it sounds impressive even when it isn't.
It's used a lot in PR and marketing to make something sound impressive when in fact it's fairly meaningless. We've all seen adverts for face cream asserting that 90pc of respondents (just 80 of them) agreed it softens the appearance of wrinkles. But you don't know how the question was phrased, or what the average result is for marketing surveys, which ask the question in whatever way gets the desired answer.
And it's used in PR reporting too. I've worked in rather too many agencies, big and small, and they have all tended to report what they've done (releases / articles / interviews etc) and how many clips they've generated. There's no frame of reference to say whether that number of clips is good, and there's certainly no attempt to show whether it had any effect on the business. This is starting to change, and many agencies have become more focused on creating campaigns with objectives and measurable outcomes. There's even an industry body, AMEC, that promotes this, and I'm proud to say I'm a member.
So, away from coverage (and the widely discredited – and often simply made up – equivalent advertising value), what else can you measure? Sales figures are obviously the best, but agencies rarely have this information, and sales are susceptible to many outside events, from the weather on down. Incoming calls would be perfect, but once again we don't have those. That leaves web traffic, which can be mapped and broken down with Google Analytics.
I recently had the opportunity to go back to my scientific roots for an analysis of the last six months and ask the questions:
- Is what we’re doing driving traffic?
- Does translating releases drive coverage?
- Does translating releases drive traffic to the website?
- Does article placement drive traffic?
- If so, what publications drive this traffic?
So, earlier this week I swapped death by PowerPoint for death by Excel (well, Google Docs and Google Analytics) and mapped the effect properly – see graph. This is just a subset of the data (I've stripped out everything except search and direct traffic), but it shows what drives traffic against the baseline. While the data is for a specific client (so I can't give numbers), it shows a clear causal link between certain articles and traffic spikes, and none for others.
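For anyone who'd rather script this than wrangle a spreadsheet, here's a minimal sketch of the same mapping in Python with pandas. It assumes a daily sessions CSV exported from Google Analytics plus a hand-built coverage log; the file names, column names, dates and the 2x spike threshold are all my own illustrative choices, not part of the client data.

```python
import pandas as pd

# Daily sessions exported from Google Analytics (assumed columns: date, sessions).
traffic = pd.read_csv("ga_sessions.csv", parse_dates=["date"])
traffic = traffic.set_index("date").sort_index()

# Baseline: a 28-day rolling median, which is robust to the spikes themselves.
traffic["baseline"] = traffic["sessions"].rolling("28D").median()

# Flag days where traffic is at least double the baseline.
traffic["spike"] = traffic["sessions"] >= 2 * traffic["baseline"]

# Coverage log: when each piece appeared and where (hypothetical entries).
coverage = pd.DataFrame(
    {
        "date": pd.to_datetime(["2013-03-04", "2013-03-18"]),
        "outlet": ["Electronic Design", "EE Times"],
    }
).set_index("date")

# Join so each coverage item shows whether it coincided with a spike.
report = coverage.join(traffic[["sessions", "baseline", "spike"]], how="left")
print(report)
```

A rolling median rather than a mean matters here: one big spike would drag a mean baseline upwards and hide smaller spikes either side of it.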
The first large red spike is two roundtable pieces in Electronic Design; combined with the newsletter that went out two days after the coverage, they caused traffic to double on that one day. Similarly, three bylined articles in EE Times and Power Systems Design caused traffic to more than triple on the day they appeared. Other magazines (which I won't name) caused no rise in traffic at all – see the right-hand side of the graph. By mapping this we can now target the effective titles more easily and stop wasting time and money on the others.
There were also clear spikes around releases, and we could even see when the company had put out a newsletter without telling us (or sending it to us).
The other exceptionally interesting correlation we mapped was between release translation, localised article placements and the traffic they drive. As you'd expect, translating into German and French drove traffic significantly. Translating into Swedish, Danish and Dutch also led to an increase in coverage versus the releases we didn't translate (a 1400pc increase overall, calibrated against how each release performed in English-speaking countries), and there were spikes in traffic from these countries on those days. But we saw the same spikes in traffic for non-translated releases too. The conclusion: countries like Denmark, Sweden and Holland speak English well enough to read the English media, and it's that, rather than local-language material, that's causing the traffic spikes.
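The check itself is straightforward once the data is in one place: compare each country's traffic on release days against its own baseline, split by whether the release was translated for that market. A sketch under assumed inputs – a per-country daily sessions export and a release log with a translated flag; every file and column name here is hypothetical:

```python
import pandas as pd

# Assumed columns: country_sessions.csv -> date, country, sessions
#                  releases.csv -> date, country, translated (True/False)
sessions = pd.read_csv("country_sessions.csv", parse_dates=["date"])
releases = pd.read_csv("releases.csv", parse_dates=["date"])

# Per-country baseline: median daily sessions across the whole period.
baseline = sessions.groupby("country")["sessions"].median().rename("baseline")

# Sessions from each country on each release day, with uplift vs baseline.
on_release = releases.merge(sessions, on=["date", "country"], how="left")
on_release = on_release.join(baseline, on="country")
on_release["uplift"] = on_release["sessions"] / on_release["baseline"]

# If translated and untranslated releases show similar uplift in a market,
# that audience is likely reading the English coverage, not the translation.
print(on_release.groupby("translated")["uplift"].mean())
```

That final comparison is exactly what pointed us at the Scandinavian and Dutch result: similar uplift either way, so the translation wasn't what was driving the visits.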
So we’re adjusting what we do in the programme.