That’s a paraphrase from my former colleague and good friend, Simon Rogers, the editor of The Guardian’s Datablog. Simon is a true champion of best practice in using data in journalism, and here is what he had to say to Chris Elliott, The Guardian’s Readers’ Editor:
First, I think there’s a cultural challenge. Many journalists are arty types who traditionally have thought of anything to do with numbers as ‘research’, rather than journalism. That’s combined with an unwillingness to ask difficult questions about data, or read the notes that get attached to spreadsheets [that journalists receive]. This doesn’t apply to everyone – the specialists know all about this.
I applaud Simon for his frankness. As Suw and I frequently discuss, many of the issues that we deal with in our work relate to culture. I wrote recently about how journalists’ identity is often a barrier to the adoption of technology, and in some ways, technology and statistics are lumped into the same bucket by a number of journalists (unless you’re a stats-junkie sports journalist).
Chris also interviews Ben Goldacre, author of the Bad Science column in The Guardian, for his column about statistics. Ben hits on one of the issues that drives me nuts about my profession: statistics inflation. Sure, we can shout from the hilltops about a 100% increase in the number of children who have been killed by albino elephants in zoos, but if that dramatic increase is from one child to two children, it’s not really a story. As a journalist, I can spot statistics abuse from a mile off, and I tend to think that many readers can too. Big percentages are always a tip-off, especially if the reporter obscures the actual figures or leaves them out entirely. Ben also raises the issue of dealing with relative risk.
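To make the arithmetic concrete, here is a minimal sketch in Python (the function name is mine, purely for illustration). Percentage change is measured against the starting value, so going from one case to two is a 100% increase, not 200%:

```python
def percent_increase(old, new):
    """Percentage change from old to new, relative to old."""
    return (new - old) / old * 100

# One child to two children: the count doubles, but the increase is 100%.
print(percent_increase(1, 2))  # 100.0
# A 200% increase would mean tripling, from one child to three.
print(percent_increase(1, 3))  # 200.0
```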
Another issue raised in the article is basic innumeracy in journalism. It’s shocking to see how often journalists conflate the mean with the median, or use the mean when it is skewed by outliers and not actually representative. The mean is a simple average, whereas the median is the middle value in a set of values. Depending on your set of numbers, the mean and median can diverge by quite a lot. It’s not a hard and fast rule that one is better than the other. It’s worth checking the distribution of values first to decide which one is more representative of the data, and reasonable people can disagree about which one is more representative, as Chris Elliott points out in the piece. The problem is that too few journalists know the difference.
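Here is a quick sketch of how far the two can drift apart (the salary figures are invented for illustration): a single outlier drags the mean well away from the typical value, while the median stays put.

```python
import statistics

# Hypothetical salaries: seven ordinary values and one extreme outlier.
salaries = [24_000, 26_000, 27_000, 28_000, 30_000, 31_000, 33_000, 500_000]

print(statistics.mean(salaries))    # 87375 -- dragged up by the one outlier
print(statistics.median(salaries))  # 29000.0 -- the middle of the pack
```

Reporting the mean here (“the average salary is over 87,000”) would badly mislead; the median is much closer to what most people in the set actually earn.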
There is a comment on the article from a biological scientist that is worth reading:
May I just ask why Journalists don’t have to study a minimum level of statistics before they are employed?
Don’t you people have to have some common level of understanding of the world you live in before you describe it.
And the:-
“who traditionally have thought of anything to do with numbers as ‘research’, rather than journalism.”
I take (that) means (sic) your writers value writing more than they value understanding what they write about.
I can see why journalists score as well as they do on respect polls.
I’m a biological scientist, I have have had (sic) to study statistics and ethics as part of my training.
I have to take and pass courses on toxins/hazards, clinical ethics, animal ethics and quite a few other courses, every year.
If I were to treat choice of mean, medium (sic) or mode as a matter of personal choice I would be torn apart by my referees.
With the number of choices that people have for information, we journalists need to step up our game. We need to do more to understand the world we live in, ask tougher questions and be more serious about the flood of numbers that inundates us every day. Again, as a journalist, I know when a reporter has written around a hole in their reporting, whether it relates to numbers or not. It’s pretty easy to spot. (I’ve had to do it myself.) It’s foolish to think that our readers can be duped so easily.