The role of belief in ebook pricing and what to do about it

(Cross posted from Chocolate and Vodka. Please comment there!)

So yes, I know it’s nearly Christmas Eve and I know I should be turning my brain off, but this blog post about ebook pricing by Declan Burke came across my radar today on Twitter (and yes I know I should have turned Twitter off too) and I couldn’t not reply.

Declan writes about his experiences with pricing the ebook version of his novel, Eightball, which he says started off at $1.99 and ended up at $7.99. He also briefly mentions the different pricing structures from publishers, and discusses the attitudes of some readers who appear to think that all culture should be free.

But the main bit of Declan’s post that caught my eye was his discussion of cost and value:

The other odd thing, from a personal point of view, is exemplified by the drop-off in sales for EIGHTBALL BOOGIE once its price started to go up. The e-book fan (or anyone with even the vaguest grasp of economics) will very probably be screaming right now at the screen a variation on, ‘It’s the economy, stooopid.’

I understand that. I really do. But from my point of view, EIGHTBALL BOOGIE is the same book regardless of whether it’s $1.99 or $7.99: it’s not a quarter as interesting, or funny, or thrilling, at the cheaper price, and it doesn’t come in at 25,000 words rather than 85,000 words.

It’s not my place, by the way, to say that EIGHTBALL is interesting, funny or thrilling. I’m just saying that whatever qualities the book had at the $1.99 price, those qualities remain the same regardless of whether I charge $7.99 or give the book away for free.

I suppose my central concern, when it all boils down, is that fans of e-books are confusing cost and value. That’s not to say that very good books aren’t being sold for $1.99, or $0.99, or even being given away free. But it’s patently self-limiting for a reader to impose an arbitrary price of (say) $4.99 on a book, and state that he or she refuses to pay any more, regardless of the quality of that book.

Unfortunately, I fear that Declan confuses inherent value with market worth, and the two are very different indeed. As writers, we would all like to think that our work has inherent value. The blood, sweat and tears that we leaked all over the page should, we tell ourselves, be valued by others as much as it is by us.

But the price that the public is willing to pay has little to do with any sense of inherent value; it is directed by what the market will support. When it comes to deciding what price we put on our ebooks, it is not sufficient to think about our concept of inherent value. We would all love our ebooks to sell by the shedload at a nice, high price. (And if we’re famous, they might well!) But most of us should instead be striving to understand which price will maximise our profits. If we sell thousands at £1.79, is that going to bring in more profit than if we sell hundreds at £5.99?
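Just to make that arithmetic concrete, here is the kind of comparison we would all like to be able to run. This is only a sketch: the royalty rates and sales volumes below are invented for illustration (real rates vary by store and price band), which is rather the point: the sums are trivial, it’s the inputs we don’t have.

```python
# A minimal sketch of the price-versus-volume question. Every figure here is
# invented for illustration; real royalty rates and sales depend on the store,
# the price band and, of course, the book.
def royalties(price, royalty_rate, copies_sold):
    """Total royalty income at a given price point."""
    return price * royalty_rate * copies_sold

cheap_and_cheerful = royalties(price=1.79, royalty_rate=0.35, copies_sold=3000)
pricier_but_fewer = royalties(price=5.99, royalty_rate=0.70, copies_sold=400)

print(f"£1.79 x 3,000 copies: £{cheap_and_cheerful:,.2f}")  # £1,879.50
print(f"£5.99 x   400 copies: £{pricier_but_fewer:,.2f}")   # £1,677.20
```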

And this is where almost every single blog post and news article I’ve seen on the subject falls flat on its face. The horrible, uncomfortable, inconvenient truth is that we independent ebook sellers and small publishers have no clue whatsoever as to what price will maximise profits. We just do not have the data. We have a few anecdotes from both ends of the spectrum, from the “I sold $millions” to the “I sold sweet FA”, and very little from the middle where people are selling “enough”, for whatever value of enough they care to assign.

What we don’t have is what the big publishers have: Numbers. It’s impossible to compare the sales of a handful of books at different prices and draw any meaningful conclusions, because the books are not equivalent goods. My novelette Argleton is not equivalent to anyone else’s book because it’s not a perfect substitute.

If you’re in the market for a hammer, one is pretty much a perfect substitute for another. If I buy a hammer from Shop A, I am not going to buy a hammer from Shop B. But books are not substitutable goods. If someone buys Argleton, that doesn’t mean that they then don’t have any interest in buying Eightball.

Even comparing sales of the same title over time is more complex than saying “It sold a lot at $1.99 but nothing much at $7.99”, because market conditions change. It’s only in the large-scale aggregate that the numbers start to provide genuine information. And sadly, that kind of data isn’t available to the likes of independent and small publishers.

So what do we fall back on? Belief.

I believe that my biggest problem right now is that not enough people know about my writing. My sole purpose is to introduce as many people as feasibly possible to Argleton in the hope that they will like it and be interested in my future work. That means that I believe that giving away Argleton for free is in my best interests.

But I also ideologically believe that free goods do not necessarily cannibalise the sales of the same goods offered commercially. We have some interesting data from people like Cory Doctorow, Lawrence Lessig and Tom Reynolds showing that, even if they don’t increase sales, CC-licensed copies of books do no harm to sales either. For them.

Of course, things could be different for other authors or other genres but again, the truth is that we simply don’t have enough data to say one way or the other.

Additionally, I believe that me giving away my books free has no impact on what someone is willing to pay for Eightball, or any other book, because the two are not substitutes. I’ve heard the argument that authors who give away their books are undermining authors who sell their books, but I’ve not seen a jot of evidence, or even logical reasoning, to support that point. The book market is not a zero-sum game.

And I disagree with Declan over the idea that giving away books is a “race to the bottom”.

For now it seems that many authors are happily collaborating in a race to the bottom on price. The mantra is very much quantity over quality, to the extent that many writers, in a desperate bid to get noticed and put one foot on the bottom rung of the slippery ladder, are now giving away their books for free.

There’s a certain kind of logic to this, although it only exists inside the e-publishing bubble, which appears determined to eat itself. Because once you give away one book for free, the expectation is that all your books will come at no cost, an expectation that derives from an entirely understandable mentality that runs, ‘Well, if you don’t value your work, why should I?’

I’m a teeny tiny sample, but by this logic no one should buy the Kindle version of Argleton, but they are. By this logic, no one should ever buy any of Cory Doctorow’s books, but they do. And also, by this logic, no one should ever give good, honestly earnt money to a nobody writer on the promise of delivery of a book, which could be fundamentally shite, and with absolutely no guarantee that they are going to get what they paid for and then, knowing all that, actually pay more than the book itself is worth. And yet, they have.

Our beliefs are sculpted by our experiences and our ideologies. My experiences appear to show me that giving books away whilst also selling them, and tapping into an amazing community of generous supporters to achieve the publication of a physical book, not only works, it is profitable. My belief is that people will happily pay for books that they like and that those who pull the “culture should be free” line out of their arse are the same people who would not have bought my book anyway, so there’s simply no sale lost.

But, just like Declan, I lack hard data.

This, sadly, means that rather than eating our own young, independent authors and small publishers are doomed to chase our tails, cherry picking the case studies to fit our ideologies and rejecting the points of view of those who disagree with us.

There is only one cure to this: Independents need to have a standard set of data that we all regularly submit to one big database which we can then pull reports from. We need, collectively, to share what numbers we each have, because that’s the only way we’re going to get the kind of scale we need to turn anecdotes into data. And data is the only way we’re going to get meaningful insights into how book buyers really behave.
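To make that concrete, the sort of record I have in mind might look something like the sketch below. The fields are just a first stab on my part, not an existing standard, but something this simple, submitted monthly, would be enough to start turning anecdotes into data.

```python
# A first stab at the kind of standard monthly record independents could pool.
# These fields are a suggestion only; no such shared schema currently exists.
from dataclasses import dataclass

@dataclass
class MonthlySalesRecord:
    month: str           # e.g. "2011-11"
    title: str           # the book in question
    store: str           # e.g. "Amazon UK", "Amazon US", "direct download"
    list_price: float    # the price charged that month
    currency: str        # e.g. "GBP", "USD"
    units_sold: int      # paid copies
    free_downloads: int  # copies given away
    genre: str           # rough genre label, so comparisons are like for like
```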

We can’t afford to fanny about getting all ideological and relying on our beliefs to determine our business strategies. My biggest worry about my current strategy is that I could be horribly, hideously wrong, but I have absolutely no way of testing my hypothesis on my own. If I am wrong, then I will change my strategy immediately, because I’m not interested in proving myself right. I’m interested in creating a new career for myself where I get to live comfortably and make up stories for a living.

The best Christmas present you can give a new author: An Amazon review

(Cross posted from Chocolate and Vodka.)

Last month there was a great blog post by Anne Allen about how important Amazon reviews are to new authors:

[…] Amazon reviews, which were only mildly significant three years ago, now have a make-or-break impact on an author’s sales.

When you’re buying an ebook, there’s no helpful bookstore clerk to tell you what might be appropriate for your nine-year old niece, or if there are any new cozy mysteries you might enjoy, or whether the new Janet Evanovich is up to her usual standards.

Instead, you check reader reviews and Amazon’s “also bought” suggestions. These are all generated by consumers, which gives the ordinary reader immense power.

The post then goes through some really good guidelines for people who might want to leave an Amazon review for an author they like. It’s well worth a read, even if you’re familiar with Amazon, because Anne gives a very clear idea of how the whole review system works.

I didn’t quite understand the power of Amazon reviews until I started publishing in the Kindle stores. I have books available now in six stores: the UK, US, French, German, Spanish and Italian Kindle stores.

The only store in which I have any reviews so far is the UK store and sales in that are way ahead of every other store, even the US store. Now admittedly there are potential language issues in the French, German, Spanish and Italian stores, as the buyers there might not be so interested in an English language book. But that shouldn’t be the case with the US and, in fact, the majority of my Kickstarter supporters were from the US so in theory I should have a good showing there. But so far, I do not.

I think this is down to reviews. I have three good reviews so far on Amazon UK, none in the US. It’s a shame that reviews don’t cross-pollinate stores, but there we go.

So if you’re feeling generous this festive season and you have read a book by a new author that you liked, it would be a wonderful thing for them if you took 10 minutes to write even a short review, or just give a star rating. Four and five star ratings are particularly useful as Anne explains:

Anything less than 4 stars means “NOT RECOMMENDED.” Don’t expect an author to be pleased with 2 or 3 stars, no matter how much you rave in the text. Those stars are the primary way a book is judged. Without a 4 or 5 star rating, a book doesn’t get picked up in the Amazon algorithms for things like “also bought” suggestions. Giving 1 or 2 stars to a book that doesn’t have many reviews is taking money out of the author’s pocket, so don’t do it unless you really think the author should take up a new line of work.

If a friend asks you to review something you found amateurish, or wasn’t your cup of tea, just tell her you don’t feel you can review it. That happens all the time and we appreciate it.

On the other hand, a 4-star review that recommends the book even though you have a few reservations, will earn you eternal gratitude from the author.

In fact, 4-star reviews can often be the most helpful. If a reader sees something like, “I loved this mystery, but the humor is pretty farcical. If you’re looking for a standard whodunit, this isn’t it,” or “this is awfully intellectual for something called chick lit.” Those offer honest information to buyers, without telling them not to buy.

I’m not saying you shouldn’t be giving 1-3 star reviews. I’m just saying that on Amazon (not all review sites) 3-Stars is usually taken as a negative rating. If you intend to be positive, then 4 stars will better convey that sentiment.

This was certainly something I hadn’t really thought about in detail before reading Anne’s post.

So if you have a favourite author who’s either just starting out or hovering around in the midlist, why not take a few moments over the Christmas holidays and leave them a review?

Kindle sales stats: a paucity of information

(Another cross-post from Chocolate and Vodka.)

As a newbie to self-publishing, I find myself transported back a decade to the time when I was so obsessed with my blog traffic stats that I made a spreadsheet and noted down what events caused spikes in traffic. After a while I lost interest in the numbers, but now I’m back to tracking them, although the patterns are very familiar to me and rarely am I surprised by what I see.

I’m also now obsessing over my Kindle sales statistics. And yes, I have a spreadsheet which notes both sales through the Kindle store and free downloads from here. If you’re curious, to the end of November I had given away 6140 downloads of Argleton and sold 27 ebooks via the Kindle store, netting me a royalty of approximately £30. Well, we’ve all got to start somewhere.

But where it’s relatively simple for me to track downloads and traffic to this site, tracking my Kindle sales is a laborious process. Amazon’s stats pages are… well I can’t use the word “designed” because that would imply that some thought had gone into them, and it’s clear that’s not the case.

As you can see from this screenshot, you don’t get much information. This is the page for the UK shop. If I want to see reports from the other shops, I have to pick from the dropdown list. And if I want to look at last month’s sales, I have to click that link. Very tedious.

[Screenshot: Amazon.com – Kindle Direct Publishing: My Reports]

Worse, if I don’t keep a spreadsheet of my monthly sales, I lose access to that data, as Amazon only gives me this month’s and last month’s figures. There appears to be no way to go back further than that prior month.

Now then, if I want to see my royalties, then I can see those not monthly, but weekly for the past six weeks. Eh? Why give me sales by month and then royalties by the week for only the past six weeks?

[Screenshot: Amazon.com – Kindle Direct Publishing: My Reports]

Now, if I don’t grab this data, I can at least use that third link down and download monthly spreadsheets covering the previous 12 months. Except this is what those spreadsheets look like:

[Screenshot: kdp-report-10-2011.xls]

It’s a complete mess. I’d have to spend so much time doing basic spreadsheet cleaning before being able to process this in any way, it’s just not funny. Imagine if I was selling lots of different books: The spreadsheet would become unworkable.
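In practice that means cleaning the thing by hand or by script before the numbers are any use. Here’s roughly what I mean, as a sketch only: the column names below are placeholders for whatever headers the real file happens to use in a given month, so they would need adjusting to match each download.

```python
# A rough sketch of tidying one of Amazon's monthly KDP spreadsheets.
# The column names are placeholders, not Amazon's actual headers; adjust them
# to match whatever the downloaded file really contains.
import pandas as pd

report = pd.read_excel("kdp-report-10-2011.xls", skiprows=1)  # junk header rows vary

report = report.rename(columns={
    "Title": "title",
    "Marketplace": "store",
    "Units Sold": "units",
    "Royalty": "royalty",
})[["title", "store", "units", "royalty"]]

# One row per book per store: total units and royalties for the month.
summary = report.groupby(["title", "store"], as_index=False)[["units", "royalty"]].sum()
print(summary)
```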

Nowhere does Amazon give you an at-a-glance summary of your sales, or graphs showing how you’re doing over time, or an easy way to download properly formatted raw data. Is it really that hard to take a bunch of numbers, generated preferably in real time, and present them in a usable, sensible way?

What’s also frustrating is that I have absolutely no context for my buyers. Where are they coming to Amazon from? Are they finding me here on this blog and then clicking through to Amazon? Searching for me or Argleton on Amazon itself? Coming from some other site? Finding me from some other page on Amazon, eg recommendations on another book?

Amazon knows, but it won’t tell me. And without that information I can only see half the picture. I don’t know how to direct my promo efforts. Should I be blogging more here? Should I focus on pimping to book bloggers? Should I be Tweeting more? Facebooking? I have no clue, and I will never find out.

It’s great that new authors like me can sell our books without having to find a traditional publisher (not that I’d turn one down if it made sense!), but Amazon could do a much, much better job of providing stats. Surely it’s in their interests to do so, as the more successful I am as an author, the more money they make off me?

Sadly I hold out precisely no hope whatsoever of useful change, so I’ll just have to keep checking back every month and writing the numbers down in my spreadsheet. What a nerd, eh?

Lessons from Kickstarter Part 1: Don’t go off half-cocked

(I’m writing a bit more often over on Chocolate and Vodka at the moment, so thought I’d cross-post the highlights here)

The last 18 months has taught me a lot about Kickstarter and putting together my own self-publishing project. This is the first of a series of blog posts in which I’ll go through what I’ve learnt, partly in case it’s of interest to anyone else but also to codify it in my own head so that, hopefully, I won’t make the same mistakes again. So, herewith Part 1!

If there was one overarching lesson that I’ve learnt doing Argleton, one thing that I really wish I’d thought of 18 months ago, it would be this:

Don’t go off half-cocked

Whilst there’s some truth to the idea that ignorance is bliss and that if I’d known what I was taking on I perhaps wouldn’t have done so, I think there’s more truth in the idea that I would have saved myself a lot of pain if I’d planned things better. Instead I bouncily assumed that it couldn’t possibly be that much work and that I’d have the whole thing done by the end of the summer. In 2010. Whoops.

So here are a few thoughts on how to make sure you’re fully prepared before you launch your Kickstarter project.

1. Finish as much of your project as possible
I naïvely thought that I could finish writing and editing Argleton whilst the Kickstarter fundraiser was underway, but promoting the campaign took more effort than I had anticipated, leaving me little time to write. This had serious knock-on effects: Because I didn’t know how long the story was going to be, I couldn’t get accurate quotes for printing, and so my rewards were priced by rough guesswork. I’ll go into budgeting issues in another post, but suffice it to say that guessing is a Very Bad Idea.

Another impact of having not finished up as much as I could was that it lengthened the time between people pledging support and my delivering my book to them. My ‘deadline’ for sending out the books just kept slipping and whilst most people were very patient, a couple sent me rather sharp messages questioning my commitment. I have to say that stung, but I could have avoided it if I hadn’t gone off half-cocked.

I should have had the book finished, critiqued, edited, typeset and converted into multiple digital formats, with all my rewards properly designed and fulfilment planned before I even considered launching my Kickstarter project.

2. Understand how much of your project remains
You can’t always finish everything up front. Had I hired someone to design my cover, for example, I would not have been in a position to do that until the Kickstarter money came in. That’s fair enough, but make sure that you know exactly what tasks are outstanding, how you are going to complete them and how long they are going to take. This allows you to be up front with your supporters about what’s left to do and how long they’ll have to wait for the finished thing.

3. Complete the design and prototyping of your rewards
Another really time-consuming part of the project was designing and prototyping my rewards, the books. Whilst they were easy to describe in text, they turned out to be difficult to turn into a reality. I learnt that I am not a natural graphic designer and that my ideas about what would work as a cover in print and in silk were very difficult for me to realise. The silk cover in particular went through about nine prototypes altogether.

Had I gone through that process before launching my Kickstarter project, I would have learnt early on that I needed the help of a designer and I could have worked that into the project costs. I also would have realised how difficult the silk cover would turn out to be to actually make and just how long each one would take. I might still have gone ahead, but it would have been with eyes open.

4. Get your suppliers lined up
This is important not just for budgeting, but also to save you time when it comes to getting everything done and sent out. The first printer I looked at turned out to be incapable of doing the job in the way that I wanted: They didn’t have experience making books and didn’t have the right kind of binding technique which meant that when you opened the book, the pages fell out. Not really the result I was aiming for.

Finding a new printer, briefing them, and going through more prototypes was time consuming and set me back by months. In the end Oldacres did an amazing job, and I will be using them again on my next project so the relationship I formed with them is important, but I could have got there sooner. (Especially as they were actually the first recommendation I had had. :/ )

5. Understand your incompetencies
Obviously, I like to think I’m a half-decent writer, so the task of finishing and editing the story was easily doable. I’m also quite good at typesetting, having done that professionally in a different incarnation. But what I hadn’t really banked on was the fact that I’m a shit graphic designer and an even worse puzzle writer.

Not only did my weaknesses slow the project down (I’m still finishing off the puzzle, for example), they also made everything unnecessarily difficult. Had I looked at the puzzle before I launched, I would have realised how much effort it was going to be and might even have questioned whether it was even needed. In retrospect, I think the inclusion of the puzzle or geogame was more a statement of my own lack of confidence than a genuine contribution to the project.

6. Understand your dependencies
I hate to say it, but I should have Gantt-charted the project and thought hard about what was dependent on what. I wasn’t always clear on what could be done in parallel and what had to be done in order, and so I often defaulted to doing things in serial, thus delaying the project further. Partly that was a psychological thing: It felt easier to deal with one set of related problems at a time, rather than trying to solve issues on multiple fronts simultaneously. There’s no doubt at all that this drastically slowed me down.

Had I sat down and worked out my dependencies, I would have been able to prioritise my to do list better. I would also have known when I needed to make educated assumptions, and what I would have to find out in order for those assumptions to hold water.

One good example is calculating postage. I hadn’t finished the story, so didn’t know how long it was, so didn’t know how many pages it would be, so couldn’t figure out the weight or find the packaging and so couldn’t make even a vaguely informed calculation as to the likely cost of postage. As it was, it cost a lot more than I had anticipated, as did the printing come to think of it, and I was lucky that I had raised more than I needed so didn’t actually lose money.

7. Don’t overcomplicate things
As I mentioned above, the geogame in the end turned out to be more of a gimmick that I hoped would get people interested rather than integral to the storytelling. Whilst I have done my best to produce something that is enjoyable, the fact that it has only now reached the testing stage shows just how difficult I have found it. I could have done without it and, if I had, I don’t think the project would have suffered at all.

Whilst most of the rest of the Argleton project was relatively simple, if time consuming, I did apply this rule to what was going to be my next project – a story told through the medium of a newspaper, complete with fictional character profiles, classifieds and sports page. I still love the idea, but during the planning process I realised that it was actually a very complicated project that would require collaboration with a number of people. I’m not ready to do that yet, although I will definitely be keeping that on my list of projects to look into when I’ve got a better flow of money coming in from my ebooks.

My aim in all of this is to produce a small but growing body of work, both electronic and in various physical media, which can give me an income. To this end I need to ensure that future projects are doable in a much, much shorter timespan than Argleton. Taking two years to do a novelette is not sustainable, so future projects will be much, much simpler and will hopefully complete more quickly.

Next time: How to think about your rewards.

When commenting systems go bad

Just recently, one of my favourite blogs moved to a new home on Wired and, in the process, moved to the Disqus commenting system. I’ve sat in many meetings where Disqus has been named as the desired commenting system. I have often found myself on the fence, preferring, say, the built-in WordPress commenting system over any third-party system, but still understanding that the issues with managing very high volumes of comments can encourage companies to outsource them. Until recently, though, I hadn’t had any real in-depth experience of using Disqus as a commenter.

I have now. And I have discovered that Disqus kills conversation and frustrates users.

The problems with Disqus surprise me, because they’ve been around a while and I would have expected them to understand how online discussions actually work, and adjust their tool to facilitate conversation. Instead, Disqus quashes conversation. Here are the issues, and possibly a few solutions:

Comment display is broken
There has long been a debate in commenting circles about whether threaded comments or flat comments are best. The truth is, neither is better than the other; both have their strengths and weaknesses. But Disqus, or at least the installations of it that I have recently seen, does not provide an option to view comments in a flat, strictly chronological or reverse-chronological order.

When you have a rich and fast-moving conversation in blog comments, threading kills it because it is nigh-on impossible to know where the new comments are in the various threads. An option to show comments in a flat view would allow users to quickly see which comments are most recent. We are smart enough to thread the conversations we’ve read already in our memories, but wading through threads in order to find the one new comment is a chore no one will bother with.

This means Disqus kills conversation in big, complexly-threaded discussions.

Being able to easily switch between views would be even better, so that you can find the newest comments, but then switch to see them in context of their threads.
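Flattening a threaded discussion into a single chronological list isn’t a hard technical problem, either. As a back-of-an-envelope sketch (the data structure below is invented for illustration and has nothing to do with how Disqus actually stores comments), a flat view only has to walk the tree and sort by timestamp:

```python
# A back-of-an-envelope sketch of a flat comment view. The Comment structure
# is invented for illustration; it bears no relation to Disqus internals.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Comment:
    author: str
    posted: datetime
    text: str
    replies: list = field(default_factory=list)

def flatten(comments):
    """Walk a threaded comment tree and return every comment in one flat list."""
    flat = []
    for comment in comments:
        flat.append(comment)
        flat.extend(flatten(comment.replies))
    return flat

def newest_first(threads):
    """Flat, reverse-chronological view: the newest comment is always at the top."""
    return sorted(flatten(threads), key=lambda c: c.posted, reverse=True)
```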

Comment paging is broken
If there’s one thing that drives me nuts about Disqus it’s that there is no “view all” option. On my favourite blog, I have to page through comments in chunks of 40 at a time and, once the thread gets over 80, it becomes very tedious on page reload to have to re-page through to the newest comments if I want to actually see them in chronological order. My only option is to then view them newest-first, which means I have to then find the join, which is again a pain in the arse, especially if when I last looked there were 100 comments, and now there are 200.

I recently saw a blog post with 900 comments, which were only accessible in pages of 10. If anyone thinks that people are going to bother to page through all those comments, ten at a time, they need a reality check. It’s already hard enough to get people to read comments before they write their own, but this just encourages drive-by commenting, which is very bad for conversation and community-building.

Disqus needs to have a “view all” option. I don’t care if it takes a minute or two to load, I just want everything, on one page, so that I can scan it at speed to pick out the comments I care about.

Other issues:
Login kills comments. On the train into London this morning I wrote a comment, then realised that I wasn’t logged in. I logged in with Google, as I usually do, and Disqus threw away my comment. WTF? Really? That’s how you treat logging in?

Newest first is weird: Newest first also does really weird stuff with the ordering within threads which I haven’t yet got my head round, but it bloody annoys me.

Page refresh breaks flow: On a lot of commenting systems, if I refresh the page in order to fetch new comments, the browser will remember where I am on the page and all I need to do to continue reading is, well, continue reading. Not with Disqus. Refreshing the page essentially resets Disqus, meaning that I have to re-page through everything and search for my place. A comment bookmarking system might help with this, or they could just offer a persistent single page view.

Just say No to Disqus
I have to say, I would now actively militate against clients using Disqus if they have any desire to create conversation and community. Disqus frustrates passionate readers, drives away interested but less committed readers, and makes genuine conversation difficult or impossible. It seems to be a great system for collecting comments to be ignored, but it’s terrible if you actually care about your comments or your commenters.

Given that Disqus has been around since 2007, the fact that it hasn’t cracked comment display yet is shocking to me. I honestly thought they of all people would have nailed it. Quite the opposite, in fact: Their design can only be described as user-surly.

Scientists should not be given the right to fact check the press

Last week, Dr Petroc Sumner, Dr Frederic Boy and Dr Chris Chambers, all from the School of Psychology at Cardiff University, argued in the Guardian that “Scientists should be allowed to check stories on their work before publication”. The only sensible and rational response to that is “No, they should not.”

Sumner, Boy and Chambers argue that scientists should be given a veto because:

  1. Scientific papers are peer reviewed, so have already been scrutinised by independent eyes, implying that journalists therefore don’t need to be independent themselves when reporting such papers.
  2. There are no political parties in science, ergo there can be no ‘conspiracies’.
  3. Scientists have nothing to gain and a lot to lose from exaggerated claims in the press.
  4. It’s the only way to ensure accuracy.

What utter tosh.

There is no doubt at all that quite a bit of science journalism is appalling, riven by inaccuracies, biases and sometimes just complete twaddle. You only have to read Ben Goldacre’s Bad Science to see what a mess journalists get themselves into. The problem is that Sumner, Boy and Chambers are engaging in special pleading on the basis of four flawed premises:

Firstly, scientific papers may well get peer reviewed, but that doesn’t mean that they are correct, it simply means that they have been looked at by some other scientists who either can’t find or won’t find fault. Papers get retracted when problems come to light later on, so peer review is not a guarantee that a paper is correct.

Secondly, the idea that there are no lines to toe in science is utter bunkum. There may not be political parties but there certainly are scientific orthodoxies, and that means lines to toe. The fact that something has become orthodoxy does not, in and of itself, guarantee that it is correct. Prevailing theories do get overturned when new evidence comes to light and, whilst those who make extraordinary claims need extraordinary evidence to back them up, science has a rich history of exactly that.

Point three only needs to be answered with one word: Ego.

And point four shows a spectacular misunderstanding of the journalistic process and the factors that cause errors and misinformation to propagate. Let’s take a quick look at some of these (I’m sure there are more, please do add them in the comments!):

  1. Good journalists on tight deadlines have little time and few resources to do comprehensive reading and research on a story, so mistakes, misunderstandings and inaccuracies can easily creep into the work even of the most dedicated.
  2. University and research institution press releases, and sometimes even scientists themselves, can be misleading. Sometimes those inaccuracies are picked up by the journalist, sometimes they make it through to press.
  3. Some hacks and editors don’t actually give a shit whether something is accurate, they simply want a shocking or outrageous story that they think will get them lots of readers.
  4. Some hacks have financial relationships with the companies whose “science” they are writing about, destroying any vestiges of impartiality they might once have claimed.

Which of these scenarios would actually be helped by adding an extra layer of “fact checking” by scientists? In the first case, the journalist doesn’t have time/resource so the fact checking just isn’t going to happen. In the second, the fact checking would be undermined by the very people doing it as they would only propagate their own inaccuracies. And again, in the third and fourth scenarios, facts are irrelevant, so why would they get checked?

There are things that would help, however:

  1. Additional time and resources for science journalism, and appropriate training. This is frankly never going to come from the news organisations themselves because they are all struggling to survive and science is seen all too often as a minority sport. It might be that a well-respected science communications charity or NGO could fund training for journalists wanting to cover science, e.g. in how to interpret papers and how to understand statistics. They would also need to fund the journalist’s actual work, ensuring that they had the time and resources required to do a good job.
  2. News organisations need to take complaints about inaccuracies more seriously. Even the so-called quality newspapers don’t always pay any attention to readers who point out problems in science stories. Often, they will officially stand by the most egregious bullshit because they’d rather not have to deal with the fact that they got it wrong.
  3. Some sort of standards commission with real power should hold all news organisations to account, forcing them to make corrections and imposing significant fines for the most egregious misbehaviour. I’d say “Maybe the PCC could do this sort of thing”, but they’re a spineless, toothless waste of time. If there was any censure at all of misinformation in the media, some of which is actively harmful to the reader’s health, maybe this conversation wouldn’t be dragging on and on for years, as it actually is.
  4. Oh, and here’s an innovative one: Maybe university press departments should stop sexing up press releases, liaise more tightly with their own scientists to get their facts right, and provide any relevant photos, graphs, graphics and data in reusable formats with a clear and concise explanation of what they illustrate. Of the press releases covering scientific topics that I’ve had to work from, none gave me even half of what I needed.
  5. Scientists should be responsive to press enquiries, and should prepare FAQs, lists of relevant links and re-usable quotes about their research up front. Their papers should be available, along with a proper bio and even photos, just in case. Whilst most of the scientists I’ve dealt with have been very responsive, none have sent me a link to a website with background info and relevant resources on it that I could use to quickly bring myself up to speed before asking them specific questions. It would have saved everyone’s time if they had done that. (Plus it’d be a great thing for me to link to in my articles!)

Science journalism isn’t actually a collaboration between scientist and journalist, it’s a process of interpretation which depends on both sides being independent of the other.

So what would happen if Sumner, Boy and Chambers got their wish?

Well, where there’s orthodoxy there’s the opportunity for new ideas and voices to be suppressed when they come into conflict with established bodies. Although science is supposed to be immune to this, it does sometimes – thankfully rarely – happen. If there are no independent science journalists, there’s no opportunity for new voices and evidence to be heard.

When scientists get it wrong, which they do, we need science journalists like Goldacre to be able to criticise their methodologies, assumptions and conclusions free from interference. If Goldacre had to run everything he wrote past the people he was writing about, he would get almost nothing published.

In short, we would see the wholesale marginalisation of dissent, and not just dissent from the journalist, but from opposing scientific voices too. It would be, in short, a disaster for science and science communication.

(Hat tip: Glyn Mottershead.)

Emily Cummins – my Ada Lovelace Day Heroine

Today is Ada Lovelace Day, the annual celebration of the achievements of women in science, tech, engineering and maths. As the Tweets flow thick and fast and the new website holds its ground, it’s time for me to think about my own contribution.

This year I have chosen Emily Cummins as my Heroine. At just 24, Emily has already won a number of awards and accolades because of her work on sustainable tech. She was named one of the Top Ten Outstanding Young People in the World 2010, won the Barclays Woman of the Year Award in 2009, and was Cosmopolitan magazine’s Ultimate Save-the-Planet Pioneer 2008.

One of Emily’s most notable inventions is an evaporative refrigerator that doesn’t need electricity, for use in developing countries for the transport and storage of temperature-sensitive drugs. But it’s not just her inventiveness that makes Emily a great role model – it’s her willingness to tinker, try things out, and invent. And that is something she puts down to having been supported in her tinkering as a child. She said in this interview with Female First:

I had a really inspirational granddad who gave me a hammer when I was four years old! We used to spend hours together in his shed at the bottom of the garden, taking things apart and putting them back together again. By the time I started at high school it meant I already understood the properties of different materials and how certain machinery worked. I’d always had a creative spark and because it was encouraged from an early age I suppose I had the confidence to take it forward and start inventing for myself.

There’s a very valuable lesson there to anyone who has daughters, granddaughters or nieces: Give them hammers, screwdrivers and, when they’re old enough, power tools. Encourage them to spend time in the garden shed or the garage with you, learning not just how to take things apart, but how to put them back together again. It’s through playing with technology – both hi-tech and lo-fi – that we learn how it all works, and once we know how it works, we can invent.

I don’t have a daughter but I do have a niece, and I love buying her the science and technology kits and toys that no one else thinks to get for her. I know she loves her chemistry set and her electric circuitry set, and she knows that she’ll get more fun things to play with from me that she can’t yet even guess at. I hope that, as she gets older, she’ll remember how much fun she finds them and will carry on thinking of herself as someone who can do science and tech, and won’t give in to boring gender stereotypes.

Emily makes a great role model for girls like my niece, and young women, but also for those of us who are a little older, who deep down, just want to get out into the garden shed and start tinkering. Emily shows us just what women can achieve, given the room to experiment and invent. And we all ought to remember that it’s not too late to get ourselves a hammer and start making stuff.

The Guardian: Burning platform is burning

After The Guardian and The Observer announced their ‘digital-first’ strategy the other week, which I, like Kevin, see as a burning platform admission, Alan Rusbridger went on Radio 4’s Media Show (MP3) to talk about the situation. Listening to the interview with half an ear open, one would hear a very calm, measured response from Rusbridger that would seem to make an awful lot of sense. But listening more closely, I heard a lot of statements that worry me, because they don’t seem to jibe with reality at all.

The Media Show’s Steve Hewlett started off by asking whether there really is a cash crisis at the Guardian, and Rusbridger replied:

It was a kind of, sort of, pre-crisis moment so we don’t want to get to that crisis. We were saying that if we did nothing and continued as we were, it wouldn’t look too good. We actually wouldn’t run out of money because we’ve got lots of investments in other things, but we would run out of the cash reserves that we have.

For a long time, The Guardian Media Group’s cash cow was actually Autotrader, of which they sold 49.9% in 2007 for £674m. At the time, the Independent reported:

Carolyn McCall, the chief executive of GMG, said: “The basis of all our investment is creating a sound financial basis for The Guardian. It’s all about the long-term security and independence of The Guardian. It’s a great position to be in.”

GMG is owned by the Scott Trust, a not-for-profit organisation set up to safeguard The Guardian “in perpetuity”.

Ms McCall said the money would be spent “very wisely and carefully over a very long period of time”, suggesting perhaps 50 or 60 years.

But “wise” and “careful” are not words that one could use to describe the next big deal GMG did, which was to buy the debt-laden Emap in 2008 alongside private equity firm Apax, a deal that Apax re-valued less than two years later:

GMG and Apax bought Emap for £1bn in 2008 but the business has been squeezed by the recession and weakened by a high debt burden which costs £50m a year in interest payments.

[…]

Apax has written its investment in Emap down to zero and, while GMG has not yet followed suit, it is expected to review its valuation in the next few months.

Emap has £700m of borrowings and GMG had to inject “an undisclosed amount of new cash” in Jan 2010. GMG has now written off about half of its investment in Emap, but it can’t write off the whole lot like Apax did because that would blow a hole in its balance sheet and no one wants to torpedo their own ship.

Furthermore, from Oct 2010:

Emap, the magazine, data and exhibitions business, has reported a 4% year-on-year fall in operating profit to £52m in the first half of 2010.

The company, which also reported a 4% fall in revenue to £135.5m in the first six months, said the results were “primarily due to uncertainty in the UK public sector”.

Emap’s profits continue to fall:

Emap has reported a significant fall in pre-tax profits in 2010, as government spending cuts hit revenues in its magazine publishing and conferences division.

The business-to-business publisher, which owns titles including Retail Week and events such as Cannes Lions, reported pre-tax profits of £27m for the 12 months to 31 December, according to accounts made available on Friday.

This is not a recipe for success: Emap will now struggle to cover the interest payments on its debt, and any profits from Autotrader are “ring-fenced to repay debts“. Whilst GMG has money in other investments, it’s not clear whether they have already had to raid those funds to keep going, or even whether there’s enough liquidity there for those investments to be useful.

So whilst Rusbridger is right that GMG has investments in other things, that’s not necessarily a reason to be relaxed about its finances. In 2009, The Guardian was burning £100,000 a day and reported an operating loss of £36.8m. In June’s announcement, their operating loss is stated at £33m, a reduction of just £3.8m.

“So it’s not a crisis,” Rusbridger continued, “but the point of the talk was to say that we have to do things before we get to a crisis.”

If the above doesn’t look like a crisis, then I don’t know what does. I have a lot of friends still at The Guardian, and the mood on the ground amongst many of them is that the sense of urgency one might expect to feel internally is entirely missing.

Hewlett later brought up the decline in advertising revenue, the fact that circulation is down 12% year on year, and the above mentioned cash losses of £33m. Rusbridger responded:

The big picture for the whole of the press, the whole of the market is going away at about 8%, and the Guardian is completely in line. Classified advertising has largely gone, and I don’t think that will come back and as circulations decline across the market, of course advertisers say, “Well, we want to pay you less,” and you just don’t want to get into that spiral of decline that we’ve seen in a lot of American newspapers where they then respond by viciously cutting back editorial costs, and then you have something that’s less readable and you’re in some sort of death spiral.

Everyone knows that classifieds have tanked, but during a recession recruitment advertising also tanks. The problem is, many of The Guardian’s pull-out sections, such as media, tech or public sector, are/were reliant on recruitment ads. Now, you’d think that with a subject like tech, where The Guardian had an excellent reputation and a great editorial team (and I say that not just because I used to freelance for the tech section), there would be plenty of opportunity to widen out the advertising from recruitment to display ads, sponsorships, events, etc. But that’s not what happened.

In 2009, The Guardian wiped out the tech section’s freelance budget and in December of that year they stopped printing tech as a standalone section. Commercial seemed to have no Plan B for the collapse of the recruitment ad market and instead of looking for one, the tech section was radically reduced. Towards the end of 2009 and through the first few months of 2010, The Guardian offered some very attractive voluntary redundancy packages and the majority of the tech section staff applied and were granted redundancy. (Disclosure: Kevin accepted voluntary redundancy from the Guardian at the end of March 2010.)

We know that focused verticals do work in digital — just look at GigaOm, TalkingPointsMemo, Engadget or AllThingsD (a sub-brand of the Wall Street Journal). They’re doing pretty well, because they are developing key niches and are able to aggregate focused audiences who are primed for ads, but those ads have to be the right sort. General ads aren’t going to fund specialised sections, but if commercial won’t get their heads round what the right sort of ads are and then go out and get them, there’s no hope. Ads are, of course, only part of the revenue equation, but the same holds true for events, sponsorship and premium content.

This is exactly the same discussion I had with Computer Weekly, and it’s a fundamental problem that news outlets need to deal with.

The story is much the same for Media Guardian, where arguably they had an even more bankable brand. The same goes for new ventures like Guardian Local, where they created great new products in three different cities and yet couldn’t capitalise on the skill of their journalists or the audiences they built.

In short, this is exactly the sort of editorial self-immolation that Rusbridger says he wants to avoid. But if you can’t capitalise on smart editorial staff and a passionate audience, what are you going to capitalise on?

Hewlett went on to ask Rusbridger about the announcement that sparked all this off: “How is digital first different to what you do now?”

Print is tremendously consuming of resource and time and energy and also the way that you think about things. So if you come in in the morning and your main concentration is on this huge thing that you’ve got to produce at the end of the day then that is going to dominate your thinking. The thing where all papers are exposed is in the innovation and the resources that we need for digital and the blunt truth is that we don’t have enough developers, we don’t have enough people who know about mobile, who know about Flash and data and multimedia so we need to get more of those and we need to spend more of our day thinking about those forms of our journalism.

Two points to make about this: Firstly, the attitude amongst some printies at The Guardian still leaves a lot to be desired. I saw in response to Kevin’s previous post a comment on a non-journalism forum somewhere on the internet from someone who said they worked as a print journalist on The Guardian. I’m not interested in pointing out who this person is, but I am going to paraphrase their stated position:

The website isn’t the newspaper. The website might be popular but it’s about being fast, about being the first to get a story up, and the standard of writing is pretty poor. In the newspaper, though, the writing is much better, probably the best in the industry.

Rusbridger needs to address the print-first attitudes, because print-first thinking results in print-first doing. He had an opportunity to shift towards digital thinking when The Guardian reworked its CMS, but instead of a digital-first system they ended up spending £18m on a new web CMS with a workflow that is still print-first. Even now, even with The Guardian’s great web presence, they still have problems doing basic things like adding URLs to stories.

As for stocking up on developers and journalists with digital skills, well, The Guardian used to have some great digital journalists, but many of them have now gone. The former desk editors of Guardian.co.uk like Deborah Summers (politics), the people who either did digital journalism on the ground, like Kevin, or the people higher up the food chain who had a pretty good handle on how digital was affecting the news landscape, like Emily Bell, have moved on. From top to bottom, the digital folk have either taken redundancy, been pushed out or edged aside.

Furthermore, with lots of the digital talent and the experience they had developed gone, and a culture that is still defined by print, the chances for those who remain, and those who have yet to join, to progress into senior managerial positions decrease. The Guardian, like all other news organisations, needs people who truly understand digital in senior positions, but without a pool of talent to promote, they just aren’t going to get that. Indeed, I’d say The Guardian has suffered a significant digital brain-drain and it’s going to take 10 to 15 years for digital folk to penetrate the higher levels of management.

Rusbridger continued:

We need to get more developers in, so we need all of these people with digital skills, and we’re losing money so we need to reduce the cost base, so yes, we will need to lose some people and we’ll try and do it in a voluntary way, but we need to end up employing fewer people than we have at the moment.

At this point, it’s worth noting that The Guardian has had at least three waves of cuts over the last four years. This will be its fourth round, with more than 300 positions cut, and yet it still only managed to reduce its losses by £3.8m a year?

The final part of this interview I want to address is The Guardian’s new foray into American waters. This is their third attempt to crack America and I find the reasoning quite bizarre. Rusbridger once more:

The UK market is too small, really, we can’t grow very much more here, it’s a fairly mature market. Meanwhile, there is a huge appetite for what we’re doing in America, where now a third of our readers are, and so far we’ve done it with virtually no marketing and a tiny staff, so we think it’s not a ‘nice to have’ but essential that we cater for that market.

Firstly, the idea that the UK market is saturated is strange. We know that general news is hard to monetise, but niche markets can do pretty well, yet in terms of products produced by The Guardian’s key editorial teams, they have barely scratched the surface. As I mentioned earlier, there’s no paid tech product of which I am aware, despite the fact that this is an area where The Guardian could do really well.

Common news business models fall into some pretty discrete categories:

  • Eyeballs: You aggregate an audience and sell access to that audience to advertisers and sponsors.
  • Information: You provide information that people can use to make money or make decisions.
  • Access: You provide access to data, people, networks, etc. which allow people to make money or make decisions.

The tech industry is really very good at exploiting all three of those basic business models but with much of the activity focused on the US, The Guardian could have used its brand to prise that UK market wide open. It didn’t. Media markets are also quite geographically specific, and again, the Guardian could have dominated that market, but didn’t. So the idea that the UK market is so mature that there’s no room for expansion here is a nonsense.

Secondly, the idea that the US market is easier because it is bigger would be a terrible premise for an expensive expansion. There may be a market there, but the monetisation should come first, before the new office and staff. As it is, by opening an office and moving a few people across the Atlantic and hiring 20 to 30 reporters and editors, The Guardian pushes its break-even point much further away than it needs to. The stakes will be high in an endeavour not well augured by its forebears.

The Guardian already tried Guardian America, with a site focused on American content with American ads, and that closed in October 2009 due to “continuing changes in the distribution patterns of web content”. Why will this new venture be more successful? And what will the business model be? As American news outlets know, making money off general news over there is just as hard, if not harder, than it is here, so The Guardian will need an innovative niche strategy. ‘America’ is not a niche, not over there, anyway. And ‘Europe for Americans’ is likely to be an equally tough thing to turn into a product. They are also launching the project in New York, already at the centre of a battle between the New York Times, the Wall Street Journal and the Huffington Post. Do they have the resources, much less the stomach, for such an epic media battle?

I have fond memories of The Guardian as the paper I always wanted to read, the paper I always wanted to write for, not to mention the paper that got me my first ever job. Indeed, The Guardian also gave me my first major freelance commission outside of the music press and I would have been happy to write a lot more for them had circumstances allowed. I don’t want to see it fail, and I don’t want to poke it with sharp sticks for the sake of it. But I am worried that Rusbridger is a visionary lacking in business sense, clarity and a firm grip on reality. If The Guardian is to survive, it needs someone who can lead it with a clear, commercial head. I’m not convinced Rusbridger is that person.

Direct visits: A referral data black hole

“Facebook drives more traffic than Twitter” ran the headline in May, after a Pew study seemed to show that Twitter just wasn’t as good for traffic numbers as people had thought. But there were problems with the study’s methodology, as many people, including Steve Buttry, said:

The PEJ report acknowledges that the Nielsen Co., the source of all the data studied, relies “mainly on home-based traffic rather than work-based,” without adding that most use of news sites comes during the workday.

and

The study uses strongly dismissive language about Twitter’s contribution to traffic to news sites. But it never notes that many – probably most – Twitter users come from TweetDeck, HootSuite, mobile apps or some other source than Twitter.com. Twitter “barely registers as a referring source,” the report concludes, ignoring or ignorant of the fact that the data counted only traffic from Twitter.com and did not count most visits from Twitter users.

As the web evolves, so the tools that we use to measure and assess activity need to evolve, but this hasn’t really happened. We might have managed to ditch the misleading idea of ‘hits’, but web traffic measurement is still immature, with many of the tools remaining basic and unevolved. But this problem is only going to get worse, as Steve’s second point hints at.

As I mentioned in this post, earlier this year I did some work looking at referrer logs for a client, OldWeather.org, a citizen science project that is transcribing weather and other data from old ships’ logs. One of the things that I noticed was how messy Google Analytics’ data is when it comes to finding out which social networks visitors have come from. Many social networks have multiple possible URLs which show up in the stats as separate referrers. For example, Facebook has:

  • facebook.com
  • m.facebook.com
  • touch.facebook.com

And Twitter has:

  • twitter.com
  • mobile.twitter.com

So in order to get a better picture of activity from Facebook and Twitter, we need to add the numbers for these subdomains together. But that alone doesn’t provide the full picture. A list compiled by Twitstat.com in August of last year showed that only 13.9% of its users were using the Twitter.com website, with another ~1% using Twitter’s mobile website. That means around 85% of Twitter users are not going to show up in the twitter.com referrals because they haven’t come from twitter.com or mobile.twitter.com.
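
By way of illustration, here’s a minimal Python sketch of that kind of grouping. The domain-to-network mapping and the referral counts are made up for the example; in practice you’d export the referrer report from your analytics tool and feed the real numbers in.

```python
# Minimal sketch: collapse per-subdomain referral counts into one figure per
# social network. The mapping and the sample numbers are illustrative, not
# real Google Analytics output.
from collections import defaultdict

NETWORKS = {
    "facebook.com": "Facebook",
    "m.facebook.com": "Facebook",
    "touch.facebook.com": "Facebook",
    "twitter.com": "Twitter",
    "mobile.twitter.com": "Twitter",
}

# Hypothetical referral counts, as exported from your analytics tool.
referrals = {
    "facebook.com": 1200,
    "m.facebook.com": 340,
    "touch.facebook.com": 85,
    "twitter.com": 410,
    "mobile.twitter.com": 30,
    "hootsuite.com": 220,
}

totals = defaultdict(int)
for domain, visits in referrals.items():
    network = NETWORKS.get(domain, domain)  # unknown domains stay as-is
    totals[network] += visits

for network, visits in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{network}: {visits}")
```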

It is possible to get some other hints about Twitter traffic as some web-based clients do provide referral data, e.g. twittergadget.com, brizzly.com, hootsuite.com or seesmic.com. But the big problem is that much of the traffic from Twitter clients will simply show up in your stats as direct visits, essentially becoming completely invisible. And when direct visits make up 40% of your traffic, that’s a huge black hole in your data.

It used to be assumed that direct visits were people who had your website bookmarked in their browser or who were typing your URL directly into their browser’s address bar. The advent of desktop Twitter clients has undermined this assumption completely, and we need to update our thinking about what a ‘direct visit’ is.

This obfuscation of traffic origins is only going to get worse as clients provide access to other tools. TweetDeck, for example, can no longer be assumed to be a Twitter-only client, because it also allows you to access your LinkedIn, Facebook, MySpace, Google Buzz and Foursquare accounts. So even if you can spot that a referral has come via TweetDeck, you have no idea whether the user clicked on a link from their Twitter stream, or via Facebook, LinkedIn, etc.

This makes understanding the success of your social media strategy and, in particular, understanding which tools/networks are performing most strongly, nigh on impossible. What if 20% of your traffic is coming from invisible Twitter clients and only 1% comes from Twitter.com? Because the majority of your Twitter traffic is hidden as direct traffic, you might end up sensibly but wrongly focusing on the 5% that has come via Facebook.com, reworking your strategy to put more effort into Facebook despite the fact that it is actually performing poorly in comparison to Twitter.
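
And because so much Twitter traffic never shows up as a Twitter referral, the raw percentages can point you at the wrong network. Here’s a back-of-the-envelope sketch using the hypothetical figures above and the rough 15% ‘visible’ share suggested by the Twitstat list:

```python
# Back-of-the-envelope correction: if only ~15% of Twitter users arrive via
# twitter.com or mobile.twitter.com, the visible referral share understates
# Twitter's real contribution. All figures here are hypothetical.
visible_twitter_share = 0.01  # 1% of total traffic shows up as twitter.com referrals
visible_fraction = 0.15       # rough share of Twitter users on the website (per Twitstat)
facebook_share = 0.05         # 5% of total traffic shows up as facebook.com referrals

estimated_twitter_share = visible_twitter_share / visible_fraction
print(f"Estimated real Twitter share: {estimated_twitter_share:.1%}")  # ~6.7%
print(f"Visible Facebook share:       {facebook_share:.1%}")           # 5.0%
# On the raw numbers Facebook looks five times bigger; after the correction,
# Twitter is probably the stronger performer.
```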

I recommend to all my clients that they keep an eye on their statistics and, if a tool isn’t working out well for them, ditch it and move on to another. There are so many social networks around that you just can’t be everywhere; you must prioritise your efforts and focus on the networks where you are most likely to reach your target audience. But we need clarity in the stats in order to do this.

The scale of this problem is really only becoming clear to me as I type this. For sites with low direct traffic, a bit of fuzziness in the stats isn’t a big deal, but for sites with a lot of direct traffic – and I see some sites with over 40% direct traffic – this is a serious issue. You could potentially have a single referring source making up a huge part of your total traffic, and you’d never know. And as more services provide APIs that can feed more desktop clients, which themselves provide more functionality than the original service itself, the growth of wrongly attributed ‘direct visits’ is only going to accelerate.

Without meaningful numbers, we’re back to the bad old days of gut feeling about whether one strategy is working better than another. I already see people making huge assumptions about how well Facebook is going to work for them, based on the faulty logic that everyone’s on Facebook, ergo by being on Facebook they will reach everyone.

Now, more than ever, we need reliable web stats so that we can make informed decisions, but these numbers are turning out to be like ghosts: our brains see what they want to see, not what is actually there. Even established research institutions like Pew are suffering from pareidolia, seeing a phantom Facebook in their flawed numbers.

Understanding Grímsvötn

Another Icelandic volcano has blown its top and, as you might expect, the media has gone batshit. Even otherwise commendable publications like Nature have lost their heads and are calling Grímsvötn “the new Eyjafjallajökull” (hint: it’s completely different). So here’s a quick look at the key information sources you need to understand what’s going on.

Firstly, let’s just talk about pronunciation. Whereas I could understand the reluctance to attempt Eyjafjallajökull, even though it’s not that hard once you’ve got your tongue round it, Grímsvötn is much easier. An Icelandic friend says the í is like the ‘ea’ in ‘eating’ and the ö is a bit like the ‘e’ in ‘the’ or the ‘u’ in ‘duh’, so basically a bit of a schwa. Repeat after me, then: Greamsvuhtn. Easy. Yet despite it being a relatively simple name to pronounce, at least one BBC news presenter bottled it and said something like “A volcano in Iceland” and, instead of tackling Eyjafjallajökull, said, “Another volcano in Iceland”… Wimp.

Right, so, horses’ mouths. There are plenty of them, so there’s no excuse for asking the Independent’s travel editor, who frankly probably knows jack shit about volcanoes, for comment (BBC, I’m lookin’ at you again!). Your key sources for Icelandic eruptions are:

1. The Icelandic Met Office
The IMO provides so much data that it’s hard to see why so many news orgs ignore it. You don’t get much closer to the horse’s mouth than this and, shock-horror, they speak English! Good lord, who’d’ve thunk it. Key pages on the IMO website:

  • News: Not updated very often, but still an important source
  • Updates: Updated more regularly, more useful info and links
  • Earthquakes: Last 48 hours’ worth of earthquakes. It’d be awesome if someone captured this and made a nice visualisation (there’s a rough sketch of how below). And if you’re missing data, just email and ask them – they’re very nice, as I found out last year when they sent me the archival data for Eyjafjallajökull.
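
For what it’s worth, that visualisation wouldn’t be hard to knock together. Here’s a rough sketch, assuming you’ve saved the 48-hour earthquake list locally as a CSV with time, latitude, longitude, depth and magnitude columns (a hypothetical format, not the IMO’s actual export):

```python
# Rough sketch of the earthquake visualisation suggested above. Assumes a
# local 'quakes.csv' with columns time, latitude, longitude, depth_km and
# magnitude (hypothetical format, not the IMO's actual export).
import pandas as pd
import matplotlib.pyplot as plt

quakes = pd.read_csv("quakes.csv", parse_dates=["time"])

fig, ax = plt.subplots(figsize=(8, 6))
scatter = ax.scatter(
    quakes["longitude"],
    quakes["latitude"],
    s=10 * quakes["magnitude"] ** 2,  # bigger dots for bigger quakes
    c=quakes["depth_km"],             # colour by depth
    cmap="viridis_r",
    alpha=0.7,
)
fig.colorbar(scatter, ax=ax, label="Depth (km)")
ax.set_xlabel("Longitude")
ax.set_ylabel("Latitude")
ax.set_title("Earthquakes, last 48 hours")
plt.show()
```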

The IMO have a lot more data, such as tremor, inflation, and seismic moment, but it will take an expert to interpret that for you.

2. The VAAC
The Volcanic Ash Advisory Centre is run by the UK Met Office and provides maps of the ash cloud forecasts, which it updates regularly. Key links:

[VAG: ash cloud forecast graphic from the VAAC]

If you look at the full-size version of this, you’ll see more clearly that there are three coloured lines: the blue line is labelled FL350/FL550, the green line FL200/FL350 and the red line SFC/FL200. The blue line marks the highest part of the ash cloud, between FL350 and FL550, i.e. between 35,000 and 55,000 feet. FL means “flight level” and the number is the altitude in hundreds of feet (strictly a pressure altitude, so roughly height above sea level rather than above the ground). The green line covers 20,000 to 35,000 ft, which is about where jets cruise (at around 33,000 ft), and the red line covers the surface (SFC) to 20,000 ft. VAGs (Volcanic Ash Graphics) are produced regularly and include four forecasts at six-hour intervals.
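
If the FL notation trips you up, the conversion really is that simple. A tiny sketch (the band labels are my own shorthand, not official VAAC terminology):

```python
# Flight levels are just hundreds of feet, so FL200 = 20,000 ft. The three
# bands correspond to the coloured lines on the VAG charts; the labels are
# my own shorthand, not official VAAC terms.
def fl_to_feet(fl: int) -> int:
    """Convert a flight level (hundreds of feet) to feet."""
    return fl * 100

bands = {
    "red (SFC/FL200)":     (0, fl_to_feet(200)),                # surface to 20,000 ft
    "green (FL200/FL350)": (fl_to_feet(200), fl_to_feet(350)),  # 20,000 to 35,000 ft
    "blue (FL350/FL550)":  (fl_to_feet(350), fl_to_feet(550)),  # 35,000 to 55,000 ft
}

for name, (lo, hi) in bands.items():
    print(f"{name}: {lo:,} ft to {hi:,} ft")
```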

The thing to remember about these VAGs is that they are forecasts based on current volcanic activity and wind forecasts, so they can and do change.

3. Regulators & air traffic control
At this stage, I’d love to say that the regulators and air traffic control bodies are a great source of info, but they’re not. That’s not going to stop me giving you their links, though.

  • UK Civil Aviation Authority. They also have a Twitter account, but haven’t yet got to grips with the idea of giving people useful information.
  • NATS: The National Air Traffic Services are giving regular updates, but they’re not particularly detailed. I’m pretty sure that the now ‘unofficial’ Twitter account was official this time last year, but either way, NATS should sort out their Twitter presence.
  • EuroControl: The EU air traffic control, also on Twitter, but doing a slightly better job of it.

I would like someone to slap the CAA, NATS and to some extent Eurocontrol round the chops and insist that they get their online acts together. They may think they have something better to do than communicate with the public, but frankly, I can’t think what it might be. At times like this, we need informed voices from the organisations making and implementing policy decisions to be communicating directly with the public, to counteract the uninformed nonsense we’re fed by our media. Right now, it’s just one great big mess of fail and it’s very disappointing. If any one of you organisations gets in touch with me, I’ll go so far as to give you a discount just to see you actually start to engage properly.

4. Erik Klemetti
Frankly, Erik’s work on the Eruptions blog, gathering links and keeping us up to date with what’s happening, blows all the official sources out of the water. Erik has created an awesome community of people who are constantly on the lookout for news and information, sharing it in the comments; from that smorgasbord, he picks the best links for his posts and provides an expert view on what’s happening, as well as some highly accessible explanations. This, to be honest, is the kind of stuff we should be seeing from the UK Met Office, the CAA, NATS and Eurocontrol, not to mention the media.

5. FlightRadar24
Always a fascinating site, FlightRadar24 has now added an ‘Ash Layer’ which superimposes the current forecasts on to their radar map of all the planes currently in the air. Well worth a peek.

6. Mila
Mila have a number of webcams up around Iceland. Currently there’s one working webcam trained on Grímsvötn, and although the picture’s a bit wobbly, when the sun’s up you can clearly see what’s going on. Or not going on: Right now, there’s no plume, but that can of course change at a moment’s notice.


So, that gives you a bunch of sources to check when you want to know what’s going on and you can’t find any actual information in the media. And if you’re like me, you’re still left with a question: What’s going to happen with Grímsvötn and its ash cloud? It’s impossible to predict precisely, but we do know that the ash is heavier and coarser than Eyjafjallajökull’s. We also know that the weather patterns are not the same, and that the eruption is unlikely to go on for as long. So we are probably not looking at a replication of Eyjafjallajökull’s disruption. (“Probably” means that nature can still confound the most sensible of predictions!)

All that said, Iceland is a highly volcanically active country, and the lull in activity we’ve seen throughout the history of aviation is not something we should take for granted. I wouldn’t panic, though. But nor would I believe everything I read in the media.