Innovation: Focusing on finding “The Next Big Thing” leads to performance pressure

This is cross-posted from The Media Briefing, a new site in the UK for media professionals. I like the cut of their jib. They are not only creating content, but they are also adding value to it, using semantic technologies to make it easier for busy professionals to find content relevant to them.

You want innovation? You can’t handle innovation.

Seriously though, once they’re established, most companies are geared toward stability, not disrupting their own operations. Newspaper and magazine companies are no different.

And print media had no real impetus to change radically until recently. Newspapers and magazines took the challenge from television and radio in their stride – but it took the combined impact of multi-channel television, video games and the internet to challenge print media’s dominance. But if you thought the last five years were disruptive, brace yourself for the next five.

The change in media economics has been a shift from scarcity – with few sources of information and entertainment – to more content choices than the human brain can possibly process. In this super-saturated media market, it’s about to get even more crowded.

AOL and Yahoo have decided to focus their strategies on content, although Yahoo in particular has tried this before and failed. Even if AOL fails, its efforts will put additional pressure on print media. AOL launched a local news service, Patch, in the US: Warren Webster, Patch’s president, recently told Ken Doctor that he can match the content production of a like-sized newspaper for about 4 percent of the cost. As Ken wrote:

“Patch can produce the same volume of content… for 1/25 the cost of the old Big Iron newspaper company, given its centralized technology and finance and zero investment in presses and local office space. (Staffers work out of their homes.)”

Demand Media is already operating in the UK, bringing its model of consistent work for freelancers at ridiculously low rates. They march to the beat of Google’s drum, commissioning content based on popular search terms. The content may be easy to parody, but Demand is preparing for what many are predicting will be a US$1.5bn flotation on the market.

So how will you turn staid institutions into nimble players in the new media environment?

One strategy that won’t work is locking a bunch of smart people in a room to come up with The Next Big Thing. The Economist, successful as it is, tried to do that with Project Red Stripe. It didn’t work, leading to a kind of performance anxiety and creative paralysis.

The industry has spent a lot of time hiring innovation officers and vesting innovation in a few positions. In the not-so-distant past, the people in these positions have had no budget, no staff, an ill-defined role and, therefore, little impact. Clay Shirky, in his seminal blog post Newspapers: Thinking the Unthinkable, said that these people saw what was happening and simply described it to their colleagues. Clay says:

When reality is labelled unthinkable, it creates a kind of sickness in an industry. Leadership becomes faith-based, while employees who have the temerity to suggest that what seems to be happening is in fact happening are herded into Innovation Departments, where they can be ignored en bloc.

Innovation is about creating a culture of constant improvement. If you could do one thing that would save every single journalist in your organisation ten minutes on every story, it might not be sexy, but cost savings like these are necessary to compete with someone who does what you do for a fraction of the cost.

Steve Yelvington, a digital content pioneer in the US, worked on the NewspaperNext project, and he was working on digital projects long before most media execs even knew what a computer was.

The NewspaperNext project looked at disruptive innovation through the lens of Clayton Christensen, author of The Innovator’s Dilemma. The basic question was this: “How and why (did) simple, low-end, inadequate, ‘junk’ products and services so often topple the big guy?”

These insurgents do it by starting with a product that is “good enough” and then constantly improving it. Insurgents start out “beneath” the incumbents, but then move upmarket. Recent hires by the Huffington Post, Yahoo and The Daily Beast show how pure digital companies are now starting to lure top talent away from the once imperious names of US journalism.

Racking your brains for The Next Big Thing is not the answer. The rules of the media market have already changed and it’s time to listen to the people you once thought were barking mad. Your survival might just depend on it.

Newsrooms vs. the Volcano

Over in Geneva, the EBU Radio News Conference 2010 is underway, and I’m watching from afar via the wonders of Twitter.

Late yesterday, Michael Good of RTE talked about how they covered the Eyjafjallajökull eruption and, finding that the “public wanted more than radio programmes could give”, had to turn to the web and networked journalism to improve coverage. Charlie Beckett reports:

In the final session it was made clear by speakers such as Michael Good of RTE that mainstream media can’t cope with big complex crisis stories such as the volcanic ash story: ‘the public wanted more than radio programmes could give’

RTE responded by using social media connected to their coverage to fill the gaps and to tell the micro as well as the macro story. To provide context as well as drama, information as well as narrative. As Michael put it, it showed how social media has to be at the heart of the newsroom.

Brett Spencer also reported that “SWR say if it happened again right now they would approach the science and the experts with more caution” and “Richard Clark of the BBC Newsroom says an awful lot of experts got airtime who actually didn’t know very much.”

As someone who followed Eyjafjallajökull’s progress from the beginning of the first ‘tourist eruption’ right the way through to the final gasps of the phreatomagmatic eruption (i.e. the big explosive bit), I can say with some certainty that the mainstream media did a pretty appalling job of choosing experts to talk about the eruption. Often, they chose to speak to industry representatives, such as union leaders or airline owners, who knew very little about the eruption itself but had very strong views on what they thought reality ought to be. They also had a vested interest in portraying the situation in a particular light.

I was particularly disgusted by people like Richard Branson, who threw a strop because he thought the flight ban was unnecessary. The BBC reported comments from Branson that were either disingenuous or dangerously ignorant:

Virgin Group chairman Sir Richard Branson meanwhile told the BBC that he believed governments would be unlikely to impose a blanket ban again.

“I think if they’d sent up planes immediately to see whether the ash was actually too dangerous to fly through or to look for corridors where it wasn’t very thick, I think that we would have been back flying a lot sooner,” he said.

This fundamentally misrepresents the monitoring that was going on at the time (planes were being sent up to look at the ash cloud) and, more importantly, misunderstands the nature of ash clouds. They are not a uniform blanket of ash floating through the air, but a constantly changing area of high and low ash densities: any ‘corridor’ there today probably wouldn’t be there tomorrow.

But in the scramble for experts, no one flubbed quite as badly as the Wall Street Journal and CNN, who both featured Robert “R.B.” Trombley, a self-styled volcanologist who turned out to be not quite the expert they had assumed.

Going back to #RNews10, Charlie Beckett said, “Yes the volcano exposed limits of MSM & value of social media bt it also exposed lack of data transparency from airlines, govt etc” to which Mike Mullane replied, “Beckett: Don’t beat yourselves up. There was failure on the part of governments and meteorologists to provide data for journalists”. And, in a related point, Andy Carvin Tweeted, “Don’t think anyone mentioned maps, though, whether newsroom generated, user-generated or both. Were there any?”

Mike and Charlie’s assertions are only true for the UK and the air travel industry: The airlines were, unsurprisingly, entirely opaque. The UK Met Office had some data, particularly on ash measurement and predictions, but could have done a much better job of communicating what they were doing and providing data. That’s a problem they seriously need to fix: They opened themselves up to undeserved criticism because no one had any idea what they were actually doing. The Civil Aviation Authority and the National Air Traffic Services should also be soundly criticised for appalling communications. Their online information and data was not well organised, to say the least.

But there was a huge amount of data coming out of other sources, particularly the Icelandic Met Office, which the mainstream media completely ignored. The IMO was providing near-live earthquake data for the Mýrdalsjökull area, which includes the Eyjafjallajökull icecap, available as a map or a data table. And, as I discovered when I did this myself, if you sent them a nice email they would send you the raw data to play with. There is no reason why the media could not have contacted the IMO and used some of this data in visualisations for their coverage, as DataMarket.com did.
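For anyone wondering what working with that data would have involved: not much. Here is a minimal sketch in Python of charting quake counts from such a data table, assuming an invented CSV layout rather than the IMO’s actual format:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Invented file and columns, standing in for the IMO's data table:
# time, latitude, longitude, depth_km, magnitude
quakes = pd.read_csv("myrdalsjokull_quakes.csv", parse_dates=["time"])

# Count quakes per day: the simplest possible "is activity rising?" chart
per_day = quakes.set_index("time").resample("D")["magnitude"].count()

fig, ax = plt.subplots()
ax.bar(per_day.index, per_day.values)
ax.set_ylabel("Earthquakes per day")
ax.set_title("Seismicity near Eyjafjallajökull (illustrative data)")
plt.show()
```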

There was quite a lot of ash forecast data coming out of various institutes, primarily the UK Met Office. There were videos (search for Eyjafjallajökull) and photos taken by scientists, tourists, locals and the Icelandic news organisations (whose coverage was obviously much better). There were multiple live webcams, and volcano enthusiasts captured and shared webcam timelapses showing the eruption and jökulhlaups (flash floods of ash and meltwater) on a daily basis. There was even a cut-out-and-keep model of the volcano, made by the British Geological Survey.

And there was some flight data available, as exemplified by this fabulous timelapse of the European flights resuming after the ban:

The problem was that most news journalists, obviously, do not have the kind of specialist knowledge needed to assess sources, experts, or data for an event that is so far outside their usual field of experience. I understand that journalists can’t be experts in everything, but I do expect them to know how to find information, sources and data, and to do so reliably.

But they seemed oblivious to the online communities that were following this eruption closely and where there were people who could have helped them. I spent a lot of time on Erik Klemetti‘s wonderful blog, Eruptions (new site, old site). Erik, a volcanologist at Denison University in Ohio, played host to a community of scientists and amateurs who discussed developments in detail and answered questions that people had about how all this volcano stuff really works.

Although I was on that blog almost every day, I don’t remember a single journalist ever asking in the comments for help in finding information or understanding its implications. I do remember, however, a lot of people popping in to get clarification on the misinformation promulgated by the media, particularly rumours that Eyjafjallajökull’s neighbour, Katla, was about to erupt.

The truth is that Eyjafjallajökull was probably the best observed, monitored and recorded eruption in history. The sheer volume of data produced was enormous. And the mainstream media ignored everything but the pretty pictures.

A comment on comments

In July last year, I gave a lunchtime talk to the BBC World Service about the meaning of ‘social’ online, the problems that we face with commenting on news sites, and the way I thought we needed to approach social functionality design in the news arena.

I opened with a couple of videos: The infamous Mitchell and Webb “What do you reckon?” sketch that has served both Kevin and me so well in our presentations, and a Sky News ident promoting their discussion forums.

My point was that, since the earliest days, news websites have seen interactive parts of their sites, like comments or forums, as a place for a damn good punch-up. And those who thought that they were providing a valuable place for feedback and discussion found that they had actually created toxic environments. I probably (although I don’t remember) mentioned Comment is Free as the archetypal pit of vipers. I usually do.

I went on to discuss the core concepts of social objects, relationships, trust and privacy, and had a stab at attacking one of the core misunderstandings the media has about community: Your audience is not a community.

After attempting to run through what these concepts mean, and how they affect social website design, I went on to emphasise why this is important. From my notes at the time:

Bad community reflects badly on your brand.

A community of fringe voices is alienating and unconstructive, and opens your brand up to ridicule.

I closed with the point that designing for social interaction is not just a matter of slapping comments on everything, but requires forethought and a deep understanding of the nature of ‘social’.

The first question was asked by Peter Horrocks, the Director of the BBC World Service. He asked if I could give them examples of any news organisation that had done it properly. I replied that, as far as I was aware, no news organisation had taken the necessary steps to create social functionality worthy of note.

The first parts of news sites to get comments were the early blogs, many of them run on Typepad or Movable Type, which was far and away the best platform at the time. This was before WordPress and before specialist commenting systems, so dealing with spam and moderating comments could be arduous, but most blogs had niche audiences who tended to behave better, partly because they actually got to know one another.

Then other parts of the news organisations heard the siren call of the comment, and before you knew it, comments were everywhere. You could leave a comment on almost every news story you stumbled upon, regardless of whether commenting was appropriate. Stories of murders and rapes and disasters asked you, “What do you reckon?”, and people reckoned away.

I have never seen any evidence that news organisations take the problem of community seriously enough. For them, the more comments a piece got, the more page views it earned, and the higher they could push their ad rates. So long as nothing was libellous, hey, go for it.

Kevin has said that most news orgs don’t have an engagement strategy; they have an enragement strategy. Community strategies have focused more on how to keep moderation costs down whilst increasing comments, rather than going back to first principles and figuring out what comments are really for, understanding people’s behaviour in comment areas, and then designing a tool which helps facilitate positive behaviours and reduce the potency of negative ones.

In the half-decade since news organisations discovered commenting, they have failed to fully understand it and to modify their systems appropriately.

Now Reuters has finally taken a step in the right direction by adding a rating system that awards points for good comments and, eventually, allows the user to earn extra privileges (which they can also lose through bad behaviour). They have also added profile pages which aggregate each user’s comments and provide a count of how many have been accepted, removed or reported for abuse.

That is a good start, but it is just a start. It will be interesting to see what effect their basic rating system will have. Whenever one is rewarding a behaviour, one has to think about how that reward system can be gamed and what unintended consequences might result.

In this case, I can see how a user might put a lot of effort into building up a large stash of points by adding a lot of easy, unobjectionable content in order to reach a VIP status which they can then abuse. Yes, they’ll be punished for that abuse, but not until some of their abusive comments have been published straight to the web.
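To make that gaming vector concrete, here is a hypothetical sketch of such a points-and-privileges scheme; the threshold and scoring values are invented, not Reuters’ actual rules:

```python
# Hypothetical points-for-comments scheme, illustrating the exploit
# described above. None of these numbers come from Reuters.
class Commenter:
    VIP_THRESHOLD = 100  # assumed: points needed to skip pre-moderation

    def __init__(self):
        self.points = 0

    def comment_accepted(self):
        self.points += 1   # bland, unobjectionable comments still score

    def comment_reported(self):
        self.points -= 10  # the punishment arrives only after publication

    @property
    def is_vip(self):
        return self.points >= self.VIP_THRESHOLD

troll = Commenter()
for _ in range(100):        # farm points with easy one-liners...
    troll.comment_accepted()
print(troll.is_vip)         # True: ...then abuse the unmoderated privilege
```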

Why would someone go to all that trouble? On the web, no reason is required other than “Because I can”.

Reuters’ system may help slow down the toxicity of news site comments, but it isn’t the full Monty. It doesn’t address how people might come to form positive relationships via their site. It doesn’t consider how trust between readers (or readers and journalists) may develop or be eroded. It doesn’t think about the social objects around which people may want to interact (hint: the story is not the atomic unit of news). It doesn’t do anything to develop a true community.

On privacy, at least, it is neutral. Contrary to the position of one commenter on Baum’s blog post, if you post lots of stuff in public, having that stuff aggregated into one spot is not an invasion of your privacy and is not speech-chilling. If you are ashamed of what your comments collected say about you, perhaps you ought to think a bit more about what you say.

So, Reuters get a point for trying, but which news organisation is going to really grasp the nettle and do interaction properly?

Journalists’ identity as a barrier to tech adoption

As I mentioned last week, I’ll be speaking about the Future of Context at the Social Media Forum in Hamburg tomorrow. Bjoern Negelmann has been helping to frame the discussion ahead of the conference and, after our interview by email and blog, he’s posted a follow-up looking at a possible evolution of Google’s Living Stories concept (in the original German and also in English via Google Translate).

After outlining how he sees this working, Bjoern asks what’s standing in the way of the implementation of such a platform. As he points out, the technology exists. Why hasn’t anyone tried it? Part of the problem is that cash-strapped organisations aren’t prioritising this kind of work over other strategic goals. However, I also see other roadblocks.

I responded:

You ask why such an approach hasn’t been implemented. The main reason is culture, and that is an issue not just for journalism but for many industries. New technologies often challenge not only existing roles but also existing organisational structures. That means that managers often assess new technology not on whether it delivers a better product or experience but on whether it will undermine their authority.

Have you ever developed what you thought was an excellent social media strategy only to see it collapse due to lack of implementation by key managers? You can have the best technology and clear performance targets, and it will still fail without buy-in from key gatekeepers hidden within the organisation.

The other issue is really about professional identity. Journalists are very tribal, meaning that they have always been very sensitive about who is and isn’t a journalist. Economic uncertainty has only heightened this sensitivity. Many journalists still define themselves not only by their jobs but by very specific ways in which they do their jobs. Case in point: in the UK, you must know shorthand or you aren’t a proper journalist. In the US, where I’m from, shorthand isn’t a requirement for journalism training. Although I can type faster than most people can do shorthand, I’m not really a proper journalist because I don’t know shorthand. It’s not difficult to implement technology in journalism organisations that doesn’t affect journalists’ roles, but it is devilishly difficult to implement technology that changes how they do their jobs, because it challenges their identity.

Discuss.

Social Media Forum: My thoughts on the future of context

Next week, I’ll be giving the keynote at the Social Media Forum in Hamburg, and I’ve been asked to speak about the future of context. Bjoern Negelmann asked me a few questions via email about the subject, and he’s kindly allowed me to cross-post the interview for the Social Web World blog.

1) Kevin, as an expert in new digital media strategies, you will be giving a talk on the “future of context” at the upcoming Social Media FORUM on Sept 28. Can you give three keywords that describe what we can expect from your talk?

Relevance, insight, value

2) Is “context” the key to fixing media companies’ misguided internet strategies? And if so, what is the explanation?

First I should say, as much as everyone in the industry wishes it, there are no silver bullets, no single solution that will solve the problems that media companies are facing. The iPad won’t save us. Paywalls won’t save us, and simply finding ways to increase context won’t on its own save us.

That being said, most current digital media strategies are fundamentally flawed. They are mostly based on the premise that the internet really is just another distribution medium like radio, television and print. They rely on a media landscape of scarcity instead of abundance. These outdated assumptions are rooted in the era of mass media. In 20th Century mass media models, with just a few sources of information and entertainment, success relied upon building the biggest audience possible and using paid content and advertising to make loads of money.

As Edward Roussel of the Telegraph said, the link between rising audience and higher returns held true until the spring of 2008. Then something happened. Yes, it was partly due to the recession, but it is also due to an oversupply of online advertising space. As Paid Content says, premium and mid-tier publishers are producing too much content, creating a surplus of pages to run ads against. As in any market, if supply outstrips demand then you have downward price pressure.

There are exceptions. With the online advertising recovery, The Daily Mail in the UK has been able to outgrow the competition and translate that into commercial success. Big still sometimes wins. There are still lucrative verticals such as business in which returns have stood up or actually grown during the recession. The Wall Street Journal, The Economist and The Financial Times are all enjoying success, partly due to increasing interest in business and finance due to the recession. However, most other publishers find themselves under severe pressure.

To change our fortunes, we first need to question the assumptions underlying 20th Century media business models. Until the 1980s, both audiences and advertisers had fewer choices and media owners could charge monopoly rents for advertising. But when the multi-channel world, whether broadcast or online, arrived, the media’s first reaction was to create more channels and content to try to take advantage of increased distribution opportunities. We’re now seeing the limits of such an approach as the law of diminishing returns takes hold.

Context is about adding value to content in ways that benefit audiences and advertisers. It makes it easier for audiences to find and make sense of relevant content. Adding context, rather than simply creating more content, means realising that content is no longer scarce, but audiences’ time and attention are. It helps advertisers by providing opportunities for more highly targeted advertising.

3) But this strategy means allocating resources to producing context. Isn’t this at odds with the recent strategies of media companies, which are cutting costs because of the “lousy pennies” of online advertising?

While media companies, especially newspapers, have been cutting staff to cut costs, they have also been creating more content. Digital production techniques make this possible but, again, we’re starting to reach the limits of that strategy. Basically, we have an oversupply of content driving an oversupply of digital advertising space, and traditional markets have one way of valuing a surplus: returns plummet.

The market is already flooded and the last thing we need is more content. A study commissioned by the Associated Press (PDF) found that young audiences were switching off because they were lost in a deluge of episodic updates. The key conclusion was: “The subjects were overloaded with facts and updates and were having trouble moving more deeply into the background and resolution of news stories.” In essence, the news industry is acting against its own economic interest by producing more content and exacerbating the problem of information overload. It’s like trying to save a drowning man by giving him a glass of water.

We need a much more focused approach. Allocating resources to producing context around existing content while making strategic choices about what not to produce will create opportunities by adding value and creating differentiated products. Yes, we live in a world of flow, with constant streaming updates, but mining that flow for context and value-added information will be where sustainable business models are.

4) So, putting the weight on context – what are the formats and examples of this strategy?

Thomson Reuters has a service called Calais. It analyses thousands of mainstream media and non-traditional sources of information every day. It powers services such as Zemanta, which allows bloggers and traditional journalists to easily add images and links, which add context, to articles. As a platform, Thomson Reuters can sell Calais to enterprises to make sense of the data and information they create, but it’s also a tool the company itself uses to algorithmically find meaning in the flow of information from traditional and non-traditional news organisations, e.g. finding new companies to watch before they show up on the traditional news radar.
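To give a flavour of how a journalist or developer might use such a service, here is a rough sketch of an entity-extraction call; the endpoint, header and response fields are illustrative assumptions, not the documented Calais API:

```python
import requests

ARTICLE = "Shares in Acme Widgets rose after the Nottingham-based firm won..."

# Hypothetical endpoint and key header; consult the real service's docs.
resp = requests.post(
    "https://api.example.com/extract",
    headers={"X-API-Key": "YOUR_KEY"},
    data=ARTICLE.encode("utf-8"),
)

# Assumed response shape: a list of typed entities found in the text,
# e.g. {"type": "Company", "name": "Acme Widgets"}
for entity in resp.json().get("entities", []):
    print(entity["type"], entity["name"])
```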

One of my favourite examples right now is the Sunlight Foundation’s Poligraft. Using public information about political contributions and a service like Calais, it reveals details about donors and major campaign contributions to members of Congress. It quickly adds a layer of context to any story involving political leaders.

The Guardian is achieving some great things with its Datablog and Datastore. Data is a key part of many stories that journalists write every day, but in the past, the only thing we did with those numbers was highlight a few. Now, the Datablog not only allows everyone to see the full set of numbers, but by hosting them on Google Docs for others to download, people with skills in data visualisation are able to present these numbers in new and creative ways. The Guardian has a group on Flickr to highlight their work.
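The barrier to entry for that kind of re-visualisation is remarkably low. As a sketch: any spreadsheet published to the web from Google Docs can be pulled down as CSV and re-plotted in a few lines (the key and column names below are placeholders, not a real Guardian dataset):

```python
import pandas as pd
import matplotlib.pyplot as plt

# A published Google Docs spreadsheet can be exported as CSV; <KEY> is a
# placeholder for a real document key.
SHEET_CSV = "https://docs.google.com/spreadsheets/d/<KEY>/export?format=csv"

df = pd.read_csv(SHEET_CSV)
df.plot(x="year", y="spending_gbp_m", kind="bar")  # assumed column names
plt.title("Re-visualising a published dataset")
plt.show()
```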

The BBC provided another great example during the World Cup this year. They called it dynamic semantic publishing, and it took the official FIFA statistics and used them to dynamically create a rich store of information about players, teams and groups. Not only was it a rich presentation of the facts around the World Cup, but it also helped their audience discover BBC coverage of their favourite teams and players.
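The core idea can be shown with a toy triple store: store the facts once, then let pages query them instead of being hand-built. The URIs and facts below are invented, and the BBC’s actual pipeline was far richer than this sketch suggests:

```python
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/worldcup/")  # made-up vocabulary

g = Graph()
g.add((EX.messi, RDF.type, EX.Player))
g.add((EX.messi, EX.playsFor, EX.argentina))
g.add((EX.argentina, EX.inGroup, EX.groupB))

# "Every player whose team is in Group B" becomes a query, not a page
query = """
SELECT ?player WHERE {
  ?player <http://example.org/worldcup/playsFor> ?team .
  ?team <http://example.org/worldcup/inGroup>
        <http://example.org/worldcup/groupB> .
}
"""
for row in g.query(query):
    print(row.player)
```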

5) If you take a look ahead to the future – what kinds of media companies will be able to adapt to this strategy?

The kind of companies that have been able to adapt to this strategy are ones that see beyond traditional containers of content. For news, they realise that the written story is no longer the atomic unit, the indivisible unit, of journalism. There is data and context within the story, context that can be linked and used to draw connections between seemingly unrelated events in our increasingly complex world. Context is not just about adding value to pieces of content; it also makes it easier to organise content and adds new ways for audiences to find and discover what is relevant and interesting to them.

Real-time search: The web at the speed of life

This is the presentation that I gave this week at the Nordic Supersearch 2010 conference in Oslo, organised by the Norwegian Institute of Journalism. In the presentation, I looked at the crush of information that people are dealing with: the 5 exabytes of information that Eric Schmidt of Google says we’re creating every two days.

I think search-based filters such as Google Realtime are only part of the answer. Many of the first generation of real-time search engines help filter the firehose of updates being pumped into Facebook and Twitter, but it’s often difficult to understand the provenance of the information that you’re looking at. More interestingly, I think we are now seeing new and better ways to filter for relevant information beyond the search box. Search has been the way for people to find information that is interesting and relevant, but I think real-time activity is providing new ways to deliver richer relevance.

I also agree with Mahendra Palsule that we’re moving from a numbers game to the challenge of delivering relevant information to audiences. In a lot of ways, simply driving traffic to a news site is not working. Often, as traffic increases, loyalty metrics decrease. Bounce rates, the percentage of visitors who spend less than five seconds on your site, go up. Time on site goes down. The number of single-page visits increases. It doesn’t have to be that way, but it is too often the case. For news organisations and other content producers, we need to find ways to increase loyalty and real engagement with our content and our journalists. I believe more social media can increase engagement, and I also believe that finding better ways to deliver relevant content to audiences is key.
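For concreteness, here is how those loyalty metrics fall out of a toy visit log, using the definition of a bounce given above:

```python
# Toy visit log: one record per visit, with pages viewed and seconds spent.
visits = [
    {"pages": 1, "seconds": 3},    # a bounce, by the definition above
    {"pages": 1, "seconds": 40},   # single-page, but engaged
    {"pages": 5, "seconds": 310},  # a loyal reader
]

bounce_rate = sum(v["seconds"] < 5 for v in visits) / len(visits)
single_page = sum(v["pages"] == 1 for v in visits) / len(visits)
avg_time = sum(v["seconds"] for v in visits) / len(visits)

print(f"bounce rate {bounce_rate:.0%}, single-page visits {single_page:.0%}, "
      f"average time on site {avg_time:.0f}s")
```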

Google’s method of delivering relevance in the past was to determine the authority of content on the web by looking at the links to that content, but now we’re seeing other ways to filter for relevance. When you look at how services such as paper.li filter content, we’re actually tapping into the collective attention of either our social networks or networks of influence, in the case of lists of influential Twitter users. In addition to attention, we’re also starting to see location-based networks filter based not only on what is happening in real-time but also on what we’re doing in real-space. We can deliver targeted advertising based on location, and for news organisations, there are huge opportunities to deliver highly targeted content.

Lastly, I think we’re finding new ways to capture mass activity by means of visualisation. Never before have we been able to tell a story in real-time as we can now. I gave the examples of the New York Times Twitter visualisation during the Super Bowl and also the UK Snow map.

I really do believe that with more content choices than the human brain can possibly cope with, intelligent filters delivering relevant information and services to people will be a huge opportunity. It’s one of the biggest challenges for news organisations: in the battle for attention, we have to stay constantly focused on relevance or become irrelevant. Certainly, any editor worth his or her salt knows (or thinks he or she knows) what their audience wants, but there are technology companies developing services that can help deliver a highly specialised stream of relevant information to people. As with so many issues in the 21st Century, it won’t be technology or editorial strategies alone that deliver relevance or sustainable businesses for news organisations; it will be the effective use of both.

 

Skills for journalists: Learning the art of the possible

I’m often asked at conferences and by journalism educators what skills journalists need to work effectively in a digital environment. Journalism educator Mindy McAdams has started a nice list of some of these skills in a recent blog post. A lot of journalists (and journalism educators) scratch their heads over what seems an ever-expanding list of skills they need to do digital. It feels like inexorable mission creep.

I can empathise. One of the most difficult parts of my digital journalism career, which began in 1996, has been deciding what to learn and, also, what not to learn but to delegate to a skilled colleague. I’m always up for learning new things, but there is a limit. Bottom line: It’s not easy. In the mid-90s, I had to know how to build websites by hand, but then automation and content management systems made most of those skills redundant. It was more important to know the possibilities, and limits, of HTML. When I worked for the BBC, I picked up a lot of multimedia skills including audio recording and editing, video recording and basic video editing, and even on-air skills. I was also able to experiment with multimedia digital story-telling. However, with the rise of blogs and social media, suddenly the focus was less on multimedia and more on interaction. All those skills come in handy, but the main lesson in digital media is that it’s a constant journey of education and re-invention.

What do I mean about choosing what not to learn? In the mid-90s, I was faced with a choice. I could have learned programming and become more technical, or I could focus on editorial and work with a coder. I did learn a bit of Perl to run basic scripts for a very early MySociety-esque project about legislators in the state where we worked, but after that, I handed most of the work off to a crack Perl developer on staff. I knew what I wanted to do, and he could do it in a quarter of the time.

I knew that my passion was telling stories in new ways online and, whilst I didn’t learn to program, I did pick up some basic understanding of what was possible: Computers can filter text and data very effectively. They can automate repetitive tasks and, even back in the late 1990s, the web could present information, often complex sets of data, in exciting ways. I realised that it was more important for me to know the art of the possible than to learn precisely how to do it. My mindset is open to learning and my skillset is constantly expanding, but to be effective, I have to make choices.

One thing that we’re sorely lacking as an industry is digitally minded editors who understand how to fully exploit the possibilities created by the internet, mobile and new digital platforms. Print journalists know exactly what they want within the constraints of the printed page, which in presentation terms is often much more flexible than a web page. However, they bring that focus on presentation to digital projects. They think of presentation over functionality, largely because they don’t know what’s possible in digital terms. As more print editors move into integrated roles, they will have to learn these skills. They will eventually but, by and large, they’re not there yet. Note to newly minted integrated editors: There are folks who have been doing digital for a long time now. The internet was created long before integration. We love to collaborate, but we do appreciate a little R-E-S-P-E-C-T.

In terms of learning the art of the possible, my former colleague at The Guardian, Simon Willison, summed this up really well during a recent panel discussion:

I kind of think it’s the difference between geeks and the general population. It’s understanding when a problem is solvable. And it’s like the most important thing about computer literacy they should be teaching in schools isn’t how to use Microsoft Word and Excel. It is how to spot a problem that could be solved by a computer and then find someone who can solve it for you.

To translate that into journalism terms, it’s about knowing how to tell stories in audio, text, video and interactive visualisations. It’s about knowing when interactivity will add to or distract from a story. It’s an understanding that not every story needs to be told the same way. It’s about understanding that you have many more tools in your kit, but that it’s foolish to try to hammer a nail with a wrench. It’s not about building a team where everyone is a jack of all trades, but building a team that gives you the flexibility to exploit the full power of digital storytelling.

Silly season’s here again

Earlier in the week, Channel 4’s Samira Ahmed sent these messages to Twitter:

SamiraAhmedC4: MATHS HELP! Do I need to say “comma” if I read out this formula tonight: p(h,r)=u(h,r)-pr=g(h, Zr)+f1[h, m(o,r)]+f2[h, m(o,r)]+E-pr.
Aug 16, 2010 03:56 PM GMT

SamiraAhmedC4: It’s the formula to explain how Blackpool (like Bath before it) is becoming classier.
Aug 16, 2010 03:57 PM GMT

If you’ve spent any time at all watching the debunking of bad science coverage, you’ll be wincing, because that formula has all the signs of being total tosh. August is silly season, the time of year when PR companies know they can trot out any old rubbish and it’ll make headline news because nothing else is going on. It’s a tried and tested method.

Ben Goldacre spends quite a bit of time debunking not just silly season stories but also flaws in the media coverage of health and medicine stories that could have serious public health repercussions. It was entirely unsurprising that he should see Samira’s tweets and dismiss them out of hand, given the PR industry’s history of producing bunkum formulae to promote their own brands.

Ben said:

BenGoldacre: .@SamiraAhmedC4 no, you just have to say “by reading this out, i have lost all respect for myself as a journalist”

Ben is followed by a lot of people who hold similar viewpoints to his and a pile-on ensued, with quite a few people being unpleasant to Samira.

Update 13:06: Gordon Rae has found the original press release from Nottingham University Business School.

The story seems to have originated from the PA, who’d done a very shoddy job in covering it:

Resort’s winning formula hailed
Academics have claimed new Premiership heroes Blackpool as living proof of a formula predicting the resurgence of the fading Lancashire resort.

The equation is based on how different social classes interact to make or break a holiday resort.

Nottingham University Business School used the rise, fall and renaissance of Bath since its 18th-century heyday as the original basis for the theory. But now they claim Blackpool’s return to top-flight football shows the formula applies.

“Academics have claimed” is a classic fudge which often really means “We got sent a press release and can’t be bothered to actually find out any more about the story so we’re just going to make it fuzzy round the edges and hope no one notices”. It’s no wonder that people thought it was nonsense. It had all the signs.

It turns out that the story is actually based on a published paper:

The rise, fall and renaissance of the resort: a simple economic model
Author: Swann, G.M. Peter
Source: Tourism Economics, Volume 16, Number 1, March 2010, pp. 45-62(18)
Publisher: IP Publishing Ltd

When he found out, Ben apologised both on Twitter and on his own Posterous.

BenGoldacre: .@samiraahmedc4 humblest apologies, all the outward signs of bullshit were there, and was impossible to tell from PA report. sorry!

Many of his followers who had been rude to Samira also apologised to her.

Now, normally, this little spat wouldn’t be worth blogging about. A disagreement between people on Twitter that resolves amicably is barely worth a second thought. It happens all the time.

But the idea that, after the friendly apology, it was all water under the bridge is a little undermined by Samira’s article in today’s Independent, which in my opinion not only sports a lot of unnecessary ad hom attacks, but also fails to draw the most important conclusions from this storm in a teacup.

The title, Samira Ahmed: Targeted by the ruthless Twittermob, sets a poor tone from the off. I’ve had a look through the tweets and a “ruthless Twittermob” it was not. A snarky, rude, inconsiderate and thoughtless group, yes. But a ruthless mob?

Samira begins by explaining that she is new to Twitter and had got some advice from “old Twitter hands”:

1. Twitter works best as a two way networking tool – asking as much as telling. And 2. Scientists, and the writer Ben Goldacre in particular, can get a bit aggressive on it.

The first piece of advice is good. The second is both a sweeping generalisation in regards to scientists and an ad hom towards Ben.

I flagged this second sentence up on Twitter, and Samira told me that it had been added by the sub and that she was unhappy about it, so we’ll have to take the entire piece as an amalgam of Samira’s own writing and the Indy’s sub’s writing, as we have no way of telling them apart.

Update: Whilst writing this, this sentence has been updated to: “2. The science writer Ben Goldacre can get a bit aggressive on it.”

But getting the first, now even sharper, ad hom against Ben in before the end of the first paragraph makes me wonder what the point of this piece is. If all is forgiven and everyone has apologised, why go to a national newspaper to drag everything over the coals again? Was this piece written to examine the phenomenon of herd-like behaviour online and the psychology that might explain it? Or to have a stab at Ben and by association, his newspaper, The Guardian?

The second para takes another swipe at Ben, about how he got “his science facts wrong and launch[ed] a personal attack on my journalistic integrity.” Ben commented before checking the facts, and then apologised when he realised the formula was real. He shouldn’t have jumped to conclusions like that, but I do feel Samira’s overstating her case a bit. Here are his tweets in order, so you can make up your own mind:

.@SamiraAhmedC4 no, you just have to say “by reading this out, i have lost all respect for myself as a journalist”     6:07 PM Aug 16th

any nerd bloggers who want to pre-mock C4 News, looks like theyre covering this bullshit http://bit.ly/cmBfBx http://bit.ly/98aJKC    6:12 PM Aug 16th

.@SamiraAhmedC4 i’ve written a lot on this kind of lame non-journalism, some of it in this category here: http://bit.ly/dBGWLX     Mon 16 Aug 17:17:53 2010

.@SamiraAhmedC4 youre the one in a position to judge, all i can see is an “equation” with no terms defined. put press release online for us?     Mon 16 Aug 17:40:22 2010

.@alexbellos @samiraahmedC4 haha no, wait, for the first time in media history, this is actually a real formula! http://bit.ly/9xolOM     Mon 16 Aug 17:46:54 2010

Reading on in the Indy article we find yet another ad hom:

We all enjoy self-styled sheriffs like Goldacre roaming the web setting their posses on quack doctors. But journalists like me, who work for major news broadcasters, operate under a code of conduct broadly similar to our television content.

I’m not sure where to start on that one, other than if Ben’s comment was an attack on Samira’s journalistic integrity, that doesn’t make it ok for her to attack his.

The lessons Samira draws are that reputation is important, Twitter isn’t about broadcasting, and that it’s a good tool for finding new voices. That’s all fair. But she misses some other key lessons from her particular experience:

1. Understand the culture of the community you are entering
This is the first thing that I tell all my clients who want to use social media. It’s 20% tools, 80% people and if you don’t understand how people relate to each other in the context of the community you are trying to be a part of, you will make a mistake. Samira’s mistake was in not understanding that posting a formula and asking how to pronounce without providing either context or source could be misinterpreted.

The lesson from this should have been that if you ask for help with something scientific, provide a link to your source material first. That source material should be the academic paper if you have one or the press release if you don’t. If you’re writing a story based on a press release, be really, really careful what you say.

2. Twitter escalates the bad and the good, irrespective
Twitter does great things, spreading the word about important issues or worthy causes. But Twitter is made up of humans, and humans sometimes get things wrong. In those cases, bad words will spread just as far, just as fast. This is unfortunate, but it is pretty predictable.

The lesson here would be that when something goes bad, try to understand what happened and why, and then nip it in the bud as fast as possible. Samira failed to provide context and without context the formula looked like nonsense. Rather than asking Ben to DM or email, it might have been more effective for Samira to hunt down the original paper (which she should have had to hand anyway) and post the link to that on Twitter. Although Samira mentioned “Nott U biz school” on Twitter, it seems she didn’t link to the paper itself.

3. Everyone makes mistakes
On Twitter especially. Everyone, from @StephenFry on down, at some point tweets something that they later regret. From public messages that should have been DMs to ill-judged snark, pretty much everyone says something daft on Twitter eventually.

This lesson’s easy. With great power comes great responsibility. Ben should understand that with 57,004 followers, he has great power. I understand his temptation to snark first and ask questions later, but some pre-snark research may well have changed his mind about what to Tweet and saved everyone some hassle.

Neither Ben nor Samira have covered themselves in glory here. And normally, I wouldn’t even bother covering this spat, if that had been all it was. But I get a little cross about these sorts of ‘Twittermob’ stories, because they remind me of the old school “The internet is full of axe-wielding murderers” stories that used to get published so often a decade ago (and still do by the red tops). That’s just wrong. They show a distinct lack of understanding of Twitter and social media in general and extrapolate too far from personal experience, emphasising the bad and generally ignoring that it’s well outweighed by the good. That has the potential to dissuade people from taking part in what can actually be a vibrant, supportive, intelligent, friendly place. And we’re all the poorer for that.

Update 22:40: Samira has emailed me to ask if I would post a full set of her own Tweets, which I am happy to do. I had to take the timestamps out from what Samira sent me as unfortunately they had gotten all mangled, but they are in reverse chronological order, and there are more after the jump.

@katebevan to quote my favourite fictional science officer the whole experience has been….. fascinating.

@BlakeCreedon Thankyou so much.

@Schroedinger99 Thankyou for the apology.

And all this before the story’s even aired. Latecomers start here: http://bit.ly/9EZtyU


Two projects to watch: Ben Franklin Project and TBD.com

TBD.com's Near You zip code-based news filter

At 4:28 am in Washington DC a new news site, TBD.com, launched, and it is definitely one worth watching. Why? They have assembled an all-star staff, brimming with passion. The general manager for the project is Jim Brady, the former executive editor and vice president of Washington Post Newsweek Interactive. Steve Buttry, the site’s head of community engagement, has a long history in traditional journalism, training and innovation. (For any journalist struggling to come to terms with the unrequited love you feel for the business, read this post by Mimi Johnson, Steve’s wife, as he left the newspaper business to go all digital at TBD.) They have some great staff who I have ‘met’ via Twitter, including networked journalists Daniel Victor and Jeff Sonderman.

When he was hired, Jeff described his job as a community host this way:

developing ways to work with bloggers and users to generate, share and discuss content.

He described TBD.com like this:

Our goal is to build an online news site for the DC metro area, and do it taking full advantage of how the web works — with partnership not competition, users not readers, conversation not dictation, linking not duplicating.

If you look on Twitter this morning, Jeff and Steve are very busy on their first full day as hosts for the new news service.

Digitally native at launch

The site is clean and clear, easy to navigate, with a lot of excellent touches. TBD.com launched with an Android app, and they are awaiting approval for their iPhone application. Their zip (post) code news filter, which surfaces content not only from TBD but also from bloggers in the area, is excellent. I lived in Washington from 1998 until 2005 as the Washington correspondent of BBCNews.com. I know the city well. I typed in my old home zip code, 20010, and got news about Mount Pleasant, including from a blog called The 42 Bus, which was the bus that I used to take to work every day. Their live traffic information is a template for how city sites should add value with such bread and butter news. You can quickly pull up a map showing traffic choke points in the area. They even have a tool to plot your best travel route. The traffic tools are pulled from existing services, but the value is in the package.
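Conceptually, the Near You filter is simple: tag every item with a zip code and filter on request. A stripped-down sketch, with invented data:

```python
from collections import defaultdict

# Invented items; in reality these would come from TBD's own stories and
# the feeds of local bloggers in its network.
items = [
    {"title": "Mount Pleasant street fair", "zip": "20010",
     "source": "The 42 Bus"},
    {"title": "Red Line delays this morning", "zip": "20008",
     "source": "TBD staff"},
]

by_zip = defaultdict(list)
for item in items:
    by_zip[item["zip"]].append(item)

for item in by_zip["20010"]:  # my old home zip code
    print(f"{item['title']} ({item['source']})")
```

The filter itself is trivial; as with the traffic tools, the value is in the network of sources behind it and the package around it.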

They had a launch event last week, where they explained their networked journalism strategy. Steve Myers at the Poynter Institute said half of the links at TBD.com would point to external sources, much higher than at most sites. At launch, 127 local bloggers had joined their network. Steve Myers had this quote from Steve Buttry about their linking strategy:

“If we’re competing on the same story, we’ll do our story and we’ll link to yours,” said Steve Buttry, director of community engagement for the site. If another source owns a big story, “we’ll play you at the top of the home page and we’ll cover something else with our staff resources.”

Wow. Personally, I think that this is smart. With resources declining at most news organisations, they have to be much more strategic about how they use their staff. They need to focus on what value they add. Jeff Jarvis says: “Cover what you do best and link to the rest“, and this is one of the highest profile tests of that strategy.

Ken Doctor, brilliant news industry analyst at Newsonomics, has 10 reasons to watch TBD.com. Harvard’s Nieman Lab for journalism has another six reasons why they are watching the launch. Of Ken’s list, I’ll highlight two. Bucking the trend for many new high-profile news projects in the US, this is a for-profit business. Ken’s seventh point is huge:

7) It’s got a big established sales force to get it going. Both TV stations’ salespeople with accounts — and relationships. So TBD is an extension of that sales activity, not a start-up ad sell, which bedevils many other start-ups.

The other thing that TBD.com has going for it is the commitment of someone who has already seen some success with new models, Robert Allbritton. A few years ago, he launched Politico.com, bringing in two high-profile veterans from the Washington Post to compete not only with their newspaper but also with specialist political outlets like Roll Call. Politico has managed to create a successful print-web product, “not profitable every quarter but says it’s turning a profit for any given six months,” Allbritton told paidContent.org. What is more important, though, is his commitment to his ventures. He’s got the money and commitment to support projects past the short term.

“The first year of Politico was pretty ugly in terms of revenue,” he admitted. “You’ve got to have some staying power for these things to work.”

The Ben Franklin Project

The other project that I’m watching is John Paton’s Ben Franklin Project at the Journal Register Company. What is it?

The Journal Register Company’s Ben Franklin Project is an opportunity to re-imagine the newsgathering process with the focus on Digital First and Print Last. Using only free tools found on the Internet, the project will – from assigning to editing – create, publish and distribute news content on both the web and in print.

Succinctly, this company is looking to disrupt its own business. Instead of attacking costs by cutting more staff, they are looking to cut costs by eliminating the expense of their own production systems using free tools. It’s not something that every organisation could do, but with 18 daily newspapers and 150 non-daily local publications, it shows the ambition of their project. This is not a tiny organisation.

In practice, the organisation set the goal for all 18 of its newspapers to publish online and in print using free online and free open-source tools, such as the Scribus desktop publishing application. They are also pursuing the same kind of community engagement, networked journalism strategy that is at the heart of TBD.com.

On 4 July 2010, Independence Day in the US, they published their 18 daily newspapers and websites using only free tools and crowdsourced journalism. Jon Cooper, Vice President of Content at the Journal Register Company, wrote:

Today — July 4, 2010 — marks not only Journal Register Company’s independence from the costly proprietary systems that have long restricted newspapers and news companies alike. Today also marks the start of a revolution. Today marks the beginning of a new path for media companies whose employees are willing to shape their own future.

This is just part of Paton’s turnaround strategy for the Journal Register Company. However, in 2010, which is proving to be another tough year for the US economy (especially in some of the areas the company covers), Paton just announced that the company is 15% ahead of its revenue goals. He said:

Our goal is to pay out an extra week’s pay this year to all employees for hitting our annual target of $40 Million.

That is an amazing investment in journalists and an incentive for them to embrace the disruptive change he is advocating, and it’s heartening to see journalists engaged in and benefitting from change in the industry.

With all the talk about innovation in journalism, it is rare to see projects launch with such clear ambitions. After a lot of talk in the industry, we’ll now see what is possible.

APIs helping journalism “scale up”

A couple of days ago, I quoted AOL CEO Tim Armstrong on developing tools to help journalists “scale up” what they do. In a post on Poynter’s E-Media Tidbits, Megan Garber has highlighted a good practical example of what I meant.

One way that computers and other technology can help journalists work more efficiently is by cutting down on or eliminating frequent, repetitive tasks. Derek Willis at the New York Times talks about APIs, which he describes as “just a Web application delivering data”. Derek says:

The flexibility and convenience that the APIs provide make it easier to cut down on repetitive manual work and bring new ideas to fruition. Other news organizations can do the same.

Derek also points out how savvy use of data is not just good for data visualisations and infographics; it is also an excellent resource for New York Times journalists.

So if you have a big local election coming up, having an API for candidate summary data makes it easier to do a quick-and-dirty internal site for reporters and editors to browse, but also gives graphics folks a way to pull in the latest data without having to ask for a spreadsheet.
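Sketched in code, the quick-and-dirty internal tool Derek describes might look something like this; the endpoint and JSON fields are invented for illustration, not the Times’ actual API:

```python
import json
from urllib.request import urlopen

# Hypothetical candidate-summary API returning {"results": [...]}
API = "https://api.example.com/elections/2010/candidates.json"

candidates = json.load(urlopen(API))["results"]

# Render a bare-bones internal page for reporters and editors to browse
rows = "\n".join(
    f"<tr><td>{c['name']}</td><td>{c['office']}</td>"
    f"<td>{c['total_raised']}</td></tr>"
    for c in candidates
)
with open("candidates.html", "w") as out:
    out.write("<table><tr><th>Name</th><th>Office</th><th>Raised</th></tr>"
              + rows + "</table>")
```

The same JSON that feeds this throwaway page can feed the graphics desk, which is exactly the flexibility Derek is pointing at.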

And as he said, the biggest consumer of New York Times APIs is the New York Times itself.

Projects such as building an API can be quite large (although new companies, and organisations like the Sunlight Foundation in the US and MySociety in the UK, have great public service APIs and data projects), but the benefits to audiences, designers, developers and journalists alike make it easier to justify the time and effort.