Real-time search: The web at the speed of life

This is the presentation that I gave this week at the Nordic Supersearch 2010 conference in Oslo, organised by the Norwegian Institute of Journalism. In the presentation, I looked at the crush of information that people are dealing with: the 5 exabytes of information that Eric Schmidt of Google says we’re creating every two days.

I think search-based filters such as Google Realtime are only part of the answer. Many of the first generation of real-time search engines help filter the firehose of updates being pumped into Facebook and Twitter, but it’s often difficult to understand the provenance of the information that you’re looking at. More interestingly, I think we are now seeing new and better ways to filter for relevant information beyond the search box. Search has been the way for people to find information that is interesting and relevant, but I think real-time activity is providing new ways to deliver richer relevance.

I also agree with Mahendra Palsule that we’re moving from a numbers game to the challenge of delivering relevant information to audiences. In a lot of ways, simply driving traffic to a news site is not working. Often, as traffic increases, loyalty metrics decrease. Bounce rates (the percentage of visitors who spend less than 5 seconds on your site) go up. Time on site goes down. The number of single-page visits increases. It doesn’t have to be that way, but it is too often the case. For news organisations and other content producers, we need to find ways to increase loyalty and real engagement with our content and our journalists. I believe more social media can increase engagement, and I believe that finding better ways to deliver relevant content to audiences is also key.

Google’s method of delivering relevance in the past was to determine the authority of content on the web by looking at the links to that content, but now we’re seeing other ways to filter for relevance. When you look at how services such as paper.li filter content, we’re actually tapping into the collective attention of either our social networks or networks of influence, in the case of lists of influential Twitter users. In addition to attention, we’re also starting to see location-based networks filter based not only on what is happening in real-time but also on what we’re doing in real-space. We can deliver targeted advertising based on location, and for news organisations, there are huge opportunities to deliver highly targeted content.
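To make the idea concrete, here is a minimal sketch of attention-based filtering: rank links by how many distinct people in your network have shared them. The data and names are hypothetical; this illustrates the principle rather than any particular service’s method.

```python
from collections import defaultdict

# Hypothetical updates harvested from a social network or a curated
# Twitter list: (user, link) pairs.
updates = [
    ("alice", "https://example.com/story-a"),
    ("bob",   "https://example.com/story-a"),
    ("carol", "https://example.com/story-b"),
    ("dave",  "https://example.com/story-a"),
]

def rank_by_attention(updates, top_n=10):
    """Rank links by how many distinct people in the network shared them."""
    sharers = defaultdict(set)
    for user, link in updates:
        sharers[link].add(user)
    ranked = sorted(sharers.items(), key=lambda kv: len(kv[1]), reverse=True)
    return [(link, len(users)) for link, users in ranked[:top_n]]

print(rank_by_attention(updates))
# [('https://example.com/story-a', 3), ('https://example.com/story-b', 1)]
```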

Lastly, I think we’re finding new ways to capture mass activity by means of visualisation. Never before have we been able to tell a story in real-time as we can now. I gave the examples of the New York Times Twitter visualisation during the Super Bowl and also the UK Snow map.

I really do believe that with more content choices than the human brain can possibly cope with, intelligent filters delivering relevant information and services to people will be a huge opportunity. One of the biggest challenges for news organisations is that, in the battle for attention, we have to constantly be focused on relevance or become irrelevant. Certainly, any editor worth his or her salt knows (or thinks he or she knows) what the audience wants, but technology companies are developing services that can help deliver a highly specialised stream of relevant information to people. As with so many issues in the 21st century, it won’t be technology or editorial strategies alone that deliver relevance or sustainable businesses for news organisations; it will be the effective use of both.


Skills for journalists: Learning the art of the possible

I’m often asked at conferences and by journalism educators what skills journalists need to work effectively in a digital environment. Journalism educator Mindy McAdams has started a nice list of some of these skills in a recent blog post. A lot of journalists (and journalism educators) scratch their heads over what seems an ever-expanding list of skills they need to do digital. It feels like inexorable mission creep.

I can empathise. One of the most difficult parts of my digital journalism career, which began in 1996, has been deciding what to learn and, also, what not to learn but to delegate to a skilled colleague. I’m always up for learning new things, but there is a limit. Bottom line: it’s not easy. In the mid-90s, I had to know how to build websites by hand, but then automation and content management systems made most of those skills redundant. It was more important to know the possibilities, and limits, of HTML. When I worked for the BBC, I picked up a lot of multimedia skills, including audio recording and editing, video recording and basic video editing, and even on-air skills. I was also able to experiment with multimedia digital story-telling. However, with the rise of blogs and social media, suddenly the focus was less on multimedia and more on interaction. All those skills come in handy, but the main lesson in digital media is that it’s a constant journey of education and re-invention.

What do I mean about choosing what not to learn? In the mid-90s, I was faced with a choice. I could have learned programming and become more technical, or I could focus on editorial and work with a coder. I did learn a bit of Perl to run basic scripts for a very early MySociety-esque project about legislators in the state where we worked, but after that, I handed most of the work off to a crack Perl developer on staff. I knew what I wanted to do, and he could do it in a quarter of the time.

I knew that my passion was telling stories in new ways online and, whilst I didn’t learn to program, I did pick up some basic understanding of what was possible: computers can filter text and data very effectively. They can automate repetitive tasks, and even back in the late 1990s, the web could present information, often complex sets of data, in exciting ways. I realised that it was more important for me to know the art of the possible than to learn precisely how to do it. My mindset is open to learning and my skillset is constantly expanding, but to be effective, I have to make choices.

One thing that we’re sorely lacking as an industry is digitally minded editors who understand how to fully exploit the possibilities created by the internet, mobile and new digital platforms. Print journalists know exactly what they want within the constraints of the printed page, which in presentation terms is often much more flexible than a web page. However, they bring that focus on presentation to digital projects. They think of presentation over functionality, largely because they don’t know what’s possible in digital terms. As more print editors move into integrated roles, they will have to learn these skills. They will eventually, but, by and large, they’re not there yet. Note to newly minted integrated editors: there are folks who have been doing digital for a long time now. The internet was created long before integration. We love to collaborate, but we do appreciate a little R-E-S-P-E-C-T.

In terms of learning the art of the possible, my former colleague at The Guardian, Simon Willison, summed this up really well during a recent panel discussion:

I kind of think it’s the difference between geeks and the general population. It’s understanding when a problem is solvable. And it’s like the most important thing about computer literacy they should be teaching in schools isn’t how to use Microsoft Word and Excel. It is how to spot a problem that could be solved by a computer and then find someone who can solve it for you.

To translate that into journalism terms, it’s about knowing how to tell stories in audio, text, video and interactive visualisations. It’s about knowing when interactivity will add to or distract from a story. It’s an understanding that not every story needs to be told the same way. It’s about understanding that you have many more tools in your kit, but that it’s foolish to try to hammer a nail with a wrench. It’s not about building a team where everyone is a jack of all trades, but building a team that gives you the flexibility to exploit the full power of digital storytelling.

Two projects to watch: Ben Franklin Project and TBD.com

TBD.com's Near You zip code-based news filter

At 4:28 am in Washington DC, a new news site, TBD.com, launched, and it is definitely one worth watching. Why? They have assembled an all-star staff, brimming with passion. The general manager for the project is Jim Brady, the former executive editor and vice president of Washington Post Newsweek Interactive. Steve Buttry, the site’s head of community engagement, has a long history in traditional journalism, training and innovation. (For any journalist struggling to come to terms with the unrequited love you feel for the business, read this post by Mimi Johnson, Steve’s wife, as he left the newspaper business to go all digital at TBD.) They have some great staff whom I have ‘met’ via Twitter, including networked journalists Daniel Victor and Jeff Sonderman.

When he was hired, Jeff described his job as a community host this way:

developing ways to work with bloggers and users to generate, share and discuss content.

He described TBD.com like this:

Our goal is to build an online news site for the DC metro area, and do it taking full advantage of how the web works — with partnership not competition, users not readers, conversation not dictation, linking not duplicating.

A look at Twitter this morning shows Jeff and Steve already very busy on their first full day as hosts of the new news service.

Digitally native at launch

The site is clean and clear, easy to navigate, with a lot of excellent touches. TBD.com launched with an Android app, and they are awaiting approval for their iPhone application. Their zip (post) code news filter, which surfaces content not only from TBD but also from bloggers in the area, is excellent. I lived in Washington from 1998 until 2005 as the Washington correspondent of BBCNews.com, so I know the city well. I typed in my old home zip code, 20010, and got news about Mount Pleasant, including from a blog called The 42 Bus, which was the bus that I used to take to work every day. Their live traffic information is a template for how city sites should add value with such bread-and-butter news. You can quickly pull up a map showing traffic choke points in the area. They even have a tool to plot your best travel route. The traffic tools are pulled from existing services, but the value is in the package.
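A filter like Near You is conceptually simple. What follows is a minimal sketch of the idea with hypothetical data, not TBD.com’s actual implementation: each item, whether from staff or a local blog, carries the zip codes it is relevant to.

```python
# Hypothetical items in a zip code news filter; each one is tagged with
# the zip codes it is relevant to, whether from staff or local bloggers.
stories = [
    {"title": "Mount Pleasant street fair", "source": "The 42 Bus", "zips": {"20010"}},
    {"title": "Red Line delays", "source": "TBD staff", "zips": {"20010", "20008"}},
    {"title": "Georgetown zoning row", "source": "TBD staff", "zips": {"20007"}},
]

def near_you(stories, zip_code):
    """Return every item tagged with the reader's zip code, from any source."""
    return [s for s in stories if zip_code in s["zips"]]

for story in near_you(stories, "20010"):
    print(f"{story['title']} ({story['source']})")
```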

They had a launch event last week, where they explained their networked journalism strategy. Steve Myers at the Poynter Institute said half of the links at TBD.com would point to external sources, a much higher proportion than at most sites. At launch, 127 local bloggers had joined the network. Steve Myers had this quote from Steve Buttry about their linking strategy:

“If we’re competing on the same story, we’ll do our story and we’ll link to yours,” said Steve Buttry, director of community engagement for the site. If another source owns a big story, “we’ll play you at the top of the home page and we’ll cover something else with our staff resources.”

Wow. Personally, I think that this is smart. With resources declining at most news organisations, they have to be much more strategic about how they use their staff. They need to focus on the value that they add. Jeff Jarvis says: “Cover what you do best and link to the rest”, and this is one of the highest profile tests of that strategy.

Ken Doctor, the brilliant news industry analyst at Newsonomics, has 10 reasons to watch TBD.com. Harvard’s Nieman Lab for journalism has another six reasons why they are watching the launch. Of Ken’s list, I’ll highlight two. Bucking the trend of many new high-profile news projects in the US, this is a for-profit business. And Ken’s seventh point is huge:

7) It’s got a big established sales force to get it going. Both TV stations’ salespeople with accounts — and relationships. So TBD is an extension of that sales activity, not a start-up ad sell, which bedevils many other start-ups.

The other thing that TBD.com has going for it is the commitment of someone who has already seen some success with new models, Robert Allbritton. A few years ago, he launched Politico.com, bringing in two high-profile veterans from the Washington Post to compete not only with their newspaper but also with specialist political outlets like Roll Call. Politico has managed to create a successful print-web product, one that is “not profitable every quarter” but, Allbritton told paidContent.org, is turning a profit for any given six months. What is more important, though, is his commitment to his ventures. He’s got the money and commitment to support projects past the short term.

“The first year of Politico was pretty ugly in terms of revenue,” he admitted. “You’ve got to have some staying power for these things to work.”

The Ben Franklin Project

The other project that I’m watching is John Paton’s Ben Franklin Project at the Journal Register Company. What is it?

The Journal Register Company’s Ben Franklin Project is an opportunity to re-imagine the newsgathering process with the focus on Digital First and Print Last. Using only free tools found on the Internet, the project will – from assigning to editing – create, publish and distribute news content on both the web and in print.

Succinctly, this company is looking to disrupt its own business. Instead of attacking costs by cutting more staff, they are looking to cut costs by eliminating the cost of their own production systems, using free tools instead. It’s not something that every organisation could do, but with 18 daily newspapers and 150 non-daily local publications, it shows the ambition of the project. This is not a tiny organisation.

In practice, the organisation set the goal for all 18 of its newspapers to publish online and in print using free online and open-source tools, such as the Scribus desktop publishing application. They are also pursuing the same kind of community engagement, networked journalism strategy that is at the heart of TBD.com.

On 4 July 2010, Independence Day in the US, they published their 18 daily newspapers and websites using only free tools and crowdsourced journalism. Jon Cooper, Vice President of Content at the Journal Register Company, wrote:

Today — July 4, 2010 — marks not only Journal Register Company’s independence from the costly proprietary systems that have long restricted newspapers and news companies alike. Today also marks the start of a revolution. Today marks the beginning of a new path for media companies whose employees are willing to shape their own future.

This is just part of Paton’s turnaround strategy for the Journal Register Company. And despite 2010 proving to be another tough year for the US economy (especially in some of the areas the company covers), Paton has just announced that the company is 15% ahead of its revenue goals. He said:

Our goal is to pay out an extra week’s pay this year to all employees for hitting our annual target of $40 Million.

That is an amazing investment in journalists and an incentive for them to embrace the disruptive change he is advocating, and it’s heartening to see journalists engaged in and benefitting from change in the industry.

With all the talk about innovation in journalism, it is rare to see projects launch with such clear ambitions. After a lot of talk in the industry, we’ll now see what is possible.

APIs helping journalism “scale up”

A couple of days ago, I quoted AOL CEO Tim Armstrong on developing tools to help journalists “scale up” what they do. In a post on Poynter’s E-Media Tidbits, Megan Garber has highlighted a good practical example of what I meant.

One way that computers and other technology can help journalists work more efficiently is by cutting down on, or eliminating, frequent, repetitive tasks. Derek Willis at the New York Times talks about APIs, which he describes as “just a Web application delivering data”. Derek says:

The flexibility and convenience that the APIs provide make it easier to cut down on repetitive manual work and bring new ideas to fruition. Other news organizations can do the same.

Derek also points out how savvy use of data is not just good for data visualisations and infographics; it is also an excellent resource for New York Times journalists.

So if you have a big local election coming up, having an API for candidate summary data makes it easier to do a quick-and-dirty internal site for reporters and editors to browse, but also gives graphics folks a way to pull in the latest data without having to ask for a spreadsheet.

And as he said, the biggest consumer of New York Times APIs is the New York Times itself.
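To make that concrete, here is a hedged sketch of the kind of script Derek describes: pull candidate summary data from an API once, then let reporters, editors and graphics desks all reuse it. The endpoint and field names are hypothetical stand-ins, not the Times’ actual API.

```python
import json
from urllib.request import urlopen

# Hypothetical candidate-summary endpoint; a real API's URL, parameters
# and fields will differ.
API_URL = "https://api.example.org/elections/2010/candidates.json"

def fetch_candidates(url=API_URL):
    """Fetch candidate summary data once; every desk reuses the same feed."""
    with urlopen(url) as response:
        return json.load(response)["candidates"]

def top_fundraisers(candidates, n=5):
    """A 'quick-and-dirty internal site' view: candidates by money raised."""
    return sorted(candidates, key=lambda c: c["total_raised"], reverse=True)[:n]

if __name__ == "__main__":
    for candidate in top_fundraisers(fetch_candidates()):
        print(f"{candidate['name']}: ${candidate['total_raised']:,}")
```

The point is less the particular view than the pattern: once the data sits behind an API, a quick internal tool like this takes minutes rather than a round of emailed spreadsheets.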

Projects such as building an API can be quite large (although new companies, and organisations like the Sunlight Foundation in the US and MySociety in the UK, have great public service APIs and data projects), but the benefits to audiences, designers, developers and journalists alike make it easier to justify the time and effort.

Opportunities from the data deluge

There are huge opportunities for journalism and data. However, to take advantage of these opportunities, it will take not only a major rethinking of the editorial and commercial strategies that underpin current journalism organisations, but also a major retooling. Apart from a few business news organisations such as Dow Jones, The Economist and Thomson Reuters, there really aren’t that many general interest news organisations that have this competency. Most smaller organisations won’t be able to afford it on an individual level, but that leaves room for a number of companies to provide services for this space.

Neil Perkin outlines the challenge and the opportunity in a wonderful column that he’s cross-posted from Marketing Week. (Tip of the blogging hat to Adam Tinworth, who flagged this up on Twitter and on his blog.) In our advanced information economies, we’re generating exabytes of data. While we’re just getting used to terabyte disk drives, this is an exabyte:

1 EB = 1,000,000,000,000,000,000 B = 10^18 bytes = 1 billion gigabytes = 1 million terabytes
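Those conversions are easy to sanity-check with a few lines of arithmetic (Python here, purely illustrative):

```python
EB = 10**18  # bytes in an exabyte
TB = 10**12  # bytes in a terabyte
GB = 10**9   # bytes in a gigabyte

print(EB // TB)      # 1000000 -> a million terabytes per exabyte
print(EB // GB)      # 1000000000 -> a billion gigabytes per exabyte
print(5 * EB // TB)  # 5000000 -> terabyte drives needed to hold 5 exabytes
```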

To put this in perspective, I’ll use an oft-quoted practical example from Caltech researcher Roy Williams. All the words ever spoken by human beings could be stored in about 5 exabytes. Neil quotes Google CEO Eric Schmidt to show the challenge (and opportunity) that the data deluge is creating:

Between the dawn of civilisation and 2003, five exabytes of information were created. In the last two days, five exabytes of information have been created, and that rate is accelerating.

That 5 exabytes can hold all the words spoken since the dawn of language, and that the same amount of information is now created every two days, illustrates the acceleration of information creation. Those mind-melting numbers wash over most people, especially in our arithmophobic societies. However, there is a huge opportunity here, which Neil states like this:

The upside of the data explosion is that the more of it there is, the better digital based services can get at delivering personal value.

And journalists can, and definitely should, play a role in helping make sense of this. However, we’re going to have to overcome not only the tyranny of chronology but also the tyranny of narrative, especially narratives that privilege anecdote over data. Too often, to sell stories, we focus on outliers because they shock, not because outliers are in any way representative of reality.

From a process point of view, journalists are going to need to start getting smarter about data. I think data crunching services will be one way that journalism organisations can subsidise the public service mission that they fulfil, but as I have said, it’s a capacity that will need to be built up.

Helping journalists ‘scale up what they do’

It’s not just raw data-crunching that needs to improve; we’re also starting to see a lot of early semantic tools that will help more traditional narrative-driven journalists do their jobs. In talking about how he wanted to help journalists at AOL overcome their technophobia, CEO Tim Armstrong explained why these tools were necessary. Journalists have not been included in corporate technology upgrades (and often not included in the creation of tools for their work). Armstrong said at a conference in June:

Journalists I met were often the only people in the room who never had access to a lot of info, except what they already knew.

It’s not technology for technology’s sake but tools to open up more information and help journalists make sense of it. Other industries have long implemented data tools to help their people do their jobs, but it’s rare in journalism (outside of computer-assisted reporting or database journalism circles). Armstrong said:

You can pretty much go to any professional industry, and there’s some piece of data system that helps people scale what they do.

Journalists are being asked to do more with less as cuts go deep in newsrooms, and we’re going to have to work smarter, because I know that there are some journalists now working at breaking point.

There have been times in the last few years when I tested the limits of my endurance. Last summer, filling in behind my colleague Jemima Kiss, I was working from 7 am until 11 pm five days a week and then usually five or six hours on the weekends. I could do it for a while because it was a limited 10-week assignment. Even for 10 weeks, it limited the amount of time I had with my wife and negatively affected my health.

I’m doing a lot of thinking about services that can help journalists deal with masses of information and also help audiences more easily put stories into context. We’re going to need new tools and techniques for this new period in the age of information. The opportunities are there. Linked data and tools to analyse, sort and contextualise will lead to a new revolution in news and information services. Several companies are already in this space, but we’re just at the beginning of this revolution. We live in exciting times.

Learning from a failed journalism project

I want to applaud Jen Lee Reeves, who wrote at PBS’ MediaShift blog about the mistakes she made on a journalism project she worked on for the 2008 elections in the US. It’s a brave thing to do, and her courage flags up a number of mistakes that are common to journalism projects, including a few that I have made myself.

She is an associate professor at the Missouri School of Journalism and “also a new media director at the university-owned NBC-affiliate, KOMU-TV”, and for the elections, she had an ambitious idea to bring together the coverage of several different outlets “to make it easier for news consumers to learn about their candidates leading up to election day”. She would complete the project during a fellowship at the Reynolds Journalism Institute at the University of Missouri.

For the 2006 mid-term elections in the US, she had done something similar, but the site had been hand-coded. (I’m assuming what she means is that there was no content management system.) She realised that this would be too cumbersome, so in 2008, she opted for a “hand-built” site created by students with her oversight. Technically, she was moving in the right direction. The site took in RSS feeds from the participating news organisations, and web managers simply had to tag the content so that it appeared in relation to the right candidate and election. However, while the site was easier for the news organisations to use, it still wasn’t clear enough for the audience. She said:

Unfortunately, our site was not simple. It was not clean and it was hand built by students with my oversight. It did not have a welcoming user experience. It did not encourage participation. I had a vision, but I lacked the technical ability to create a user-friendly site. I figured the content would rule and people would come to it. Not a great assumption.

Back in 2008, I still had old-school thoughts in my head. I thought media could lead the masses by informing voters who were hungry for details about candidates. I thought a project’s content was more important than user experience. I thought I knew what I was talking about.

She goes on to list assumptions that she had about the audience, assumptions which proved false and which she believes doomed the project to failure. Go to her post and read them. She is grateful that she had the opportunity to experiment and make mistakes during her fellowship, an opportunity that she says she wouldn’t have had while running a newsroom.
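It is worth noting how cheap the technical side she describes has become: pulling in partner RSS feeds and filing items under the right candidate is now a short script built on free tools. Here is a minimal sketch using the open-source feedparser library; the feed URLs and candidate names are hypothetical, and simple name matching stands in for the manual tagging her team did.

```python
import feedparser  # free, open-source RSS/Atom parser (pip install feedparser)

# Hypothetical partner feeds and candidates; the real project pulled
# feeds from several participating news organisations.
FEEDS = [
    "https://example-newsroom-one.com/elections/rss",
    "https://example-newsroom-two.com/politics/rss",
]
CANDIDATES = ["Jane Doe", "John Roe"]

def tagged_entries(feeds=FEEDS, candidates=CANDIDATES):
    """File each feed entry under every candidate it mentions."""
    by_candidate = {name: [] for name in candidates}
    for url in feeds:
        for entry in feedparser.parse(url).entries:
            text = entry.get("title", "") + " " + entry.get("summary", "")
            for name in candidates:
                if name in text:
                    by_candidate[name].append(entry.get("link"))
    return by_candidate
```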

If we’re paralysed by fear of failure, we’ll never do anything new. It’s not failure that we should fear but rather the inability to learn from our mistakes. For big projects like this, it’s really important to have a proper debrief. Free services on the web can bring down the cost of experimentation, and by testing what works and what doesn’t, we can not only learn from our mistakes but also make sure that we take best practices to our next project.

KPMG: UK readers far less willing to pay for digital content

Normally, I’d just add this link to Delicious, but the data is worth highlighting. KPMG has found that 81% of UK consumers “would go elsewhere for content if a previously free site we use frequently began charging”. Only 19% would be willing to pay in the UK, while globally (the same research looked at consumer behaviour in a range of countries) 43% of consumers are willing to pay for digital content.

However, there are possibilities for publishers looking to be paid for content: “almost three quarters of UK consumers are willing to receive online ads in exchange for lower content costs”. Consumers are also more willing to have data collected about them if it results in lower content costs: “48 percent of UK consumers would be willing to accept profile tracking, up from 35 percent in the 2008 survey.” Publishers and marketers need to take care, though, as 90% of consumers also expressed concern about their privacy and security online. That is high, although a slight reduction from the figures of 18 months ago.

A key finding from the report shows how consumers would like to balance privacy and targeted advertising. Tudor Aw, Head of Technology, KPMG Europe LLP, said:

(UK consumers) do see the value in allowing service providers to have access to the information necessary for more tailored services, but they are only prepared to do this if the risks are controlled and, crucially, if there is some value in it for them.

The research is well worth a look, especially for those whose revenue strategies are tied to advertising, but also for any business looking to deliver better targeted services to their customers through better use of data.

Will publishers rally to Google’s Newspass?

Matthew Buckland has a great guest post on Silicon Valley Watcher looking at Google’s Newspass payment system for publishers. (It’s cross-posted from memeburn.com.) Buckland compares the value proposition for users of Google’s system and the system that Rupert Murdoch has instituted at The Times. He likens Newspass to a cable television subscription, in which a consumer makes a one-off, predictable payment to receive a package of content each month. He says:

Take the analogy of satellite TV. You pay once and you get a bouquet of hundreds of channels. The transaction is simple and easy. You know you’re getting good value for money too because there is an economy-of-scale effect at work. Now imagine another scenario: What if you have to pay individually for each TV channel and go through the effort, time and extra cost to do so. It’s a no brainer really.

Of course, from the consumer’s point of view, it makes a lot more sense to pay once for a bundle of content than to pay subs to several different providers or micro-payments for individual pieces of content. However, if newspaper groups had been rational enough to think about creating value propositions for their readers, they might have spared themselves the mess that they are in.

The big question, as Matthew highlights, is whether a significant number of publishers will choose to join Newspass or create their own payment systems. I’m not sure that such a payment system would be possible in all jurisdictions under competition/anti-trust law. That notwithstanding, knowing publishers, I would expect them to lobby for a relaxation of anti-competition laws in their own countries to make such a system possible rather than partner with Google, with which, as Matthew rightly points out, they have a love-hate relationship. I’d say that it’s bordering on hate-hate these days, but that’s a matter of interpretation.

Matthew sees Google as a “dispassionate third party”, but with the egos in publishing and the ‘not invented here’ bias, I’m not sure that publishers see Google as dispassionate or without skin in the game. Murdoch and his lieutenants, though possibly an extreme example, refer to Google as a “parasite”. To be pushed into partnering with the likes of Google, publishers would have to see past their almost self-destructively hyper-competitive natures and accept that some loss of advantage was worth new revenue streams. In fact, I could see them being more open to partnering with another company just in an attempt to screw Google. Despite the existential threat facing some newspapers and newspaper groups, I’m not sure that they have seen the light, by which I mean the light some reportedly see during near-death experiences.

The value of data for readers and the newsroom

When I was at the BBC, a very smart producer, Gill Parker, approached me about pulling together the massive amount of data and information she was collecting with Frank Gardner while trying to unravel the events that led to the 11 September 2001 attacks in the US. Not only had Gill worked on the BBC’s flagship current affairs programme Newsnight and on ABC’s Nightline in the US, she had also worked in the technology industry. They were interviewing law enforcement and security sources all around the world and collecting masses of information, all of which they had in Microsoft Word files. She knew that they needed something more to help them connect the dots, and speaking with me in Washington, where I was working as BBCNews.com’s Washington correspondent at the time, she asked if I could help her find some database expertise.

I thought it was a great idea. My view was that by helping her organise all of the information that they were collecting, the News website could use the resulting database to develop infographics and other interactives that would help our audience better understand the complex story. We could show relationships between all of the main actors in al Qaeda as well as walk people through an interactive timeline of events. I had a vision of displaying the information on a globe: people could move through time and see various events with the key actors in the story. This was a bit beyond the technology of the time. Google Earth was still a few years away, and some of the visualisations would have required significant development. However, on a story like this, I thought we could justify the effort, and frankly, we didn’t need to go that far. Bottom line: organising the data would have had huge benefits for BBC journalists and also for our audiences.

Unfortunately, it was the beginning of several years of cuts at the BBC, and the News website was coming under pressure. It was beyond the scope of what I had time to do or could do in my position, and we didn’t have database developers at the website who could be spared, I was told.

A few years later, as Google Earth developed, Declan Butler at Nature used data on the global spread of the H5N1 virus to achieve something like the vision I had in terms of showing events over time and distance.

It is great to see my friend and former Guardian colleague Simon Rogers move this thinking forward in his work on the Wikileaks War Logs story, treating data as a resource both internally, to help journalists, and externally, to help explain a complex story. Simon wrote about it on the Guardian Datablog:

we needed to make the data easier to use for our team of investigative reporters: David Leigh, Nick Davies, Declan Walsh, Simon Tisdall, Richard Norton-Taylor. We also wanted to make it simpler to access key information for you, out there in the real world – as clear and open as we could make it.

When I was the digital research editor at The Guardian (before I left this March to pursue my own projects), data was key to many of my ideas. I even thought that data could become a source of revenue for The Guardian. Data and analysis are something that people are willing to pay for. Ben Ayers, the head of social media and community at ITV.com (speaking for himself, not ITV), said to me on Twitter:

Brilliant. I’d pay for that stuff. Surely the kind of value that could be, er, charged for. Just sayin’ … just an example of where, if people expect great interpretation of data as part of the package, the Guardian could charge subs

As I replied to Ben, I wouldn’t advocate charging for data for the War Logs, but I would suggest charging for data about media, business and sports. That could become an important source of income to help subsidise the cost of investigations like the War Logs. Data wrangling can be time-intensive, as I know from my experience developing the media job cuts series that I wrote at the end of 2009 for The Guardian. However, the data can be a great resource for journalists writing stories as well as for developing interactive graphics like the media job cuts map or the IED attack map for the War Logs story. Data drives traffic, as the Texas Tribune in the US has found, and I believe that certain datasets could be developed into new commercial products for news organisations.