The iPad is a content strategy

As a geek and a journalist who often covers technology, I pay attention to the gigabytes and gigahertz that most people don’t. To be honest, in the era of giga-computing, the average user can’t really tell the difference between a dual-core computer running at 2.3GHz and one running at 3.2GHz. It does whatever they need it to.

The tech spec arguments have now moved on to netbooks and mobile phones, devices where a beefier processor can mean the difference between a smooth experience and a jerky, frustrating one. The spec counters have come out in force to denounce the Apple iPad: A 1GHz chip sounds pretty weak. No USB. No expansion slot. 3G as an option.

As they do so often, spec counters and feature fanatics miss the point. There are phones on the market that do more than the iPhone, but few do those things as well. When you’ve got a device without the almost limitless power of today’s desktop computers, you have to make choices.

However, with the iPad, that’s actually beside the point. The iPad is first and foremost a consumer electronics device. Do you worry about the processor in your cable box? No. The set-top box is merely an electronic gateway to content, and that’s what Apple is hoping to create with the iPad.

Yes, there are other media slates out there. Just look at the nearly dozen slates that Nvidia was plugging at CES. HP will release a tablet later this year, and Amazon is going to beef up the Kindle. However, none of those devices has iBooks or the apps, games, music, movies and television available from the iTunes Store. No other device offers this kind of content. I’ll agree with Joshua Benton at the Nieman Lab that the iPad is focused on ‘reinventing content, not tablets’. iTunes and its effortless integration with the iPod helped differentiate it from the crowded market of MP3 players, and content is what Apple is hoping will ensure the success of a new type of device, the iPad.

Consumers still have to render their verdict on the iPad, but the stakes for Apple aren’t just about the success of a single device but really about a much broader digital media strategy.


Generosity and post-scarcity economic media models: Why I love participatory culture

One of the stumbling blocks for media companies looking to create sustainable digital business models is that the economic models differ in fundamental ways from the predominant models of the 20th Century.

Look at the media models of the 20th Century, and they are all based to some extent on scarcity and monopoly. Printing presses are expensive and create an economic limit to the number of newspapers that any given market will support. Satellites are incredibly expensive. Cable television infrastructure is expensive. Scarcity leads to the development of stable, de facto monopolies. Sky dominates satellite television in the UK. Cable television providers are usually granted monopolies in all but the largest of cities. Again, in all but the largest markets, newspapers have come to enjoy a monopoly position. (It is why I find it a bit rich that media monopolies are railing against Google. Monopolists trying to use the law and courts to defend their position against a rising monopolist should be the plot for a farce. Why don’t we create a web television series?)

The internet is different because media companies don’t have monopoly control over the means of distribution. News International and Gannett don’t own the presses that power the internet. BSkyB doesn’t own the satellites. Comcast owns the last mile of copper, but much of the internet is beyond its control.

The cost of media production has also dramatically decreased, allowing people to create media for motivations that are not economic, which seems insane and alien to people who make a living creating media. However, creating media and sharing it with others is key to many communities online. Note, I’m talking about people sharing the media that they create, not sharing media created by people whose motivations are economic. Why the distinction? Sharing is a loaded term for the ‘creative industries’, which want to redefine it as theft. I’m not talking about sharing their content.

For those who don’t understand the “culture of generosity” on the internet, please read Caterina Fake’s moving defence of participatory culture. Caterina was one of the co-founders of photo sharing site Flickr and launched “a collective intelligence decision making system” called Hunch last year. Drawing on examples from her own experience going back to 1994, she explains why:

people do things for reasons other than bolstering their egos and making money

That’s about as foreign a notion to mass media culture as one can imagine. Not doing something for ego or money? Why bother?

I can tell you why I bother. A global culture of participation has been, for me, key in meeting one of Maslow’s hierarchy of needs: Belonging. Originally, participatory culture was something I did in my spare time because there was no place for it in my professional work, but co-creation in journalism has been one of the most richly rewarding aspects of my career.

This is a mental bookmark for a much longer post looking at the economics of post-scarcity media, something I’ve been thinking about after meeting Matt Mason, author of The Pirate’s Dilemma. I first met Matt when I chaired a discussion about his book at the RSA, and I interviewed him for the Guardian’s Tech Weekly podcast about piracy, copyright and remix culture. Matt said that we need more study of “post-scarcity economics”, something not seen in real-world goods but definitely present in the virtual world of digital content.


Journalists: Belittling digital staff is not acceptable

Patrick Smith, recently of paidcontent.co.uk, has a post about the economics of regional newspapers in the UK, and he makes the case (again) that the challenges facing British regional newspapers come down quite simply to economics.

This is not about the quality of journalism – this is about economics: The web is simply more effective for advertisers – Google ads are more effective and have less wastage than an ad in the Oxdown Gazette, no matter how good the editorial quality of the paper is.

In the post, he quotes “Blunt, the pseudonymous author of the Playing the Game: Real Adventures in Journalism blog” who defines a “Web Manager” as:

An expert in cut and paste. Probably a journalist but not necessary.

My issue isn’t with Blunt. Let’s be honest with ourselves: this is a sadly typical comment in the industry regarding digital staff. It’s not even new; I’ve heard comments like this for most of my 16-year career. During this Great Recession, I can understand psychologically and emotionally where they come from: It’s an anxious time for journalists, all journalists, regardless of medium or platform.

The digitally focused staff are working just as hard to preserve professional journalism as those staff still focused on print. I have spent most of my career developing unique digital skills while producing content for broadcast and print. I have often felt that I had to work harder than traditional journalists to prove that I’m not just an ‘expert in cut and paste’. I work very hard to know my beats, work across platforms and produce high quality journalism that meets or exceeds the industry standards of print, broadcast and web journalism. I am not the only digital journalist who puts this sort of effort in. Yet the industry is still rife with the same anti-digital prejudice I witnessed ten years ago.

It’s long past time for senior figures in journalism to publicly state that demeaning digital staff is not acceptable. Here are a few basic facts about digital journalism:

  • I use a computer for much of my work. That doesn’t mean I’m a member of the IT staff.
  • I know about technology. That doesn’t mean that I’m incapable of writing.
  • My primary platform is digital. That doesn’t mean my professional standards are lower.

Prejudice towards digital journalists needs to stop. It sends a message to digital journalists that they are unwanted at a time when their skills are desperately needed by newspapers. Digital staff should not be the convenient whipping women and men for those angry and upset about economic uncertainty in the industry.

There is nothing totemic about print and paper that makes the journalism instantly better or more credible. Quality broadsheets are printed on paper just as sensationalist tabloids are. Let’s measure journalists not by the platform but by their output.

Ushahidi and Swift River: Crowdsourcing innovations from Africa

For all the promise of user-generated content and contributions, one of the biggest challenges for journalism organisations is that such projects can quickly become victims of their own success. As contributions increase, there comes a point when you simply can’t evaluate or verify them all.

One of the most interesting projects in 2008 in terms of crowdsourcing was Ushahidi. Meaning “testimony” in Swahili, the platform was first developed to help citizen journalists in Kenya gather reports of violence in the wake of the contested election of late 2007. Out of that first project, it’s now been used to crowdsource information, often during elections or crises, around the world.

[Video: “What is Ushahidi?” from Ushahidi on Vimeo]

Considering the challenge of gathering information during a chaotic event like the attacks in Mumbai in November 2008, members of the Ushahidi developer community discussed how to meet the challenge of what they called a “hot flash event”.

It was that crisis that got two members of the Ushahidi dev community, Chris Blow and Kaushal Jhalla, thinking about what needs to be done when you have massive amounts of information flying around. We’re at the point where the barriers to any ordinary person openly sharing valuable tactical and strategic information have all but disappeared. How do you ferret out the good data from the bad?

They focused on the first three hours of a crisis. Any working journalist knows that during fast-moving news events, false information is often reported as fact before being challenged. How do you increase the volume of sources while maintaining accuracy, and how do you sift through all of that information to find what is most relevant and important?

Enter Swift River. The project is an “attempt to use both machine algorithms and crowdsourcing to verify incoming streams of information”. Scanning the project description, the Swift River application appears to allow people to create a bundle of RSS feeds, whether those feeds are users or hashtags on Twitter, blogs or mainstream media sources. Whoever creates the RSS bundle is the administrator and can add or delete sources. Users, referred to as sweepers, can then tag information or choose the bits of information in those RSS feeds that they ‘believe’. (I might quibble with the language. Belief isn’t verification.) The links are then analysed, and the “veracity of links is computed”. A rough sketch of how the pieces might fit together is below.
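To make the description concrete, here is a minimal sketch of that workflow as I read the project description: an administrator bundles feeds, sweepers mark the items they believe, and a crude veracity score is computed per link. All the names and the scoring rule here are my own invention, not Swift River’s actual code.

```python
# Hypothetical sketch of the bundle/sweeper/veracity workflow.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Bundle:
    """A bundle of RSS feeds: Twitter users or hashtags, blogs, or
    mainstream media sources. The creator is the administrator."""
    admin: str
    feeds: set = field(default_factory=set)

    def add_feed(self, url):
        self.feeds.add(url)

    def remove_feed(self, url):
        self.feeds.discard(url)

# sweeper name -> set of links that sweeper has marked as believed
believed = defaultdict(set)

def sweep(sweeper, link):
    """A sweeper tags a piece of information they 'believe'."""
    believed[sweeper].add(link)

def veracity(link):
    """Naive veracity: the fraction of active sweepers who believe the
    link. The real project also runs machine analysis on the links."""
    if not believed:
        return 0.0
    votes = sum(1 for links in believed.values() if link in links)
    return votes / len(believed)
```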

It’s a fascinating idea and a project that I will be watching. While Ushahidi is designed to crowdsource information and reports from people, Swift River is designed to ‘crowdsource the filter’ for reports across the several networks on the internet. For those of you interested, the project code is made available under the open-source MIT Licence.

One of the things that I really like about this project is that it’s drawing on talent and ideas from around the world, including some dynamic people I’ve had the good fortune to meet. When I was back in the US for the elections last year, I met Dave Troy of Twittervision fame, who helped develop an application to crowdsource reports of voting problems, Twitter Vote Report. The project gained a lot of support, including from MTV’s Rock the Vote and National Public Radio. He has released the code for the Twitter Vote Report application on GitHub.

To help organise the Swift River project for Ushahidi, they have enlisted African tech investor Jon Gosier of Appfrica Labs in Uganda, which is loosely based on Paul Graham’s Y Combinator. I interviewed Jon at TEDGlobal in Oxford this summer about a mobile phone search service in Uganda. He’s a Senior TED Fellow.

There are a lot of very interesting elements in this project. First off, they have highlighted a major issue with crowdsourced reporting: Current filters and methods of verification struggle as the amount of information increases. The issue is especially problematic in the chaotic hours after an event like the attacks in Mumbai.

I’m curious to see if there is a reputation system built into it. As they say, the system relies on the participation of experts and non-experts alike. How do you gauge the expertise of a sweeper? And as a journalist, I don’t mean to imply that journalists are ‘experts’ by default. For instance, I know a lot about US politics but consider myself a novice when it comes to British politics. One rough idea is sketched below.
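If I were to speculate about how such a reputation system could work (and this is entirely my speculation, not anything the project has announced), the simplest version would gauge expertise by track record: nudge a sweeper’s score toward 1 when their call is later confirmed, toward 0 when it is debunked, and weight their votes accordingly.

```python
# Hypothetical reputation-weighted voting for sweepers.
def update_reputation(rep, was_correct, rate=0.1):
    """Move a reputation score in [0, 1] toward the observed outcome."""
    target = 1.0 if was_correct else 0.0
    return rep + rate * (target - rep)

def weighted_veracity(votes):
    """votes: list of (reputation, believed) pairs for one report.
    Returns the reputation-weighted share of 'believe' votes."""
    total = sum(rep for rep, _ in votes)
    if total == 0:
        return 0.0
    return sum(rep for rep, believed in votes if believed) / total

# A novice's belief counts for less than a proven sweeper's:
print(weighted_veracity([(0.9, True), (0.2, False), (0.1, False)]))  # 0.75
```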

It’s great to see people tackling these thorny issues and testing them in real world situations. I wonder if this type of filtering can also be used to surface and filter information for ongoing news stories and not just crises and breaking news. Filters are increasingly important as the volume of information increases. Building better filters is a noble and much needed task.


Poynter asks: Are journalists giving up on newspapers?

The Poynter Institute in the US hosted an online discussion asking if journalists are giving up on newspapers after high-profile departures, including Jennifer 8. Lee, who accepted a buyout at the New York Times, and Anthony Moor, who left newspapers to become a local editor for Yahoo. Moor told the US newspaper trade magazine Editor & Publisher – which just announced it is ceasing publication after 125 years:

Part of this is recognition that newspapers have limited resources, they are saddled with legitimate legacy businesses that they have to focus on first. I am a digital guy and the digital world is evolving rapidly. I don’t want to have to wait for the traditional news industry to catch up.

This frustration has been there for a while among digital journalists, but many chose to stay with newspapers or sites tied to other legacy media because of resources, industry reputation and better job security. However, with the newspaper industry in turmoil, the benefits of staying are now less obvious.

Jim Brady, who was the executive editor of WashingtonPost.com but is now heading up a local project in Washington DC for Allbritton Communications, said on Twitter:

A few years ago, the risk of leaping from a newspaper to a digital startup was huge. Now, the risk of staying at a newspaper is also huge.

Aside from risk, Jim echoed Moor’s comments in an interview with paidContent:

Being on the digital side is where my heart is. Secondly, I think doing something that was not associated with a legacy product was important.

In speaking with other long-time digital journalists, I hear this comment frequently. Many are yearning to see what is possible in terms of digital journalism without having to think of a legacy product – radio, TV or print. There is also the sense from some digital journalists that when print and digital newsrooms merged that it was the digital journalists and editors who lost out. In a special report on the integration of print and online newsrooms for Editor & Publisher, Joe Strupp writes:

Yet the convergence is happening. And as newsrooms combine online and print operations into single entities, power struggles are brewing among many in charge. More and more as these unifications occur, it’s the online side that’s losing authority.

It’s naive to think that these power struggles won’t happen, but they are a distraction that the industry can ill afford during this recession. In the Editor & Publisher report, Kinsey Wilson, former executive editor of USA Today and editor of its Web site from 2000-2005, said that during the convergence at USA Today and the New York Times:

We both had a period of a year or two when our capacity to innovate on the Web stopped, or was even set back a bit

Successful digital models are emerging. Most are focused and lean, such as paidContent (although it has cut back during the recession, I’d consider its acquisition by The Guardian, my employer, a mark of success) and the expanding US political site Talking Points Memo. There are opportunities in the US for journalists who want to focus on the internet as their platform.

Back to the Poynter discussion, where Kelly McBride said:

I talk to a lot of journalists around the country. I don’t think they are giving up journalism at all. I do think some of them have been let down by newspapers. But a lot are holding out. They are committed to staying in newspapers as long as they can, because they are doing good work.

It’s well worth reading through the discussion. I am sure that many journalists have some of the same questions.

What was the verdict? See the full Poynter discussion: Are journalists giving up on newspapers?

Landrush for local: NowPublic, Everyblock and now Outside.in

A common joke amongst journalists is that all we need is two examples to proclaim a trend, but we’ve got much more than that when it comes to the rush to build local media empires in the US. In June, AOL bought two local services: Patch, which provides news to small towns and communities, and Going, which provides a local events listing platform. MSNBC.com bought Adrian Holovaty’s hyperlocal aggregator Everyblock in August. In September, local news network Examiner.com, owned by billionaire Philip Anschutz’s Clarity Media Group, bought citizen journalism site NowPublic. Now, we have another major move in hyperlocal, with CNN and others investing $7m in aggregator Outside.in. CNN will not only invest in the site, but it will also feature feeds from Outside.in.

Outside.in founder Steven Berlin Johnson called the investment and content deal:

a vote of confidence in the platform we’ve built at outside.in, but perhaps more important it’s an endorsement of hyperlocal and the ecosystem model of news that many of us have been championing for years now.

Fred Wilson, a venture capitalist and the principal of Union Square Ventures, is an investor in Outside.in, and he makes the passionate case for people covering their own communities.

My unwavering belief is that we will cover ourselves when it comes to local news. We are at the PTA meetings, the little league games, and the rallies to save our local institutions, so who better to cover them than us? This is what hyperlocal blogging is all about and slowly but surely it is gaining steam.

CNN’s partnership with Outside.in can be seen as a simple response to a competitor, but with all of the deals in this space, I guarantee that 2010 will see additional deals and development. Add to this location-based services and mobile, and you’ve got something very interesting happening.

The promise of ‘pro-am’

As Fred says, people will cover their own communities, and we have seen some interesting hyperlocal projects, including the pro-am efforts of MyMissourian in Columbia, Missouri, and BlufftonToday in South Carolina, as well as hyperlocal projects here in London like William Perrin’s Talk About Local. I personally like pro-am models where professional journalists cover the official life of the community – council meetings, crime, sports, schools and other local issues – while the site provides a platform for the community to cover itself and the full range of lived experience there. As Clyde Bentley, who set up MyMissourian, found out, readers didn’t want to write about politics as much as they wanted to write about religion, pets and the weather. Here are the lessons he learned from MyMissourian:

  • Use citizen journalism to supplement not replace.
  • UGC isn’t free.
  • Online attracts the eager, but print serves the masses.
  • Give people what they want, when they want it and how they want it.
  • Get rid of preconceptions of what journalism is.
  • Everyday people are better ‘journalists’ than you think.

Lessons learned from failure and success

Despite all of this energy and experience, hyperlocal has still seen more high-profile failures, such as Backfence and the Washington Post’s Loudoun Extra project, than successes. Even in those failures, there are lessons to be learned. Mark Potts, who was behind Backfence, said that one frequent mistake of hyperlocal projects is that they aren’t local enough.

He believes the key is to focus on a community of around 50,000 people. Covering a bigger area makes it harder to keep people interested. “You care less the farther it gets from home.”

The difficulty for Loudoun Extra was integration with WashingtonPost.com and a lack of community outreach, according to Rob Curley who headed up the project.

That doesn’t mean that we don’t have success stories, but again, the secret to success seems to be a laser-like focus on niche topics and keeping the hyper in hyperlocal. Crain’s New York just profiled Manhattan Media, which has seen revenue grow fivefold since 2005; even more surprising, its ad revenue has continued to grow in the midst of the Great Recession. It’s a multi-platform, multi-revenue-stream model with newspapers and websites, and its events business now contributes 20% of revenue.

The lesson I take away is that newspapers trying to be all things to all people, with no sense of place or focus, are suffering mightily during the recession. Focus is key, both in terms of topic and geography, and seeing as this is about engaging not only a virtual but a very real-world community, I’ll add my basic advice about blogging and social media: Be passionate and be real.

Whether we see a strong recovery in 2010 or not, local will be one growth area, and journalists looking for new opportunities should watch this space for ‘help wanted’ signs.

The dangerous distraction of GWOG – the Global War on Google

Rupert Murdoch and his lieutenants’ Global War on Google might make for entertaining copy for journalists who enjoy an old fashioned media war with titans going toe-to-toe, but Adam Tinworth has pointed out the danger of taking this rather noisy display of “posturing and PR” too seriously. It is distracting people in the news and information business from dealing with the real issues besetting our businesses.

But in this war of words, the true issues seem strangely absent. Where’s the discussion of how newspapers can compete for readers in the age of the attention crash? Where’s the careful analysis of the role of the general publication when their audience’s time is being slowly eaten away by a million and one niche websites that speak more directly to them than anything a national paper publishes? Who is talking about how you rebuild publishing companies to account for the new economic reality of internet publishing?

These are huge issues that are being completely ignored in the bluster of Murdoch’s posturing. These issues are critical in the development of any paid content strategy.

I would like to think that, behind the public bluster, these issues are being discussed in strategy meetings across the industry, but I doubt it. I would wager that Adam and I have discussed these issues over beers more than they have been discussed in any boardroom. I feel relatively confident that I would win this wager.

While Adam highlights the scarcity of attention and abundance of content, industry leaders still boast about the indispensability and exceptional nature of their content. Too many newspaper editors still believe that their competition comes from other newspapers, not from music streamed on Spotify, TV from the BBC’s iPlayer or Apple’s iTunes, or Modern Warfare 2 (which sold 4.7m copies in 24 hours). Newspaper journalism is competing for time and attention against a myriad of other choices in an over-saturated media environment. Until news organisations (and content creators of all stripes) begin to grapple with the economics of abundant content, much of it of very high quality, we’re not going to take the many steps necessary to create sustainable businesses that support journalism.

Print-digital paid content debates require reality

If you have any hope of solving a problem, you had better have a clear sense of what the problem is and what causes it. Listening to the paid content debates in the newspaper industry, the discussion has become polarised, filled with assumptions and assertions rather than clear-headed thinking informed by research and data.

One assertion that I’d like to challenge right up front is the oft-repeated claim that no one makes money with digital content. In the late 90s, I often heard editors say, “The internet is great, but no one has figured out how to make money with it.” The dot-com crash only reinforced this view. However, internet use continued to grow through the crash. Advertising shifted online, especially after Google introduced its search-based advertising model. Within a year or two after the crash, many large news sites like the New York Times and the Washington Post were making money. A 2008 study in the US by Borrell Associates found that almost all of the 3,100 news websites surveyed were profitable.

The Great Recession has hit both the print and digital businesses of the newspaper industry with a vengeance, putting tremendous pressure on newspapers. As I’ve said, the economic crisis has reopened divisive debates between the print and digital sides of the newspaper business. To get through this crisis and rebuild sustainable businesses that support professional journalism, we’ve got to get real about the economic reality we face, not just in the depths of this recession but after it ends.

Steve Yelvington has more experience with digital journalism than many people have in journalism full stop. He fights bluster with data, and even a graph. Most news websites, he writes, exhibit a long tail with a hump.

Most of those visitors come once or twice, probably following a link from a search engine or another website. They’re looking for something very specific. They find it (or not) and leave.

Then the number drops like a rock. Hardly anybody comes five times in a month.

But over on the right side you have an interesting little lump.

That lump is your loyalists. You’re going to have a hard time getting people to pay who come via a search engine, look at a page and leave. However, your loyalists see value in what you do and might be willing to pay. Working to convert more users into loyalists, and giving loyalists some way to pay for the content they value, might add a revenue stream alongside business-cycle-sensitive advertising.
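If you want to check whether your own site has Steve’s lump, the analysis is straightforward: count visits per user over a month and bucket the counts. A quick sketch, with the input format invented for illustration; adapt it to whatever your analytics export looks like.

```python
# Bucket monthly visit counts to reveal the "long tail with a hump".
from collections import Counter

def visit_histogram(visits):
    """visits: iterable of (user_id, date) pairs for one month.
    Returns a Counter mapping visit-count -> number of users."""
    visits_per_user = Counter(user_id for user_id, _ in visits)
    return Counter(visits_per_user.values())

# Most users appear once or twice; the loyalists are the small lump
# of users with dozens of visits on the right of the curve.
sample = [("a", 1), ("a", 2), ("b", 1), ("c", 1)] + [("fan", d) for d in range(25)]
print(visit_histogram(sample))  # Counter({1: 2, 2: 1, 25: 1})
```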

Steve argues for a sophisticated model that leaves visitors who only look at one or two pages “unmolested” but asks those who view several pages to register with the site. US news group McClatchy has used this model, and the FT uses it as well.

Determining how many pages people should see before registering and paying, and what to charge, are unknowns, but a flexible system with graduated fees and clear benefits is a much more sophisticated model than some of the absolutist, binary solutions being thrown around.
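As a rough illustration of what such a graduated, metered approach might look like in code, here is a minimal sketch. The thresholds are invented for illustration; they are precisely the unknowns mentioned above.

```python
# Hypothetical metered-access policy: leave light visitors alone, ask
# heavier readers to register, and ask the heaviest - the loyalists -
# to pay.
def access_policy(pages_this_month, registered, subscriber):
    FREE_PAGES = 2     # casual search visitors stay "unmolested"
    REGISTER_AT = 20   # several pages a month: ask for registration

    if subscriber or pages_this_month <= FREE_PAGES:
        return "serve"
    if pages_this_month <= REGISTER_AT:
        return "serve" if registered else "ask_to_register"
    return "ask_to_pay"  # the loyalist lump on the right of the curve

print(access_policy(1, False, False))   # serve
print(access_policy(8, False, False))   # ask_to_register
print(access_policy(40, True, False))   # ask_to_pay
```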

Rewarding and building loyalty

I think that loyal readers should be rewarded, and I believe that they will reward publications they value with not only their traffic but also their monetary support. I think that newspapers could do much more to convert some passing traffic to more loyal readers, but it’s going to take better design and more engagement from journalists, which I know will be difficult with slimmer staffs. Not all journalists want to engage with readers, but I think that those who do and do it well should be encouraged and supported.

To successfully deal with the problems we’re facing during the recession and will face once growth returns, we need more data, more research, more experimentation and more sophistication in our discussions about business models. There is no silver bullet, no one solution that will save journalism. We’re going to have to try a number of things and a number of ways to earn money to support professional journalism. However, one of the first steps we need to take is to get past these lazy assertions and outdated assumptions about the business. Much of the conventional wisdom is rooted in the print-digital culture wars in newspaper newsrooms, and it’s in desperate need of updating.

AP’s Curley v Curley and News Corp’s Rupert v Rupert

The newspaper industry has woken from its slumber and realised the enemy is not the internet. The enemy is actually you and me, those of us who use the internet. According to Associated Press CEO Tom Curley, “third parties are exploiting AP content without input and permission”, and:

Crowd-sourcing Web services such as Wikipedia, YouTube and Facebook have become preferred customer destinations for breaking news, displacing Web sites of traditional news publishers.

I’m linking to this on one of those third-party sites, Google News, which has a commercial hosting agreement with the AP. Those bloody paying parasites!

Curley was speaking at the World Media Summit in Beijing’s Great Hall of the People. Does Curley know who added those links to Wikipedia, shared those stories on Facebook or uploaded those videos to YouTube? Internet users: you, me and millions of others around the world. For Mr Curley, the internet is a “den of thieves”, says Jeff Jarvis.

Jeff offers his argument against this view of the world. However, I’d like to stage another bit of a debate, one made possible by the virtual time travel of the internet. Let’s get ready to rumble! In this corner, we have the Curley of 2009, who argues:

We content creators must quickly and decisively act to take back control of our content.

With that jab, a slightly younger, slightly more optimistic Curley of 2004 lands a right hook: “The future of news is online, and traditional media outlets must learn to tailor their products for consumers who demand instant, personalized information.” The Curley of 2004 instead sees this future from his own past:

the content comes to you; you don’t have to come to the content. So get ready for everything to be ‘Googled,’ ‘deep-linked’ or ‘Tivo-ized’.

Ouch Tom 2009, that looks like it hurts. Next up in our virtual cage match is a spry 78-year-old, Rupert Murdoch! Let’s start with the Rupert of 2009:

The aggregators and plagiarists will soon have to pay a price for the co-opting of our content. But if we do not take advantage of the current movement toward paid content, it will be the content creators — the people in this hall — who will pay the ultimate price and the content kleptomaniacs who triumph.

Fighting back is the fighting fit Rupert “The Digital Immigrant” Murdoch of 2005:

Scarcely a day goes by without some claim that new technologies are fast writing newsprint’s obituary. Yet, as an industry, many of us have been remarkably, unaccountably complacent. Certainly, I didn’t do as much as I should have after all the excitement of the late 1990s. I suspect many of you in this room did the same, quietly hoping that this thing called the digital revolution would just limp along.

It’s a shame to see this come to blows. These guys should really talk to each other. With Rupert 2009 on the ropes, Rupert 2005 delivers this shot:

What is happening is, in short, a revolution in the way young people are accessing news. They don’t want to rely on the morning paper for their up-to-date information. They don’t want to rely on a god-like figure from above to tell them what’s important. And to carry the religion analogy a bit further, they certainly don’t want news presented as gospel.

Instead, they want their news on demand, when it works for them.

They want control over their media, instead of being controlled by it.

Ouch. Can’t you guys make up your mind? Has the Great Recession changed consumer internet behaviour and media consumption trends? Or did the industry’s complacency finally catch up with it?

New York Times: More innovation in commenting

As I wrote recently, news organisations have only begun to scratch the surface in terms of innovative interfaces that could encourage readers to explore the rich content on their sites and also increase and improve reader interaction. When I wrote that post, the Washington Post had debuted a Django-based commenting system called WebCom that reminded me of ThinkMap’s Visual Thesaurus. WebCom reflects comment popularity, which can become a self-reinforcing cycle. I will be interested to see if they might add another layer to the interface that allows people to explore the conversation based on themes or topics. This could be easily achieved by using Thomson Reuters’ Calais semantic analysis system to expose themes in the comments.
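To make that idea concrete, here is a rough sketch of what such a theme layer might look like. extract_topics() is a hypothetical stand-in for whatever semantic tagging service you use (Calais or otherwise); I am not reproducing the actual Calais API here.

```python
# Hypothetical theme layer over a comment thread.
from collections import defaultdict

def extract_topics(text):
    """Placeholder: send the text to a semantic analysis service and
    return the list of topic tags it detects."""
    raise NotImplementedError("wire up your tagging service here")

def group_comments_by_theme(comments):
    """comments: list of (comment_id, text) pairs.
    Returns a dict mapping theme -> list of comment ids, the index a
    theme-based browsing interface could be built on."""
    themes = defaultdict(list)
    for comment_id, text in comments:
        for topic in extract_topics(text):
            themes[topic].append(comment_id)
    return themes
```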

Now the New York Times has debuted a new visual commenting tool. Its debut is being used to help people discuss and explore some of the issues in the healthcare (some might argue the health insurance) debate in the US. The boxes all relate to an issue in the debate, and a drop-down menu allows you to jump to that topic and see a brief overview of the issue. The relative size of each box reflects the number of comments, and hovering over the people icons at the bottom of a box allows you to quickly see a bit of the comment. You can also easily jump to replies to comments that you have left. It appears that the topics aren’t generated organically by the discussion but are created by the New York Times editorial staff. In some ways, it’s a slightly advanced, and somewhat stilted, form of threading. It’s almost more of a discussion system than strictly a commenting system.

At the time of writing this post, there are few comments, so it’s difficult to see how the tool will work, both conversationally and technically, as the volume of comments increases. That will be the real test of the system, because the volume of comments on media sites is one of the main reasons news sites need interface innovation in commenting systems.

Here on Strange Attractor, the comments tend to be more off-site, posts written in response to what Suw and I write. Very rarely do we have a high volume of comments on the blog, which makes it easy for us to manage and for our readers to engage with. We don’t write about politics or hot button social issues. Rather, we write about a very specialist, niche topic. The conversations tend to be pretty high level, and we love our readers because of the level of intelligence that they bring.

On news sites, the volume of comments is much, much higher, and it quickly becomes difficult for journalists and readers to follow the discussion and have any meaningful interaction. The comments tend not to respond to each other but are usually a string of unrelated statements. Most of the current solutions have drawbacks. Threading tends to fragment the discussion, which is what I fear this New York Times interface will do. Voting up, or down, Digg-style helps in some ways but suffers from the same self-reinforcing popularity problem that WebCom faces.

Again, a few criticisms don’t mean I think these experiments aren’t worthwhile. Far from it: I think it’s great to finally see some interface exploration in terms of commenting, and not just content presentation, by news websites. Hopefully, this is a sign of things to come. It’s long past time for news organisations to realise that the volume of comments they receive requires something more than flat, linear comment threads below blog posts or articles. Done right, interface innovation will help increase participation, improve the user experience and interaction, and maybe even raise the quality of the conversation.
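On the voting problem specifically, one speculative way to blunt the rich-get-richer effect would be to rank comments by votes per impression rather than raw votes, so comments that haven’t been seen much aren’t buried by early winners. A minimal sketch of that idea, not anything WebCom or the Times actually does:

```python
# Speculative: normalise votes by how often a comment has been shown.
# The +1 smoothing keeps brand-new comments from dividing by zero.
def fairness_adjusted_score(votes, impressions):
    return (votes + 1) / (impressions + 1)

# An old comment with 50 votes over 1,000 views now scores below a new
# comment with 5 votes over 20 views:
print(fairness_adjusted_score(50, 1000))  # ~0.051
print(fairness_adjusted_score(5, 20))     # ~0.286
```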