Paid-vs-Free

Just a quickie: Stephanie Booth has a great post on paid vs free content, taking the kind of sensible and level-headed approach that I am failing to see from most media companies. Key for me was this bit:

This is a tough message to pass on to a client: “The money you’re paying me to write is actually marketing money. The content I provide will add value to your website for years to come, and help build your reputation and credibility. How much is that worth?” It’s not just words on a screen, disposable stuffing like so much of what is unfortunately filling our newspapers today. Scanned today, gone tomorrow. Great writing, online, has no expiry date.

Dead right.

The future of context and the future of journalism

Matt Thompson has been doing deep thinking about the future of journalism since he and Robin Sloan created the EPIC flash animations while at Poynter at the urging of Howard Finberg. Matt has been thinking about context and ways that journalism can transcend shortcomings that were a product of linear platforms. He explored it during a Reynolds Fellowship at the University of Missouri and at the blog Newsless. Yesterday, he explored the topic at a panel with Jay Rosen and Tristan Harris of Apture. I’ve had the pleasure of meeting all three panelists in the past. This discussion did something I don’t often see in future of journalism conversations: it actually moved things forward and has jump-started a very good discussion on specific actions to take next.

I see a divide. Covering traditional media’s shift to digital media, I hear strategies for more content, strategies to optimise content and the production of content and ways to monetise content. Content. Content. Content. The content industries think that the recipe for digital success is to digitise and monetise content. This ignores the fact that more content is competing for a finite audience and a reduced advertising spend in the midst of a frail recovery. On the other side of the divide, you have digital companies that know the competition is not over content but attention. Who’s winning in the battle for attention? The average time spent reading news on local newspaper websites is 8-12 minutes a month. The average time spent on Facebook is seven hours a month.

Matt thinks the volume of “episodic” news – hundreds of headlines washing over us each day – might be the problem. The media is drowning audiences in a flood of content of its own creation. Matt said:

But mounting evidence indicates that this approach to information is actually totally debilitating. Faced with a flood of headlines on an ever-increasing variety of topics, we shut off. We turn to news that doesn’t require much understanding – crime, traffic, weather – or we turn off the news altogether.

Matt was quoted on Twitter as saying: “People don’t want more info; they want the minimum info they need to understand a topic.”

Being inundated with information isn’t making us more informed. In fact, as Matt points out, it’s leading to a numbness, a negative feedback loop that sees news as a problem that needs solving. What are we as journalists doing to solve the problem? Creating more duplicative content only reinforces the problem, causing audiences to shut off. I transit through Kings Cross every day, where people handing out freesheets of all descriptions are ignored only slightly less than chuggers (charity muggers). Good luck with a paid content strategy based on content that people wish there was less of anyway.

Matt suggests that instead of “episodic news” and topic pages of links to these snippets of news, we need to produce “systemic understanding”.

Journalists spend a ton of time trying to acquire the systemic knowledge we need to report an issue, yet we dribble it out in stingy bits between lots and lots of worthless, episodic updates.

Matt asks some key questions on the how: what we can do digitally that overcomes some of these problems of journalism, structurally and also in terms of re-constituting journalism as a self-sustaining business built on delivering value to audiences. These are the questions that I’m asking right now, and what Suw and I have been thinking about from 5-9 over the last 18 months. We’ve got some pretty clear ideas on the how. (Yes, I’m being a bit cryptic, and unfortunately, I’m going to have to leave it at that, dear reader.)

The great thing about having such a digitally native panel is that you can dive deep into their statements and continue the conversation on a site they set up for the purpose. Matt’s opening statement is at Newsless. Jay has posted his opening statement on PressThink, and Tristan has posted his statement on his blog. Steve Myers did a great bit of live blogging at Poynter from the panel, and Elise Hu has a great summary of the panel as well.

FOR HIRE: I’m leaving the Guardian

FOR HIRE: That was the subject line of an email that I sent to Neil McIntosh, then of the Guardian, in the summer of 2006. I had met Neil at the Web+10 conference at the Poynter Institute in the US in 2005 before I came to London, and the email was a long shot. I wanted to stay in the UK with my then girlfriend, now wife, Suw, and my options were running out at the BBC. I had managed to extend my temporary assignment in London once, but now we were bracing for my return to the US to my old post, Washington correspondent of BBCNews.com. We expected to be separated by an ocean for months. Fortunately, that’s not what happened. A few days later I met with Emily Bell and, after what can be described more as a meeting of the minds than a job interview, I had an offer.

Now, three and a half years later, I’m joining many of my colleagues in accepting another offer from the Guardian, voluntary redundancy. My last day is 31 March. I don’t have a new position confirmed at this point, although Suw and I have a number of exciting possibilities. Like my colleague Bobbie Johnson, I’ve picked up a bit of “entrepreneurial zeal” not only from the technology pioneers that I’ve covered, but also from the journalism pioneers that I’ve worked with both at the BBC and the Guardian. Suw and I want to continue to push the boundaries in our fields and we’re both open to new opportunities. If you’ve got a cutting edge journalism or social media project, get in touch.

It’s been a real honour to work at the Guardian and I’m grateful to everyone who helped me. We’ve achieved a lot in the past three and a half years, although it felt like we were always impatient to do more.

Despite the wrenching changes in journalism right now, I’m optimistic. Suw and I are excited about writing the next chapter of our careers. For me, I’m hoping it will be one that helps journalism make the transition to the future. I have almost 15 years of experience in digital, multi-platform journalism – in strategy, in implementation and in just doing it – and I’m thrilled by some of the options that Suw and I have before us at the moment. Nothing is settled, though, so I’m still open to offers, as well as being available for short-term writing and freelancing. If you’ve got something exciting in the works and need one of the most experienced hands in digital journalism, get in touch.

The truth does not lie midway between right and wrong

There’s a habit amongst journalists to act as if there’s a continuum between opposing viewpoints and that the truth must therefore lie somewhere roughly in the middle, especially on health, science and certain tech stories. We saw it before with the reporting on now disgraced ‘scientist’ Andrew Wakefield and his very well debunked claims that MMR causes autism. And we’ve seen it regularly since.

Now the House of Commons science and technology committee has examined homeopathy provision on the NHS and has concluded that evidence shows homeopathy works no better than placebo and that the NHS should not provide or recommend it. The media seems to have decided that solid science is one end of a continuum of truths with homeopaths at the other end, and that it’s their job to shilly-shally around in the middle and to present both sides in a ‘fair and balanced’ manner. To which I call bullshit.

Science isn’t about the balance of opinions but the balance of evidence. Evidence is bigger than any one person or research institute: it’s the findings of experiments that can be consistently repeated by anyone, anywhere with the right knowledge and equipment. When the evidence stacks up in favour of one theory, then that’s the theory that we must hold as true until/unless reliable and repeatable experiments lead us to refine or change it.

And that’s the thing. The reliable and repeatable experiments show that homeopathy performs no better than a placebo. Yet journalists seem intent on portraying this story as “MPs say one thing, homeopaths say something else, and who knows who’s right?!”. The Guardian, for example, uses a lot of fightin’ words (my bold):

To true believers, including Prince Charles, homeopathy is an age-old form of treatment for a wide range of ills. To most scientists, it is nothing more than water. Today the sniping between the devotees and the denialists became a head-on collision, as the House of Commons science and technology committee challenged the government to live by its evidence-based principles and withdraw all NHS funding from homeopathic treatment. …

…the money could be better spent, said the committee, accusing the Department of Health of failing to abide by the principle that its policies should be evidence-based. …

The Prince’s Foundation for Integrated Health countered the MPs’ attack by citing a peer-reviewed scientific study in the International Journal of Oncology which, it said, proved that homeopathic remedies were biologically active. …

But this isn’t a fight. It’s not seconds out, round one. Evidence points overwhelmingly to the conclusion that homeopathy doesn’t work.

The Guardian, along with many other news outlets, also gives considerable weight to pro-homeopathy voices, as if all opinions were equal and this were a debate. Ben Goldacre is collecting examples over on Bad Science. The BBC, for example, comes in for a lot of criticism in Ben’s comments:

fgrunta said,

I just saw this story break on BBC News. They brought on a Homeopath GP who just went and told I don’t know how many millions of viewers that the “evidence is clear” that homeopathy works and she then proceeded to start quote papers.

Grrr….

And:

ALondoner said,

An excellent report, nice to see that MPs can sit down, review the evidence and then say something intelligent.

On the other hand, The BBC (and some other news outlets) seem to be so obsessed with giving each side of the story, they make it sound like there is reasonable evidence for both points of view.

When someone is found guilty of a crime, journalists doesn’t put guilty in quotation marks. Nor do they pick a self appointed expert to rant about why that person was actually not guilty. So why doesn’t the BBC simply report that supporters of homoeopathy say it works, but all independent reviews shows that it does not.

Instead, we get “many people – both patients and experts – say it is a valid treatment and does work”, without at least caveating that with “but all systematic reviews show it is no better than placebo” and explaining who these “experts” are. Experts in giving homeopathy perhaps, but are they experts in telling whether it works better than placebo?

Just sent a few comments to the BBC via their well hidden complaints website:

https://www.bbc.co.uk/complaints/forms

The problem is, this is not a debate. The evidence is in: homeopathy doesn’t work. Perpetuating the myth that ‘remedies’ which amount to nothing more than sugar pills or water that’s been shaken up a bit can cure illness is potentially harmful. In fact, people die because they are convinced that homeopathy will work and so don’t seek proper medical attention. The media is complicit in those deaths because it helps to keep the myth of homeopathy alive.

What I don’t understand is why journalists feels the need to create this false dichotomy in the first place. When astronomers discover a new planet orbiting a distant star, journalists don’t start looking for dissenting astrologers. When palaeontologists discover a new dinosaur, journalists don’t seek out creationists or intelligent design advocates to say that it’s all just a big trick by God. Why is it that in other fields they feel at liberty to talk utter hogswash and to ignore solid evidence?

This isn’t a science problem, or a science communications problems, this is a serious journalistic problem. This is journalists imposing a frame onto the story that is utterly inappropriate. This leads to a misrepresentation of the evidence and does a serious disservice to everyone who reads these stories and takes them at face value.

There is always some doubt in science, but this does not mean that science is unreliable or that opposing views are equally valid. In homeopathy, the level of doubt is very, very low – so low, in fact, that I feel perfectly happy saying “homeopathy doesn’t work”, because that’s the hypothesis that’s been proven correct time and time again.

Other scientific theories carry more doubt, and there we do need to be careful to be clear about what levels of confidence we should have. But even in those stories, this doesn’t mean we need to give equal weight to for and against: we just need to be clear about how tentative or firm the science is.

And again, let me reiterate: This is important not just from a journalistic integrity point of view, but because misinformation kills. Actual people actually die. They actually get ill, actually fail to get the right treatment, and actually suffer because of it. Any action on the part of journalists that encourages people to believe in provably ineffective treatments is unethical. I just wish more journalists thought through what they are writing when covering stories like MMR and homeopathy.

The media, the internet and the 2010 British election

Last night, I went to a panel discussion at the Frontline Club here in London looking at the role that the internet and social media might play in the upcoming general election. I wrote a summary of the discussion on the Guardian politics blog. As I said there, the discussion was Twitter heavy, but as Paul Staines aka Guido Fawkes of Order-order.com said, Twitter is sexy right now.

The panel was good. Staines made some excellent points, including how the Conservatives were focused on Facebook rather than Twitter for campaigning. Facebook has more reach and was “less inside the politics and media bubble”, Staines said.

Alberto Nardelli of British political Twitter tracker, Tweetminster, said that the election would be decided by candidates and campaigns not things like Twitter. No one on the panel thought the internet or the parties’ social networking strategies would decide the British election. Alberto said that Twitter’s impact would be more indirect. People are sharing news stories using Twitter, which is causing stories to “trickle up” the news agenda.

Chris Condron, head of digital strategy at the Press Association, made an excellent point that so many discussions of social media focus on its impact on journalism and not its impact on people. Facebook and Twitter allow people to organise around issues, which is another form of civic participation. As I said on my blog post at the Guardian, I would have liked for the panel to explore where this organisation around issues might have an impact in marginal constituencies.

Like so many of these discussions, I thought the questions were binary and missed opportunities to explore the nuance of several issues. The moderator, Sky News political correspondent Niall Paterson, implied in his questions that if social media didn’t decide the election, it had no relevance. It was an all-or-nothing argument that I’ve heard before. Change is rarely that absolute. In the US, the role of the internet in politics has been developing for the past decade. Few people remember that John McCain was the first candidate to raise $1m online, not in 2008 but in 2000.

Paterson portrays himself as a social media sceptic, and I can appreciate that. I can appreciate taking a contrarian position for the sake of debate. However, some of his points last night came off as being ill-informed. The panel was good in correcting him, but he often strayed from moderating the discussion to filibustering.

His portrayal of the Obama campaign was simplistic. Alberto said at the Frontline Club that Obama had a campaign of top down and bottom up, grass-roots campaigning, and as British political analyst Anthony Painter pointed out, Obama’s campaign was a highly integrated mix of traditional campaigning, internet campaigning and mobile. (Little coverage focused on Obama’s innovative mobile phone efforts. Most people don’t see the US as a particularly innovative place in terms of mobile, but it was one of the more sophisticated uses of mobile phones in political campaigning I’m aware of.) I love how Anthony puts it: Obama’s operation was “an insurgent campaign that was utterly professional”.

Paterson also implied that Twitter would tie journalists to desks. The only thing tying journalists to desks are outdated working methods. I’ve been using mobile data for more than a decade to stay in the field close to stories. During the 2008 election in the US, my Nokia multimedia phone was my main newsgathering tool. It allowed me to aggregate the best stories via Twitter and use Twitpic to upload pictures from my 4000 mile roadtrip and from the celebrations outside the White House on election night. As I said on Twitter during the discussion:

moderator makes assumption that social media chains journalists to desk. Ever use a mobile phone? It’s mobile!

Sigh. Sometimes I feel like a broken record. Technology should be liberating for journalists, and more journalists should be exploring the opportunities provided by mobile phones and services like Twitpic, Qik, Bambuser and AudioBoo.

You can watch the entire discussion from the Frontline Club here, and here is Anthony Painter’s excellent presentation on the state of internet campaigning in the US and the UK:

Lessons in statistics

This week brought two really fascinating insights into the world of statistics. The first was from a most unusual source: The Daily Mail (not my usual read – the link was posted to the Bad Science forum). They had run with the story Cracked it! Woman finds six double yolk eggs in one box beating trillion-to-one odds, which was then pretty rigorously debunked by the Mail’s own Michael Hanlon.

In Eggs-actly what ARE the chances of a double-yolker? Hanlon points out that young hens tend to produce more double yolks than older hens, and that flocks tend to be of the same age, so six double-yolkers is not an unusual occurrence for a young flock. Furthermore, double-yolkers are heavier than single-yolked eggs, so when the eggs are sorted by weight they will tend to wind up in the same box. So really, a box of six double-yolkers isn’t that much of a surprise.
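Hanlon’s correction is really a lesson in conditional probability: the “trillion-to-one” figure treats each egg as independent, while flock age and weight sorting mean double-yolkers cluster in the same box. A quick simulation makes the gap vivid – the per-egg rates below are illustrative assumptions of mine, not figures from the Mail:

```python
import random

def box_is_all_double(p_double, eggs=6):
    """Simulate one box; True if every egg is a double-yolker."""
    return all(random.random() < p_double for _ in range(eggs))

def estimate(p_double, trials=100_000):
    """Monte Carlo estimate of the chance a box is all double-yolkers."""
    return sum(box_is_all_double(p_double) for _ in range(trials)) / trials

random.seed(42)

# Naive model: roughly 1 egg in 1,000 is a double-yolker and eggs are
# independent, so P(six in a row) = 0.001 ** 6 = 1e-18 -- effectively never.
naive = 0.001 ** 6

# Hanlon's point: eggs in one box come from the same young flock and are
# weight-sorted together. If, illustratively, half the eggs in the heaviest
# grade from a young flock are double-yolkers, a full box is commonplace.
correlated = estimate(0.5)

print(f"independent model: {naive:.0e}, correlated model: ~{correlated:.3f}")
```

With these made-up numbers the correlated model produces a full box roughly once every 64 boxes (0.5 to the sixth power), around eighteen orders of magnitude more often than the independence assumption suggests – which is the whole of Hanlon’s argument in one line of arithmetic.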

The second was from WNYC’s RadioLab, a great radio show and podcast from NPR in the States which has now become a must-listen for the gym. I love RadioLab – they cover science stories in an engaging, entertaining and thought-provoking way. Their programme from Sept 9 last year was called Stochasticity, “a wonderfully slippery and smarty-pants word for randomness”. The first two sections should be compulsory listening for every journalist:

A Very Lucky Wind
Laura Buxton, an English girl just shy of ten years old, didn’t realize the strange course her life would take after her red balloon was swept away into the sky. It drifted south over England, bearing a small label that said, “Please send back to Laura Buxton.” What happened next is something you just couldn’t make up – well, you could, but you’d be accused of being absolutely, completely, appallingly unrealistic.

On a journey to find out how we should think about Laura’s story, and luck and chance more generally, Jad and Robert join Deborah Nolan to perform a simple coin-toss experiment. And Jay Koehler, an expert in the role of probability and statistics in law and business, demystifies some of Jad and Robert’s miraculous misconceptions.

And then the first half especially of:

Seeking Patterns
Fine. Randomness may govern the world around us, but does it guide US?? Jonah Lehrer joins us to examine one of the most skilled basketball teams ever, the ’82 – ’83 ’76ers, and wonders whether or not the mythical “hot hand” actually exists.

Then we meet Ann Klinestiver of West Virginia, an English teacher who was diagnosed with Parkinson’s in 1991. When she began to take a drug to treat her disease, her life changed completely after one fateful day at the casino. Jonah discusses the neurotransmitter dopamine and the work of Wolfram Schultz, whose experiments with monkeys in the 1970s shed light on Ann’s strange addiction and the deep desire for patterns inside us all.

Statistics is something that you constantly see journalists getting wrong. The Bad Science forums are rife with examples of statistics abuse. It’s not surprising, because it’s actually very easy to get statistics wrong: probability in particular can be very counter-intuitive, and assumptions that seem to be common sense are frequently just our brains playing silly buggers with us. Personally, I think that all journalists should have to study statistics, even freelancers, because it’s so easy to get it wrong and so useful when you get it right. But, in the meantime, I’d settle for more people listening to shows like RadioLab and reading blogs like Good Math, Bad Math, Bad Science, or Junk Charts.
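The coin-toss experiment in the Stochasticity episode turns on one of those counter-intuitive facts: genuinely random sequences are much streakier than people expect, which is how a statistician can spot a fabricated run of flips. A few lines of code – my own sketch, not anything from the programme – show it:

```python
import random

def longest_run(flips):
    """Length of the longest streak of identical outcomes in a sequence."""
    best = cur = 1
    for prev, nxt in zip(flips, flips[1:]):
        cur = cur + 1 if prev == nxt else 1
        best = max(best, cur)
    return best

random.seed(7)
trials = 10_000
runs = [longest_run([random.choice("HT") for _ in range(100)])
        for _ in range(trials)]
avg = sum(runs) / trials

# Real randomness is streakier than intuition suggests: in 100 fair flips
# the longest run of heads or tails averages close to 7, while people asked
# to fake a random sequence rarely write a streak longer than 3 or 4.
print(round(avg, 1))
```

That gap between what randomness actually looks like and what we assume it looks like is exactly the kind of trap that catches journalists writing about “trends” and “streaks”.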

Newspapers and Microsoft: Dysfunctional corporate cultures and the fall of empires

Steve Yelvington flagged up a comment piece in the New York Times by Dick Brass, a vice president at Microsoft from 1997 until 2004. Brass worked on Microsoft’s tablet PC efforts, something I remember covering at Comdex in 2002. Despite a huge push by Microsoft, they never became mainstream outside of a few niche applications, and Brass blames it in part on in-fighting at Microsoft. Brass wrote:

Internal competition is common at great companies. It can be wisely encouraged to force ideas to compete. The problem comes when the competition becomes uncontrolled and destructive. At Microsoft, it has created a dysfunctional corporate culture in which the big established groups are allowed to prey upon emerging teams, belittle their efforts, compete unfairly against them for resources, and over time hector them out of existence. It’s not an accident that almost all the executives in charge of Microsoft’s music, e-books, phone, online, search and tablet efforts over the past decade have left.

Brass predicted that unless Microsoft was able to overcome this dysfunctional corporate culture and regain “its creative spark”, it might not have much of a future. In highlighting Brass’ piece, Steve wrote in his tweet:

Every behavior that’s killing Microsoft, I’ve seen at a newspaper company. http://bit.ly/9W30W8

Generosity and post-scarcity economic media models: Why I love participatory culture

One of the stumbling blocks for media companies looking to create sustainable digital business models is that the economic models differ in fundamental ways from the predominant models of the 20th Century.

Look at the media models of the 20th Century, and they are all based to some extent on scarcity and monopoly. Printing presses are expensive and create an economic limit to the number of newspapers that any given market will support. Satellites are incredibly expensive. Cable television infrastructure is expensive. Scarcity leads to the development of stable, de facto monopolies. Sky dominates satellite television in the UK. Cable television providers are usually granted monopolies in all but the largest of cities. Again, in all but the largest markets, newspapers have come to enjoy a monopoly position. (It is why I find it a bit rich that media monopolies are railing against Google. Monopolists trying to use the law and courts to defend their position against a rising monopolist should be the plot for a farce. Why don’t we create a web television series?)

The internet is different because media companies don’t have monopoly control over the means of distribution. News International and Gannett don’t own the presses that power the internet. BSkyB doesn’t own the satellites. Comcast owns the last mile of copper, but much of the internet is beyond its control.

The cost of media production has also dramatically decreased, allowing people to create media with motivations that are not economic, which seems insane and alien to people who make a living creating media. However, creating media and sharing it with others is key to many communities online. Note, I’m talking about people sharing the media that they create, not sharing media created by people whose motivations are economic. Why the distinction? Sharing is a loaded term to the ‘creative industries’, which want to redefine it as theft. I’m not talking about sharing their content.

For those who don’t understand the “culture of generosity” on the internet, please read Caterina Fake’s moving defence of participatory culture. Caterina was one of the co-founders of photo sharing site Flickr and launched “a collective intelligence decision making system” called Hunch last year. Drawing on examples from her own experience going back to 1994, she explains why:

people do things for reasons other than bolstering their egos and making money

That’s about as foreign as one can think to mass media culture. Not doing something for ego or money? Why bother?

I can tell you why I bother. A global culture of participation has been, for me, key in meeting one of Maslow’s hierarchy of needs: belonging. Originally participatory culture was something I did in my spare time because there was no place for it in my professional work, but co-creation in journalism has been one of the most richly rewarding aspects of my career.

This is a mental bookmark for a much longer post looking at the economics of post-scarcity media, something I’ve been thinking about after meeting Matt Mason, author of The Pirate’s Dilemma. I first met Matt when I chaired a discussion about his book at the RSA, and I interviewed him for the Guardian’s Tech Weekly podcast about piracy, copyright and remix culture. Matt said that we need more study of “post-scarcity economics”, something not seen in real-world goods but definitely in the virtual world of digital content.

Journalists: Belittling digital staff is not acceptable

Patrick Smith, recently of paidcontent.co.uk, has a post about the economics of regional newspapers in the UK and he makes the case (again) that the challenges facing British regional newspapers come down quite simply to economics.

This is not about the quality of journalism – this is about economics: The web is simply more effective for advertisers – Google ads are more effective and have less wastage than an ad in the Oxdown Gazette, no matter how good the editorial quality of the paper is.

In the post, he quotes “Blunt, the pseudonymous author of the Playing the Game: Real Adventures in Journalism blog” who defines a “Web Manager” as:

An expert in cut and paste. Probably a journalist but not necessary.

My issue isn’t with Blunt. Let’s be honest with ourselves, this is a sadly typical comment in the industry regarding digital staff. It’s not even new. I’ve heard comments like this for most of my 16-year career. During this Great Recession, I can understand psychologically and emotionally where they come from: It’s an anxious time for journalists, all journalists, regardless of medium or platform.

The digitally focused staff are working just as hard to preserve professional journalism as those staff still focused on print. I have spent most of my career developing unique digital skills while producing content for broadcast and print. I have often felt that I had to work harder than traditional journalists to prove that I’m not just an ‘expert in cut and paste’. I work very hard to know my beats, work across platforms and produce high quality journalism that meets or exceeds the industry standards of print, broadcast and web journalism. I am not the only digital journalist who puts this sort of effort in. Yet the industry is still rife with the same anti-digital prejudice I witnessed ten years ago.

It’s long past time for senior figures in journalism to publicly state that demeaning digital staff is not acceptable. Here are a few basic facts about digital journalism:

  • I use a computer for much of my work. That doesn’t mean I’m a member of the IT staff.
  • I know about technology. That doesn’t mean that I’m incapable of writing.
  • My primary platform is digital. That doesn’t mean my professional standards are lower.

Prejudice towards digital journalists needs to stop. It sends a message to digital journalists that they are unwanted at a time when their skills are desperately needed by newspapers. Digital staff should not be the convenient whipping women and men for those angry and upset about economic uncertainty in the industry.

There is nothing totemic about print and paper that makes the journalism instantly better or more credible. Quality broadsheets are printed on paper just as sensationalist tabloids are. Let’s measure journalists not by the platform but by their output.

Ushahidi and Swift River: Crowdsourcing innovations from Africa

For all the promise of user-generated content and contributions, one of the biggest challenges for journalism organisations is that such projects can quickly become victims of their own success. As contributions increase, there comes a point when you simply can’t evaluate or verify them all.

One of the most interesting projects in 2008 in terms of crowdsourcing was Ushahidi. Meaning “testimony” in Swahili, the platform was first developed to help citizen journalists in Kenya gather reports of violence in the wake of the contested election of late 2007. Out of that first project, it’s now been used to crowdsource information, often during elections or crises, around the world.

What is Ushahidi? from Ushahidi on Vimeo.

Considering the challenge of gathering information during a chaotic event like the attacks in Mumbai in November 2008, members of the Ushahidi developer community discussed how to meet the challenge of what they called a “hot flash event”.

It was that crisis that started two members of the Ushahidi dev community (Chris Blow and Kaushal Jhalla) thinking about what needs to be done when you have massive amounts of information flying around. We’re at that point where the barriers for any ordinary person sharing valuable tactical and strategic information openly is at hand. How do you ferret the good data from the bad?

They focused on the first three hours of a crisis. Any working journalist knows that during fast-moving news events false information is often reported as fact before being challenged. How do you increase the volume of sources while maintaining accuracy and also sifting through all of that information to find the information that is the most relevant and important?

Enter Swift River. The project is an “attempt to use both machine algorithms and crowdsourcing to verify incoming streams of information”. Scanning the project description, the Swift River application appears to allow people to create a bundle of RSS feeds, whether those feeds are users or hashtags on Twitter, blogs or mainstream media sources. Whoever creates the RSS bundle is the administrator, allowing them to add or delete sources. Users, referred to as sweepers, can then tag information or choose the bits of information in those RSS feeds that they ‘believe’. (I might quibble with the language. Belief isn’t verification.) Analysis is done of the links, and “veracity of links is computed”.
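To make the idea concrete, here is a deliberately naive sketch of what “computing the veracity of links” from sweeper votes might look like. This is my own illustration of the concept, not Swift River’s actual code; the class, the method names and the weighting scheme are all invented for the example:

```python
from collections import defaultdict

class VeracityFilter:
    """Hypothetical sketch of 'crowdsourcing the filter': sweepers mark
    items from the bundled feeds that they believe, and each link's score
    is the sum of the weights of the sweepers who vouched for it."""

    def __init__(self):
        self.votes = defaultdict(set)            # link -> set of sweeper ids
        self.weight = defaultdict(lambda: 1.0)   # sweeper id -> trust weight

    def sweep(self, sweeper, link):
        """A sweeper marks a link as believable."""
        self.votes[link].add(sweeper)

    def score(self, link):
        """Naive veracity score: total weight of sweepers who vouched."""
        return sum(self.weight[s] for s in self.votes[link])

    def ranked(self):
        """Links ordered from most to least vouched-for."""
        return sorted(self.votes, key=self.score, reverse=True)

f = VeracityFilter()
# A reputation system might weight trusted sweepers more heavily.
f.weight["expert"] = 3.0
f.sweep("expert", "http://example.org/report-a")
f.sweep("alice", "http://example.org/report-b")
f.sweep("bob", "http://example.org/report-b")
print(f.ranked())  # report-a (weight 3.0) outranks report-b (1.0 + 1.0)
```

Even this toy version surfaces the design questions the Ushahidi team face: where do the sweeper weights come from, how do they change as a sweeper’s judgements are confirmed or contradicted, and how do you stop a coordinated group from gaming the score?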

It’s a fascinating idea and a project that I will be watching. While Ushahidi is designed to crowdsource information and reports from people, Swift River is designed to ‘crowdsource the filter’ for reports across the several networks on the internet. For those of you interested, the project code is made available under the open-source MIT Licence.

One of the things that I really like about this project is that it’s drawing on talent and ideas from around the world, including some dynamic people I’ve had the good fortune to meet. Last year, when I was back in the US for the elections, I met Dave Troy of Twittervision fame, who helped develop an application to crowdsource reports of voting problems, Twitter Vote Report. The project gained a lot of support, including from MTV’s Rock the Vote and National Public Radio. He has released the code for the Twitter Vote Report application on GitHub.

To help organise the Swift River project for Ushahidi, they have enlisted African tech investor Jon Gosier of Appfrica Labs in Uganda. They have based Appfrica Labs loosely on Paul Graham’s Y Combinator. I interviewed Jon Gosier at TEDGlobal in Oxford this summer about a mobile phone search service in Uganda. He’s a Senior TED Fellow.

There are a lot of very interesting elements in this project. First off, they have highlighted a major issue with crowdsourced reporting: Current filters and methods of verification struggle as the amount of information increases. The issue is especially problematic in the chaotic hours after an event like the attacks in Mumbai.

I’m curious to see if there is a reputation system built into it. As they say, this works based on the participation of experts and non-experts. How do you gauge the expertise of a sweeper? And I don’t mean to imply as a journalist that I think that journalists are ‘experts’ by default. For instance, I know a lot about US politics but consider myself a novice when it comes to British politics.

It’s great to see people tackling these thorny issues and testing them in real world situations. I wonder if this type of filtering can also be used to surface and filter information for ongoing news stories and not just crises and breaking news. Filters are increasingly important as the volume of information increases. Building better filters is a noble and much needed task.
