US law and comments on websites

David Ardia, on legal liability for comments online from Nieman Journalism Lab on Vimeo.

David Ardia, director of the Citizen Media Law Project at Harvard, talks about CDA 230, the section of the Communications Decency Act that provides some protection to people who run web sites.

Joshua Benton from the Nieman Journalism Lab at Harvard University says:

I wish every managing editor in the country could see this 20-minute video. I’ve heard so many misconceptions over the years about news organizations’ legal ability to police, manage, or otherwise edit the comments left on their web sites. They say “the lawyers” tell them they can’t edit out an obscenity or remove a rude or abusive post without bringing massive legal liability upon themselves — and that the only solutions are to either have a Wild West, anything-goes comments policy or to not have comments in the first place.

That’s not true, and hasn’t been true since 1996.

Android, e-ink and live news displays

Android Meets E Ink from MOTO Development Group on Vimeo.

MOTO Development Group is showing off a proof of concept with Google’s Android running on an e-ink display. With Amazon’s Kindle showing some signs of success, it looks like e-readers might finally be reaching a tipping point in terms of adoption. What I find interesting about not only the Kindle but also this proof of concept is the wireless delivery of content. We’re starting to see experimentation in terms of form factor for these devices. We’re not just talking about laptops, netbooks and mobile phones.

With the cost of printing the New York Times roughly twice that of sending every subscriber a free Kindle, there might come a point where wireless delivery to an electronic reading device makes economic sense. This is very speculative and very much out in front of the market and most consumers, but as Nicholas Carlson points out:

What we’re trying to say is that as a technology for delivering the news, newsprint isn’t just expensive and inefficient; it’s laughably so.

The case for print is always made in terms of habit. The argument is that people prefer the tactile experience of the printed page and the easily browsable format, but with the economics of print news delivery becoming untenable, it’s worth seeing what options are available now and which are developing.

Guardian election road trip review: Geo-tagging



With the inauguration of Barack Obama as president of the United States now well behind us, I thought I’d take (a long overdue) look back at the road trip that I took during the US elections for The Guardian and talk about some of the things we tried in terms of innovations in coverage and what I learned from it.
This is the third trip that I’ve taken for the US elections. In 2000, I took a trip with BBC Washington correspondent Tom Carver. Webcasts were the thing of the day, and we took a portable satellite dish and a DV video camera to webcast live or as-live (recorded but treated as live). We answered a range of questions covering topics suggested by our global audience. In 2004, I took another trip with BBC News Online colleague Richard Greene. The trip was my introduction to blogging, and it set the path for my career for the last five years.

The common thread through all of these trips has been an attempt to engage the audience in new ways and field test new digital journalism techniques. Over a series of posts I’ll talk about some of the things that we did for US Election trip 2008.

Geo-tagging

As I mentioned last summer, one of the things that I wanted to try was geo-tagging. I was inspired by the GPS and geo-tagging function in my Nokia N82 to add this to our coverage. The camera in the N82 is stellar. With a 5 megapixel sensor and a brilliant Xenon flash, it is one of the best features in the phone. (I’d be interested in seeing what the new N85 has to offer, apart from the OLED screen. ZDNet has a review.) I’m going to focus on geo-tagging in this post and talk more about mobile newsgathering with the N82 and other smartphones in another post.
As good as the camera is on the N82, I knew that there would be times when I needed Suw’s Nikon D70, a proper D-SLR with interchangeable lenses. But how to add the geo-data? Dan Chung, award-winning photographer and digital innovator at the Guardian, and I had played around with a geo-tagging device from Sony, the GPS-CS1.

A geo-tagger at its most basic has a GPS radio and some memory. It records your location either every so often or after you move a certain distance. It’s not physically connected to the D-SLR in any way, but it does require you to sync the clock of the geo-tagger with the clock in your D-SLR. To add the geo-data to your photos, all you have to do is import the photos to your computer and import the GPS logs from your geo-tagger. Software then compares the time that each photo was taken with your GPS logs and merges the geo-data into the photos’ EXIF metadata. Newer high-end cameras such as the D200 have GPS add-on units (the GP-1), and point-and-shoot cameras like the P6000 have integrated GPS.
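
That merge step is simpler than it sounds. Here is a minimal Python sketch of the timestamp-matching idea, with invented log entries and timings; in practice the bundled software (or a tool such as ExifTool) then writes the matched coordinates into the photos’ EXIF tags.

```python
import bisect
from datetime import datetime

# Hypothetical GPS log: (timestamp, latitude, longitude) fixes, sorted by
# time, as a geo-tagger records them while you move around.
gps_log = [
    (datetime(2008, 10, 20, 14, 0, 0), 38.8977, -77.0365),
    (datetime(2008, 10, 20, 14, 0, 30), 38.8990, -77.0371),
    (datetime(2008, 10, 20, 14, 1, 0), 38.9004, -77.0377),
]

def nearest_fix(photo_time):
    """Return the GPS fix recorded closest in time to the photo."""
    times = [t for t, _, _ in gps_log]
    i = bisect.bisect_left(times, photo_time)
    # Compare the fixes on either side of the insertion point.
    candidates = gps_log[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda fix: abs(fix[0] - photo_time))

# A photo taken at 14:00:40 picks up the 14:00:30 fix.
fix_time, lat, lon = nearest_fix(datetime(2008, 10, 20, 14, 0, 40))
print(lat, lon)  # 38.899 -77.0371
```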

Dan had me test the Sony geo-tagger a couple of years ago, and I wasn’t that impressed. It didn’t acquire the satellites very quickly, and Sony didn’t officially support non-Sony cameras. But although the accuracy wasn’t brilliant, the idea was sound.
I looked around and settled on the GiSTEQ CD110BT. It has a sensitive MTK chipset with 51-channel tracking, and I found the accuracy to be frighteningly good. The GPS track plotted on Google Earth actually shows when I changed lanes in my rental car. The Sony could take minutes to acquire the satellites, but from a cold start, the GiSTEQ usually got a lock in less than a minute. A synthesised voice says “Satellites fixed” when it’s got a lock. To conserve power, it shuts itself off but wakes when moved or vibrated. I carried it around my neck on a lanyard or in the pocket of my camera bag when I was out and about, and a supplied light adhesive patch kept it on my dashboard while driving. The unit also comes with both mains (AC) and car chargers.

That’s the good. The bad is that while GiSTEQ says the CD110BT will work on PCs and Macs, mine didn’t out of the box. It required a firmware update to work with a Mac, but the firmware updater only works on PCs and didn’t like Windows XP running on Parallels virtualisation software. Fortunately, my friend Andy Carvin at NPR gave me five minutes on his PC to update the firmware, but even after that, I had difficulty getting the device to consistently download data. GiSTEQ has since released a new update that they say fixes this. I downloaded some GPS logs tonight without a hitch.

I’d like to try the Amod AGL3080 (review in Wired), which is touted as a driverless geo-tagger. It simply mounts as an external drive on Mac or PC, and all you need to do is copy the data off it. It uses a highly accurate SiRF III chipset. Unlike the GiSTEQ, which charges via its USB cable, the Amod runs on three AAA batteries. Kevin Jaako has a thorough review of it on his blog.

The software that comes with the GiSTEQ promises a lot and delivers most of it without too much fuss. It’s rebranded software from JetPhoto, and as the company says on its site, you don’t actually need a specialised geo-tagger: several Garmin and Magellan GPS units will work with it. The software also works quite nicely with the N82, instantly recognising that the photos already have geo-data embedded in the files. If the geo-data is off, the software has a nice interface for relocating photos and updating the geo-data. It also has a built-in Flickr uploader, although it could be a bit more intuitive and work more seamlessly with Flickr’s title and description fields.
But I didn’t just geo-tag my photos. I also geo-tagged my tweets using Twibble, a geo-aware Twitter app for Nokia S60 phones. Twibble integrates seamlessly with the GPS on the N82, and it also allows you to upload pictures you’ve taken with the phone directly to TwitPic. We used all of this to great effect for Guardian Travel’s first TwitTrip with Benji Lanyado. It is pretty heavy on the battery, but I had a power inverter in the car so everything was fully charged all the time. It was also a bonus to have Nokia Maps and Google Maps on the phone for navigation.
I also geo-tagged all of my blog posts. I either took the geo-data from a Tweet or a photo, or if I didn’t have any geo-data handy, I used sites like Geo-tag.de or Tinygeocoder.com to generate geo-data from an address.
Visualising the trip
Thanks to a quick bit of Python scripting by Guardian colleague Simon Willison, I have a KML file for all of the 2,059 photos that I took over the more than 4,000 miles of the trip. One of the reasons I wanted to geo-tag pictures, posts and tweets was to give a global audience a sense of place; I know most of these towns, but most of our readers don’t.
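
For the curious, here is a rough sketch of what such a script does (this is not Simon’s actual code): KML is just XML, so generating a placemark per photo takes only a few lines of Python. The one gotcha worth a comment is that KML wants coordinates in longitude,latitude order.

```python
from xml.sax.saxutils import escape

# Invented (title, latitude, longitude) triples from geo-tagged photos.
photos = [
    ("Leaving Washington DC", 38.9072, -77.0369),
    ("Rally in Roanoke", 37.2710, -79.9414),
]

def to_kml(photos):
    """Build a minimal KML document with one Placemark per photo."""
    placemarks = "\n".join(
        "  <Placemark>\n"
        f"    <name>{escape(title)}</name>\n"
        # KML coordinate order is longitude,latitude, the reverse of lat/lon.
        f"    <Point><coordinates>{lon},{lat},0</coordinates></Point>\n"
        "  </Placemark>"
        for title, lat, lon in photos
    )
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
            f'<Document>\n{placemarks}\n</Document>\n</kml>')

with open("roadtrip.kml", "w") as f:
    f.write(to_kml(photos))
```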

But apart from easily visualising the trip, why all the fuss? Adding geo-data to content is one of those fundamental enabling technological steps: it opens up a world of possibilities for your content. Geo-tagging your content allows users to subscribe to it based on location. Geo-tag your movie and restaurant reviews, and you can start leveraging emerging location-based services on mobile phones. With Google Maps on mobile and other mapping services, news organisations could provide real-time, location-based information. Geo-data allows users to navigate your content by location instead of by more traditional navigation methods.
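
To make ‘navigate by location’ concrete, here is a minimal sketch (all names and records are invented) of the kind of distance filter that sits behind a location-based content query:

```python
from math import asin, cos, radians, sin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula)."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = (sin(dlat / 2) ** 2
         + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # mean Earth radius is ~6371 km

def stories_near(stories, lat, lon, radius_km=5):
    """Return geo-tagged stories within radius_km of the reader."""
    return [s for s in stories
            if distance_km(lat, lon, s["lat"], s["lon"]) <= radius_km]

# Invented geo-tagged content items.
stories = [
    {"title": "Restaurant review: Bar Shu", "lat": 51.5136, "lon": -0.1301},
    {"title": "Council cuts bus route", "lat": 53.4808, "lon": -2.2426},
]
print(stories_near(stories, 51.5142, -0.1300))  # only the Soho review
```
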
Some companies are already dipping their toes into geo-data. Associated Press stories hosted on Google News have a small inset Google Map based on the location information in the dateline. New York Times stories appear on Google Earth. But datelines are imprecise because they are city-based; when you pull up more accurate data, you can do much more. You can see the possibilities of mapped information on Everyblock.com.
But to get from most news sites to Everyblock, you’ve got to put in the foundational work, both on the technical side and in the journalistic workflow. Having said that, it’s not rocket science. It might seem like a lot of work up front, but once the work is done, geo-data provides many opportunities, some of which could provide new revenue streams.

DEN: Eric Ulken: Beyond the story-centric model of the universe

After appearing virtually at a few Digital Editors Network events at the University of Central Lancashire in Preston, I finally made the trip to appear in person. I really enjoyed Alison Gow’s talk about live blogging the credit crunch for several Trinity Mirror sites using CoverItLive.

Eric Ulken, formerly the LATimes.com editor of interactive technology, spoke about an issue dear to my heart: moving beyond the story as the centre of the journalism universe. One of the reasons I chose to be a digital journalist is that digital journalism brings together the strengths of print, audio and video while also adding new story-telling methods such as data and visualisations. Eric talked about the projects he worked on at the Times to explore new ways of telling stories.

Eric started off by talking about the history of news articles.

The news article so far

  • born 17th Century
  • served us well for about 400 years
  • lots of words (800-1000 words on average)
  • unstructured, grey and often boring.

“What else is there in the toolbox?” he asked.

Some examples (Eric suffered the dreaded no-internet, links-stuck-in-the-presentation problem, so I am a little link-light here; you can see examples that Eric has worked on in his portfolio):

  • text tricks – lists, tables, timelines. (Eric mentioned Dipity as one way to easily create a timeline, but said it was “not quite there”. He also mentioned MIT’s Simile project, which has ‘graduated’ and is now hosted on Google Code. Licensed under a BSD licence, it’s easily something more news organisations could use; a rough sketch of the timeline idea follows this list.) Other text formats include the Q&A and what he called the Q&no, e.g. the New York Times tech blog, which put up questions for Steve Jobs before Macworld. His Steve-ness never answers them, but it lays out the agenda.
  • blogs are the new articles
  • photo galleries as lists, timelines
  • stand-alone UGC
  • video: short-form, packages
  • mapping, charts, data visualisation
  • database applications.
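
As a rough sketch of the timeline idea mentioned in the list above: once events are structured records, a chronological presentation is a trivial transform, which is essentially what tools like Dipity and Simile automate (the records and markup here are invented).

```python
from datetime import date

# Invented structured events; a timeline is just a sorted render of these.
events = [
    {"date": date(2008, 11, 4), "headline": "Election day"},
    {"date": date(2008, 8, 27), "headline": "Obama accepts the nomination"},
    {"date": date(2008, 10, 15), "headline": "Final presidential debate"},
]

def timeline_html(events):
    """Render events as a chronological HTML list."""
    items = "\n".join(
        f'  <li><time datetime="{e["date"].isoformat()}">'
        f'{e["date"]:%d %b %Y}</time> {e["headline"]}</li>'
        for e in sorted(events, key=lambda e: e["date"])
    )
    return f'<ol class="timeline">\n{items}\n</ol>'

print(timeline_html(events))
```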

I think this is really important for journalists to understand now. They have to think about telling stories in formats other than just the article. Journalist-programmer ninja Adrian Holovaty has outlined a number of ways that stories can be re-imagined and enhanced with structured data. News has to move on from the point where the smallest divisible element of news is the article. News organisations are adding semantic information such as tags, as we have at the Guardian.

But beyond that, we have to think of other ways to present information and tell stories. As more journalists shift from being focused solely on the print platform to multi-platform journalism, one of the most pressing needs is to raise awareness of these alternative story-telling elements. Journalists outside of the development departments and computer-assisted reporting units need to gather the data around a story. It needs to become an integral part of newsgathering. If a department inside your organisation is responsible for gathering this data, its data library needs to be made accessible and easily searchable by journalists. If that sounds daunting, especially for small shops, then use Google Docs as an interim solution. This is also an area ripe with opportunities for cooperation between universities and news organisations.

Eric gave one example of this non-story-centric model for news. “We did a three-way mashup”, he said. They brought together the computer-assisted reporting team, the graphics team and Eric’s team.

They worked with a reporter on the City desk who wanted to chronicle every homicide in LA County; in 2007, there were 800 murders. She did the reporting in a blog format. It might not have been the best format, but it was easy to set up, and she started building up a repository of information. “I was begging people to get the tech resources to build a database,” Eric said. “We built a database on top of the blog. We took data from the County Coroner. We took gender, race and age and put it in a database which was cross-linked to the blog. We added a map. You could filter based on age or race on the map.” The result was two things: a way to look at the data in aggregate, and a way to drill down through the interface to the individual record. They took public data, original reporting and contributions from users.
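
The payoff of structuring the data this way is that the aggregate view and the drill-down fall out of the same records. Here is a hedged sketch with invented fields and records (not the Times’ actual schema):

```python
# Invented records in the spirit of the homicide database: public coroner
# data (gender, race, age) cross-linked to the reporter's blog posts.
homicides = [
    {"age": 19, "gender": "male", "race": "Latino",
     "lat": 34.0522, "lon": -118.2437, "post": "blog-entry-101"},
    {"age": 34, "gender": "female", "race": "black",
     "lat": 33.9416, "lon": -118.4085, "post": "blog-entry-102"},
]

def drill_down(records, **criteria):
    """Keep records matching every given field, e.g. gender='male'."""
    return [r for r in records
            if all(r.get(k) == v for k, v in criteria.items())]

# Aggregate view for the map: plot every record matching the filter.
for r in drill_down(homicides, gender="male"):
    print(r["lat"], r["lon"], r["post"])
```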

“One of the things that is challenging is getting the IT side to understand what it is actually that you do,” he said. There are probably more tech people who are interested in journalism than there are journalists who are able and willing to learn the intricacies of programming.

When the floor was opened to questions, I wasn’t surprised that this one came up.

Question: Could the LATimes get rid of the print edition and remain profitable?
Answer: No. Revenue from online roughly covers the cost of newsroom salaries, but not benefits, and not the ad staff. I don’t think he was saying that the LATimes had figured it out; he had been saying that for some time before he said it publicly, and it was for morale. He was saying that it is not inconceivable for the website to pay its way in the future.
“There is a point where this cycle ends of cutting staff and cutting newshole,” he said.

UPDATE: You can see the presentation on SlideShare.

Commenting on public documents

I was impressed by the Writetoreply.org idea of posting the Digital Britain interim report on a CommentPress installation to allow people to comment on it. You can read some of the background to the project from Tony Hirst, who flagged this up on the BBC Backstage list. It ticks a lot of public service boxes for me, and I think this is something that journalism organisations could and should do. Hats off to Joss Winn for putting this together.

This is just the latest example of posting public documents for public comment. Gavin Bell did this with the European Constitution, and the Free Software Foundation hosted an amazing project that allowed people to comment on version 3 of the GNU General Public License. A heatmap showed, down to the word level, the parts of the document generating the most comment, and it had a very intuitive interface.

Out of the GPL project grew a service called Co-ment. I was able to grab a copy of the report, convert it to RTF and upload it. The basic level of service only allows 20 people to comment, and this is just my cut-and-paste proof of concept. If you’d like to comment, drop me an e-mail and I’ll add you to the list, bearing in mind that I only have 20 slots available. But the public service journo-geek in me loves stuff like this.

Have a play. I’d like to see how this works. It’s already got a lot of ideas flowing.

Washington Post’s TimeSpace feature for Obama’s inauguration

I’ve been blogging about some of the new ways that news websites covered the inauguration and collaborated with their audiences. One interesting presentation was a feature called TimeSpace at the Washington Post. Simon Willison and I took a look at the guts of it, and it appears that they have built a platform that will allow them to quickly build features like this in the future.

TimeSpace is one of those things where I do wonder who will use it and whether it was promoted enough on the front page. I like it, but I am always keen to see whether something like this appeals to a wider audience. I do believe that the geo-tagging elements they have added have a wider application. The inauguration really highlighted a lot of new ways of covering a historic event. I’ll be interested to see which projects were one-offs and which have staying power.

Obama’s Transition team talks Technology, Innovation and Government Reform

President Barack Obama has already begun his first day in office, but it’s interesting to look back at his transition, which has won praise in Washington as one of the most organised and disciplined in history. During his transition, he launched a website, change.gov, not only to outline his policies but also to seek input. This video is worth watching. Technology and change are challenging for any organisation, whether you’re in the business of governing a country or running a news service. Obama’s Technology, Innovation and Government Reform team talk about how they faced these challenges.

How to run a news organisation in a down economy

The year has started out with more hand-wringing about the predictable (and predicted), but very dire, economic situation of newspapers, particularly in the US. News organisations’ belief that quality will be their saviour is usually the result of projecting their own information consumption patterns and quality standards onto their audience, motivations that their audience may or may not share.

Newspapers are not maintaining the audiences necessary to support their current costs. Steve Yelvington just wrote this post about the bad news for newspapers and rays of hope, which is a comment that he left on Jeff Jarvis’ list of newspaper bad news from 2008:

At the core, it’s not an advertising problem. Local businesses still need to reach potential local customers, and they’re willing (although certainly not eager) to pay for results.

It’s primarily a failure to attract and retain a commercially relevant audience that’s breaking the newspaper business model.

I agree with Steve that multi-platform, multi-revenue stream businesses are necessary to survive, and many publications are in the process of the wrenching change required to achieve that.

But there is another, equally important, way to make the necessary change for news organisations looking to survive in this very challenging economic environment, and that is to disrupt their own costs (and I don’t mean cutting head count even further). While some blame digital technology and the internet for the death of newspapers, I would argue that embracing disruptive digital technology could lead to substantial cost savings.

Off-the-shelf prosumer gear straddles the line between consumer and professional kit but costs substantially less. Open-source software can extend the life of ageing computers in the office, run the servers and handle most CMS functions. Open-source content-management systems might not be ready for the largest sites, but most small- to medium-sized news sites could easily use Drupal or WordPress for their entire site. In the hands of a competent contractor, with occasional tweaking from a third-party vendor, such a site will easily cope with moderate traffic.

I even think there is a possible radical model in which a small office handles core administrative and sales functions but the journalists are by and large dispersed, telecommuting as much as possible. They would work as close to the story and their sources as possible and file remotely. They can use Skype or IM to communicate with their managers, and the Twitter-like service Yammer to keep in touch with each other and help prevent a sense of isolation. Maybe I’m advocating this because, as a journalist who worked in a foreign bureau and was often out in the field for several years, this type of working seems natural to me.

A lot of successful digital content businesses already work on this model, and I think that we’ll see more competition in this space from within the industry. In this downturn, digital outcasts made redundant by traditional news organisations will start their own boot-strapped news organisations, potentially pushing many of their former employers to the wall, unless the incumbents radically, not incrementally, remake themselves. It is only a matter of time. The digital disrupters will run very lean, digitally-focused businesses with multiple revenue streams, as Steve suggests.

For a model of the thinking that will drive this type of business, look to Eric Ries’ post on Mashable, HOW TO: Raise Money in a Down Economy. He serves as a venture advisor for Kleiner Perkins Caufield & Byers and talks about trying to raise money for a venture in 2004, when scepticism remained after the dot-com crash. His advice is:

The most important thing you can do to improve your chances of raising money in a down economy is to build a great company. A great startup is more than just a miniature version of a great large company. All of its process should be focused on innovating and learning. Today, it’s possible to use a combination of free and open source software, community-generated content, and agile software development to bring new products to market with extremely low cost.

Add professionally created and curated content and apply this model for an innovation-led business, and you’ll find a way out of this perfect storm affecting the newspaper industry. It’s eerily similar to the Newspaper Next project recommendations for good reason.

However, I ask those of you toiling in the industry right now:

  • Is your company focused on learning and innovation?
  • To what extent is your company using free and open-source software?
  • Is your company focused on delivering information while cutting costs?
  • Is your company looking for new ways to partner with and build new relationships with your audience?

Cutting costs doesn’t have to happen only through job cuts. Companies need to empower their people to work smarter, spend money more wisely, and focus on doing more with less. There are many ways to achieve this, and I think we’ll see experimentation and innovation this year as the economic crisis deepens. Necessity will be the mother of re-invention.

“OK open systems beat great closed systems every time”

The title of this post is a quote, via Steve Yelvington, from Prodigy’s Vice President of marketing around the time that the Web arrived and changed the online game. Usually I just reference links like this in Delicious, but Steve’s post Early to the game but late to learn how to play needs a little more attention.

In the current business climate for newspapers, Steve brings a wealth of experience and history that few folks in the industry have, and, as he points out, it is not that newspapers didn’t try to adapt but that they tried to adapt the web to their existing business rather than adapting for the web. Newspapers tried to keep their closed systems as they moved online, locking their content in online services. The web might have arrived ‘pathetic and weak’, but it was ‘open and extensible’, says Steve, and it eventually buried online services like Prodigy, CompuServe and even AOL. He quotes Jack Shafer from a Slate piece titled “How Newspapers Tried to Invent the Web”:

From the beginning, newspapers sought to invent the Web in their own image by repurposing the copy, values, and temperament found in their ink-and-paper editions.

I’ve long fought against the re-purposing reflex of shovelware, mindlessly slapping content from another medium onto the web. As we move to integrated newsrooms, we’re often still treating the web as just another distribution channel that simply has to be optimised for Google. Here is why it isn’t. To quote Steve:

Many of us who were there at the time knew that human interaction, not newspaper reading, would be the most powerful motivator of online usage. Certainly I knew it; I had run a dialup bulletin board for years as a hobby. But as hundreds of newspapers rushed to “go online,” few even bothered to ask basic questions about content strategy. It was, many declared as if they were saying something wise, “just another edition.”

But it’s not.

If human interaction is the ‘killer app’ of the internet, and I agree with Steve that it is, how does this make a news site different? It is only in the so-called Web 2.0 era that we have finally started adding social elements to our news web sites. And if human interaction is the primary motivator of online usage, can we as journalists fail to interact and still hope to remain relevant? Open systems are not just about a choice of technology. The philosophy of open systems is also about how we use technology: open is a philosophy that drives us to use technology to bolster human interaction. It is why Steve describes the mission of his news sites as increasing the social capital in the communities Morris serves.

Jay Rosen has been doing a lot of thinking about closed versus open editorial systems, and he characterised this comment as one of his clearest comparisons yet of the two systems:

The strength of a closed system is that it has controls, in the same sense that an accounting system puts controls in place. Stories are assigned, reported, edited and checked (copy edited) by a team using a protocol, or newsroom standard. These are the hallmarks of the closed system. The controls create the reliability, right?

Versus:

Open systems take advantage of cheap production tools and the magic distribution system of the Web. This leads to a flood of “cheap” production in the blogosphere, some of which is valuable and worth distributing in wider rings, much of which is not. Thus, a characteristic means of creating value online is what I called the intelligent filter to do that sorting and choosing.

If you look at successful open systems, they don’t try to prevent “bad,” unreliable or low quality stories from being created or published. They don’t try to prevent the scurrilous. But the Los Angeles Times would. Typically, successful sites within open systems “filter the best stuff to the front page.” And this is how they try to become reliable, despite the fact that anyone can sign up and post rants.

That way of creating trust (or reliability) is different than the way a closed system–like the health team at Time magazine–does it. Therefore the ethics will be different.

And he talks about hybrid systems, which is where I think some of the most interesting work is going on. We live in an AND world not an OR world, and I fear sometimes journalists’ tendency to paint the world in black and white infects our approach to our own way of working.

For me, I don’t use technology simply because I’m neophilic. I use it because it helps me do better journalism, in a way that is more useful to the people in my network, or, as Jay Rosen says, the people formerly known as the audience. The internet as an open system means that my methods aren’t a fixed destination but an ever-evolving, extensible process that adapts as the network changes, whether I conceive of the network in terms of the technology or the people I’m interacting with. Through all this, my core journalistic values and ethics haven’t changed. That’s the constant.

I’m feeling a little philosophical at the start of the New Year. I am an online journalist. If the road trip I took for the US elections reminded me of anything, it reminded me of the power of networked journalism, which, in terms of both the technology and the human connectedness, increases almost constantly. Just look at the expanding reach of mobile phones and data. In 1999, I got my first mobile modem and started to be freed from my desk. It ran at 9,600 baud, slow even then. In 2009, I used a DSL-class mobile network card, and when I was on the move, I used a Nokia N82, which, like the iPhone and BlackBerry, allowed me to continue to use key internet services like Twitter, Flickr and Facebook. The network is not only mobile, it is on my mobile.

Open systems are a huge opportunity for journalists, not a threat to our professional livelihoods. We journalists don’t have to limit ourselves to closed systems; we have a vast range of open systems that can support and improve our work. I know that 2008 ended with a lot of anxiety for many journalists, much of it from a sense that our professional lives were out of our control. But by embracing the network, you can start taking back control of your professional destiny.

Wish list for better tools for journalism

I still like Twhirl for my personal Twittering and Twibble for my mobile Twittering, but I think TweetDeck is a stellar tool for Twitter power users, including journalists. I keep it open on my desktop and occasionally look at the tag cloud from TwitScoop. Recently, I saw ‘Bethesda’ pop up in huge type on the tag cloud, and I was baffled as to why this Washington DC suburb should be spiking on Twitter. The tweets linked to the story of a huge water main break in Bethesda, 20 minutes before it aired on British TV news networks.

When I showed TweetDeck to one of our news bloggers here at the Guardian, he said he wished that the news wires worked like that.

  • Why don’t we have a tag cloud showing rising stories in wire feeds? (A rough sketch of the idea follows this list.)
  • Why don’t we create our own in-house Adobe AIR apps that automatically aggregate based on those tags from social media sources?
  • Why aren’t our publishing tools as fast and user-friendly as blogging tools?
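
As a sketch of the first wish: detecting ‘rising’ terms is mostly counting. Compare word frequencies in the latest wire copy against a longer baseline and surface the spikes. The snippets and scoring below are invented for illustration.

```python
import re
from collections import Counter

def terms(text):
    """Crude tokeniser: lowercase words of four or more letters."""
    return re.findall(r"[a-z]{4,}", text.lower())

def rising_terms(recent_items, baseline_items, top=10):
    """Terms spiking in recent wire copy relative to a longer baseline."""
    recent = Counter(t for item in recent_items for t in terms(item))
    baseline = Counter(t for item in baseline_items for t in terms(item))
    # Score each term by how much its recent count exceeds its usual rate.
    scores = {t: count / (1 + baseline[t]) for t, count in recent.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top]

# Invented wire snippets: 'bethesda' tops the list here.
recent = ["Water main break floods Bethesda", "Bethesda rescue under way"]
baseline = ["Markets fall again", "Senate debates stimulus package"]
print(rising_terms(recent, baseline))
```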

In 2009, I see almost endless opportunities to use third-party sites, applications and services to do social media journalism. My wish list will drive the apps and services I use. What’s your wish list for 2009? What tool do you use outside of your office that you wish you had inside your newsroom to do journalism?