Two projects to watch: Ben Franklin Project and TBD.com

TBD.com's Near You zip code-based news filter

At 4:28 am in Washington DC, a new news site, TBD.com, launched, and it is definitely one worth watching. Why? They have assembled an all-star staff, brimming with passion. The general manager for the project is Jim Brady, the former executive editor and vice president of Washington Post Newsweek Interactive. Steve Buttry, the site’s head of community engagement, has a long history in traditional journalism, training and innovation. (For any journalist struggling to come to terms with the unrequited love you feel for the business, read the post that Mimi Johnson, Steve’s wife, wrote as he left the newspaper business to go all digital at TBD.) They also have some great staff whom I have ‘met’ via Twitter, including networked journalists Daniel Victor and Jeff Sonderman.

When he was hired, Jeff described his job as a community host like this:

developing ways to work with bloggers and users to generate, share and discuss content.

He described TBD.com like this:

Our goal is to build an online news site for the DC metro area, and do it taking full advantage of how the web works — with partnership not competition, users not readers, conversation not dictation, linking not duplicating.

If you look on Twitter this morning, Jeff and Steve are very busy on their first full day as hosts for the new news service.

Digitally native at launch

The site is clean and clear, easy to navigate, with a lot of excellent touches. TBD.com launched with an Android app, and its iPhone application is awaiting approval. The zip (post) code news filter, which surfaces content not only from TBD but also from bloggers in the area, is excellent. I lived in Washington from 1998 until 2005 as the Washington correspondent of BBCNews.com, so I know the city well. I typed in my old home zip code, 20010, and got news about Mount Pleasant, including from a blog called The 42 Bus, named after the bus that I used to take to work every day. Their live traffic information is a template for how city sites should add value with such bread-and-butter news. You can quickly pull up a map showing traffic choke points in the area, and there is even a tool to plot your best travel route. The traffic tools are pulled from existing services, but the value is in the package. A filter like Near You might work something like the sketch below.
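Here is a minimal sketch, in Python, of how a zip-code filter like Near You might work. The data structure and the sample stories are my illustrative assumptions, not TBD's actual implementation; the point is that one index can serve both staff stories and partner blogs.

```python
from collections import defaultdict

# Each story carries the zip codes it is relevant to, whether it comes
# from TBD staff or from a partner blog such as The 42 Bus.
stories = [
    {"title": "Mount Pleasant farmers market returns", "source": "The 42 Bus", "zips": {"20010"}},
    {"title": "Metro delays on the Green Line", "source": "TBD", "zips": {"20010", "20001"}},
    {"title": "Georgetown waterfront reopens", "source": "TBD", "zips": {"20007"}},
]

# Build an index from zip code to stories so lookups are cheap.
index = defaultdict(list)
for story in stories:
    for z in story["zips"]:
        index[z].append(story)

def near_you(zip_code):
    """Return every story, staff or partner, tagged with the given zip code."""
    return index.get(zip_code, [])

for story in near_you("20010"):
    print(f'{story["title"]} ({story["source"]})')
```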

They had a launch event last week, where they explained their networked journalism strategy. Steve Myers at the Poynter Institute said half of the links at TBD.com would point to external sources, a much higher proportion than at most sites. At launch, 127 local bloggers had joined their network. Myers had this quote from Steve Buttry about their linking strategy:

“If we’re competing on the same story, we’ll do our story and we’ll link to yours,” said Steve Buttry, director of community engagement for the site. If another source owns a big story, “we’ll play you at the top of the home page and we’ll cover something else with our staff resources.”

Wow. Personally, I think that this is smart. With resources declining at most news organisations, they have to be much more strategic about how they use their staff. They need to focus on the value that they add. Jeff Jarvis says: “Cover what you do best and link to the rest”, and this is one of the highest profile tests of that strategy.

Ken Doctor, the brilliant news industry analyst at Newsonomics, has 10 reasons to watch TBD.com. Harvard’s Nieman Journalism Lab has another six reasons why they are watching the launch. Of Ken’s list, I’ll highlight two. First, bucking the trend among many new high-profile news projects in the US, this is a for-profit business. And Ken’s seventh point is huge:

7) It’s got a big established sales force to get it going. Both TV stations’ salespeople come with accounts — and relationships. So TBD is an extension of that sales activity, not a start-up ad sell, which bedevils many other start-ups.

The other thing that TBD.com has going for it is the commitment of someone who has already seen some success with new models: Robert Allbritton. A few years ago, he launched Politico.com, bringing in two high-profile veterans from the Washington Post to compete not only with their old newspaper but also with specialist political outlets like Roll Call. Politico has managed to create a successful print-web product, “not profitable every quarter but says it’s turning a profit for any given six months,” Allbritton told paidContent.org. What is more important, though, is his commitment to his ventures. He’s got the money and commitment to support projects past the short term.

“The first year of Politico was pretty ugly in terms of revenue,” he admitted. “You’ve got to have some staying power for these things to work.”

The Ben Franklin Project

The other project that I’m watching is John Paton’s Ben Franklin Project at the Journal Register Company. What is it?

The Journal Register Company’s Ben Franklin Project is an opportunity to re-imagine the newsgathering process with the focus on Digital First and Print Last. Using only free tools found on the Internet, the project will – from assigning to editing – create, publish and distribute news content on both the web and in print.

Succinctly, this company is looking to disrupt its own business. Instead of attacking costs by cutting more staff, they are looking to cut the cost of their own production by using free tools. It’s not something that every organisation could do, but the scale involved, 18 daily newspapers and 150 non-daily local publications, shows the ambition of the project. This is not a tiny organisation.

In practice, the organisation set the goal for all 18 of its newspapers to publish online and in print using free online and free open-source tools, such as the Scribus desktop publishing application. They are also pursuing the same kind of community engagement, networked journalism strategy that is at the heart of TBD.com.

On 4 July 2010, Independence Day in the US, they published their 18 daily newspapers and websites using only free tools and crowdsourced journalism. Jon Cooper, Vice President of Content at the Journal Register Company, wrote:

Today — July 4, 2010 — marks not only Journal Register Company’s independence from the costly proprietary systems that have long restricted newspapers and news companies alike. Today also marks the start of a revolution. Today marks the beginning of a new path for media companies whose employees are willing to shape their own future.

This is just part of Paton’s turnaround strategy for the Journal Register Company. And in 2010, which is proving to be another tough year for the US economy (especially in some of the areas the company covers), Paton has just announced that the company is 15% ahead of its revenue goals. He said:

Our goal is to pay out an extra week’s pay this year to all employees for hitting our annual target of $40 Million.

That is an amazing investment in journalists and an incentive for them to embrace the disruptive change he is advocating. It is heartening to see journalists engaged in and benefitting from change in the industry.

With all the talk about innovation in journalism, it is rare to see projects launch with such clear ambitions. After a lot of talk in the industry, we’ll now see what is possible.

APIs helping journalism “scale up”

A couple of days ago, I quoted AOL CEO Tim Armstrong on developing tools to help journalists “scale up” what they do. In a post on Poynter’s E-Media Tidbits, Megan Garber has highlighted a good practical example of what I meant.

One way that computers and other technology can help journalists work more efficiently is by cutting down on or eliminating frequent, repetitive tasks. Derek Willis at the New York Times talks about APIs, which he describes as “just a Web application delivering data”. Derek says:

The flexibility and convenience that the APIs provide make it easier to cut down on repetitive manual work and bring new ideas to fruition. Other news organizations can do the same.

Derek also points out how savvy use of data is not just good for data visualisations and infographics; it is also an excellent resource for the New York Times’ own journalists.

So if you have a big local election coming up, having an API for candidate summary data makes it easier to do a quick-and-dirty internal site for reporters and editors to browse, but also gives graphics folks a way to pull in the latest data without having to ask for a spreadsheet.

And as he said, the biggest consumer of New York Times APIs is the New York Times itself.
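To make that concrete, here is a hedged sketch of the pattern Derek describes. The endpoint URL and field names below are hypothetical, not an actual New York Times API; the point is that one small function can feed an internal browsing site for reporters and the graphics desk alike, with no spreadsheet emails in between.

```python
import json
from urllib.request import urlopen

# Hypothetical candidate-summary endpoint; swap in a real API.
API_URL = "https://example.com/api/candidates?election=2010-local"

def fetch_candidate_summaries(url=API_URL):
    """Fetch the latest candidate summary data as a list of dicts."""
    with urlopen(url) as response:
        return json.load(response)

if __name__ == "__main__":
    # Reporters, internal tools and graphics code all reuse this one call
    # instead of repeatedly asking someone for a spreadsheet.
    for candidate in fetch_candidate_summaries():
        print(candidate.get("name"), candidate.get("office"), candidate.get("total_raised"))
```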

Projects such as building an API can be quite large (although new companies, as well as organisations like the Sunlight Foundation in the US and MySociety in the UK, have great public service APIs and data projects), but the benefits to audiences, designers, developers and journalists make it easier to justify the time and effort.

Opportunities from the data deluge

There are huge opportunities for journalism and data. However, to take advantage of these opportunities, it will take not only a major rethinking of the editorial and commercial strategies that underpin current journalism organisations but also a major retooling. Apart from a few business news organisations such as Dow Jones, The Economist and Thomson Reuters, there really aren’t that many general interest news organisations that have this competency. Most smaller organisations won’t be able to afford it on an individual level, but that leaves room for a number of companies to provide services for this space.

Neil Perkin outlines the challenge and the opportunity in a wonderful column that he’s cross-posted from Marketing Week. (Tip of the blogging hat to Adam Tinworth, who flagged this up on Twitter and on his blog.) In our advanced information economies, we’re generating exabytes of data. While we’re just getting used to terabyte disk drives, this is an exabyte:

1 EB = 1,000,000,000,000,000,000 B = 10^18 bytes = 1 billion gigabytes = 1 million terabytes
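Those conversions are easy to sanity-check with a few lines of Python, a quick back-of-the-envelope sketch of the scale involved:

```python
EB = 10**18  # bytes in an exabyte
TB = 10**12  # bytes in a terabyte
GB = 10**9   # bytes in a gigabyte

five_exabytes = 5 * EB
print(five_exabytes // TB)  # 5,000,000 -- five million 1 TB drives
print(five_exabytes // GB)  # 5,000,000,000 -- five billion gigabytes
```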

To put this in perspective, I’ll use an oft-quoted practical example from Caltech researcher Roy Williams. All the words ever spoken by human beings could be stored in about 5 exabytes. Neil quotes Google CEO Eric Schmidt to show the challenge (and opportunity) that the data deluge is creating:

Between the dawn of civilisation and 2003, five exabytes of information were created. In the last two days, five exabytes of information have been created, and that rate is accelerating.

That all the words spoken since the dawn of language fit in five exabytes, and that we now create the same amount every two days, helps illustrate the acceleration of information creation. Those mind-melting numbers wash over most people, especially in our arithmophobic societies. However, there is a huge opportunity here, which Neil states as this:

The upside of the data explosion is that the more of it there is, the better digital based services can get at delivering personal value.

And journalists can and definitely should play a role in helping make sense of this. However, we’re going to have to overcome not only the tyranny of chronology but also the tyranny of narrative, especially narratives that privilege anecdote over data. Too often, to sell stories, we focus on outliers because they shock, not because they are in any way representative of reality.

From a process point of view, journalists are going to need to start getting smarter about data. I think data crunching services will be one way that journalism organisations can subsidise the public service mission that they fulfil, but as I have said, it’s a capacity that will need to be built up.

Helping journalists ‘scale up what they do’

It’s not just raw data-crunching that needs to improve; we’re also starting to see a lot of early semantic tools that will help more traditional narrative-driven journalists do their jobs. In discussing how he wanted to help journalists at AOL overcome their technophobia, CEO Tim Armstrong explained why these tools were necessary: journalists have not been included in corporate technology upgrades (and are often not included in the creation of tools for their work). Armstrong said at a conference in June:

Journalists I met were often the only people in the room who never had access to a lot of info, except what they already knew.

It’s not technology for technology’s sake but tools to open up more information and help them make sense of it. Other industries have often implemented data tools to help them do their jobs, but it’s rare in journalism (outside of computer-assisted reporting or database journalism circles). Armstrong said:

You can pretty much go to any professional industry, and there’s some piece of data system that helps people scale what they do.

Journalists are being asked to do more with less as cuts go deep in newsrooms, and we’re going to have to work smarter because I know that there are some journalists now working to the breaking point.

There have been times in the last few years when I was testing the limits of my endurance. Last summer, filling in behind my colleague Jemima Kiss, I was working from 7 am until 11 pm five days a week, and then usually five or six hours on the weekends. I could do it for a while because it was a limited 10-week assignment, but even for 10 weeks, it was limiting the amount of time I had with my wife and was negatively affecting my health.

I’m doing a lot of thinking about services that can help journalists deal with masses of information and also help audiences more easily put stories into context. We’re going to need new tools and techniques for this new period in the age of information. The opportunities are there. Linked data and tools to analyse, sort and contextualise will lead to a new revolution in news and information services. Several companies are already in this space, but we’re just at the beginning of this revolution. We live in exciting times.

Learning from a failed journalism project

I want to applaud Jen Lee Reeves, who wrote at PBS’ MediaShift blog about the mistakes she made on a journalism project for the 2008 elections in the US. It’s a brave thing to do, and her courage flags up a number of mistakes that are common to journalism projects, including a few that I have made myself.

She is an associate professor at the Missouri School of Journalism and a new media director at the university-owned NBC affiliate, KOMU-TV. For the elections, she had an ambitious idea to bring together the coverage of several different outlets “to make it easier for news consumers to learn about their candidates leading up to election day”. She would complete the project during a fellowship at the Reynolds Journalism Institute at the University of Missouri.

For the 2006 mid-term elections in the US, she had done something similar, but the site had been hand-coded. (I’m assuming she means that there was no content management system.) She realised that this would be too cumbersome, so in 2008 she opted for a “hand-built” site created by students with her oversight. Technically, she was moving in the right direction: the site took in RSS feeds from the participating news organisations, and web managers simply had to tag the content so that it appeared in relation to the right candidate and election (a sketch of this approach follows below). However, while the site was easier for the news organisations to use, it still wasn’t clear enough for the audience. She said:

Unfortunately, our site was not simple. It was not clean and it was hand built by students with my oversight. It did not have a welcoming user experience. It did not encourage participation. I had a vision, but I lacked the technical ability to create a user-friendly site. I figured the content would rule and people would come to it. Not a great assumption.

Back in 2008, I still had old-school thoughts in my head. I thought media could lead the masses by informing voters who were hungry for details about candidates. I thought a project’s content was more important than user experience. I thought I knew what I was talking about.

She goes on to list the assumptions that she had about the audience, assumptions which proved false and which she believes doomed the project to failure. Go to her post and read them. She is grateful that she had the opportunity to experiment and make mistakes during her fellowship, an opportunity that she says she wouldn’t have had while in charge of a newsroom.
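For what it’s worth, the RSS-plus-tagging approach she describes is straightforward to sketch. This is a minimal illustration using the feedparser library; the feed URLs and the manual tagging workflow are my assumptions, not her students’ actual code.

```python
import feedparser  # pip install feedparser

# Hypothetical feeds from the participating news organisations.
FEEDS = [
    "https://example-newspaper.com/politics/rss",
    "https://example-tv-station.com/elections/rss",
]

def pull_entries(feed_urls):
    """Fetch all entries from the partner feeds into one pool."""
    entries = []
    for url in feed_urls:
        for entry in feedparser.parse(url).entries:
            entries.append({"title": entry.get("title", ""),
                            "link": entry.get("link", ""),
                            "tags": set()})  # filled in by hand by a web manager
    return entries

def stories_for(candidate, entries):
    """Return the entries a web manager has tagged with this candidate."""
    return [e for e in entries if candidate in e["tags"]]
```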

If we’re paralysed by fear of failure, we’ll never do anything new. It’s not failure that we should fear but rather the inability to learn from our mistakes. For big projects like this, it’s really important to have a proper debrief. Free services on the web can bring down the cost of experimentation, and by testing what works and what doesn’t, we can not only learn from our mistakes but also make sure that we take best practices to our next project.

KPMG: UK readers far less willing to pay for digital content

Normally, I’d just add this link to Delicious, but the data is worth highlighting. KPMG has found that 81% of UK consumers “would go elsewhere for content if a previously free site we use frequently began charging”. Only 19% would be willing to pay in the UK, while globally (the same research looked at consumer behaviour in a range of countries) 43% of consumers are willing to pay for digital content.

However, there are possibilities for publishers: “almost three quarters of UK consumers are willing to receive online ads in exchange for lower content costs”. Consumers are also more willing to have data collected about them if it results in lower content costs: “48 percent of UK consumers would be willing to accept profile tracking, up from 35 percent in the 2008 survey.” Publishers and marketers need to take care, though, as 90% of consumers also expressed concern about their privacy and security online. That is high, although a slight reduction from the figures of 18 months ago.

A key finding from the report shows how consumers would like to balance privacy and targeted advertising. Tudor Aw, Head of Technology, KPMG Europe LLP, said:

(UK consumers) do see the value in allowing service providers to have access to the information necessary for more tailored services, but they are only prepared to do this if the risks are controlled and, crucially, if there is some value in it for them.

The research is well worth a look, especially for those whose revenue strategies are tied to advertising, but also for any business looking to deliver better targeted services to their customers through better use of data.

Will publishers rally to Google’s Newspass?

Matthew Buckland has a great guest post on Silicon Valley Watcher looking at Google’s Newspass payment system for publishers. (It’s cross-posted from memeburn.com.) Buckland compares the value proposition for users of Google’s system with that of the system Rupert Murdoch has instituted at The Times. He likens Newspass to a cable television subscription, in which a consumer makes a one-off, predictable payment to receive a package of content each month. He says:

Take the analogy of satellite TV. You pay once and you get a bouquet of hundreds of channels. The transaction is simple and easy. You know you’re getting good value for money too because there is an economy-of-scale effect at work. Now imagine another scenario: What if you have to pay individually for each TV channel and go through the effort, time and extra cost to do so. It’s a no brainer really.

Of course from the consumer’s point of view, it makes a lot more sense to pay once for a bundle of content rather than paying subs to several different providers or micro-payments for individual pieces of content. However, if newspaper groups had the rationality to think about creating value propositions for their readers, they might have spared themselves the mess that they are in.

The big question, as Matthew highlights, is whether a significant number of publishers will choose to join Newspass or create their own payment system. I’m not sure that such a payment system would be possible in all jurisdictions under competition/anti-trust law. That notwithstanding, knowing publishers, I would expect them to lobby for a relaxation of anti-competition laws in their own countries to make such a system possible rather than partner with Google, with which, as Matthew rightly points out, they have a love-hate relationship. I’d say that it’s bordering on hate-hate these days, but that’s a matter of interpretation.

Matthew sees Google as a “dispassionate third party”, but with the egos in publishing and the ‘not invented here’ bias, I’m not sure that publishers see Google as dispassionate or without skin in the game. Murdoch and his lieutenants, though possibly an extreme example, refer to Google as a “parasite”. For publishers to partner with the likes of Google, they would have to see past their almost self-destructively hyper-competitive natures and accept that some loss of advantage was worth new revenue streams. In fact, I could see them being more open to partnering with another company just in an attempt to screw Google. Despite the existential threat facing some newspapers and newspaper groups, I’m not sure that they have seen the light, by which I mean the light some reportedly see during near-death experiences.

The value of data for readers and the newsroom

When I was at the BBC, a very smart producer, Gill Parker, approached me about pulling together the massive amount of data and information that she and Frank Gardner were collecting as they tried to unravel the events that led to the 11 September 2001 attacks in the US. Not only had Gill worked on the BBC’s flagship current affairs programme Newsnight and on ABC’s Nightline in the US, she had also worked in the technology industry. They were interviewing law enforcement and security sources all around the world and collecting masses of information, all of it in Microsoft Word files. She knew that they needed something more to help them connect the dots, and speaking with me in Washington, where I was working as BBCNews.com’s Washington correspondent at the time, she asked if I could help her get some database support.

I thought it was a great idea. My view was that by helping her organise all of the information that they were collecting, the News website could use the resulting database to develop info-graphics and other interactives that would help our audience better understand the complex story. We could help show relationships between all of the main actors in al Qaeda as well as walk people through an interactive timeline of events. I had a vision of displaying the information on a globe. People could move through time and see various events with key actors in the story. This was a bit beyond the technology of the time. Google Earth was still a few years away, and it would have required significant development for some of the visualisations. However, on a story like this, I thought we could justify the effort, and frankly, we didn’t need to go that far. Bottom line: Organising the data would have huge benefits for BBC journalists and also for our audiences.

Unfortunately, it was the beginning of several years of cuts at the BBC, and the News website was coming under pressure. It was beyond the scope of what I had time to do or could do in my position, and I was told that the website had no database developers who could be spared.

A few years later, as Google Earth developed, Declan Butler at Nature used data on the global spread of the H5N1 virus to achieve something like the vision I had of showing events over time and distance.

It is great to see my friend and former Guardian colleague Simon Rogers move this thinking forward, treating data as a resource both internally, to help journalists, and externally, to help explain a complex story, in his work on the WikiLeaks War Logs story. Simon wrote about it on the Guardian Datablog:

we needed to make the data easier to use for our team of investigative reporters: David Leigh, Nick Davies, Declan Walsh, Simon Tisdall, Richard Norton-Taylor. We also wanted to make it simpler to access key information for you, out there in the real world – as clear and open as we could make it.

When I was the digital research editor at The Guardian (before I left this March to pursue my own projects), data was key to many of my ideas. I even thought that data could become a source of revenue for The Guardian. Data and analysis are something that people are willing to pay for. Ben Ayers, the head of social media and community at ITV.com (speaking for himself, not for ITV), said to me on Twitter:

Brilliant. I’d pay for that stuff. Surely the kind of value that could be, er, charged for. Just sayin’ … just an example of where, if people expect great interpretation of data as part of the package, the Guardian could charge subs

As I replied to Ben, I wouldn’t advocate charging for the War Logs data, but I would suggest charging for data about media, business and sports. That could become an important source of income to help subsidise the cost of investigations like the War Logs. Data wrangling can be time-intensive, as I know from my experience in developing the media job cuts series that I wrote at the end of 2009 for The Guardian. However, the data can be a great resource for journalists writing stories as well as for developing interactive graphics like the media job cuts map or the IED attack map for the War Logs story. Data drives traffic, as the Texas Tribune in the US has found, and I believe that certain datasets could be developed into new commercial products for news organisations.

Annenberg-Oxford Summer Institute: Continuing the Conversation

A couple of years ago, I spoke at the Oxford Internet Institute, and after my talk, the conversation carried on via Strange Attractor and the blogs written by some of the students there. I went back to Oxford today to talk about social media, journalism and broader media trends with the very international group of “scholars and regulators” at the Annenberg-Oxford Summer Institute.

As I did after my talk a few years ago at the OII, I’ll follow up on some questions that came after this talk and some that came in via Twitter.

Does participatory media make public service media obsolete?

I met Shawn Powers at the Al Jazeera Forum in Doha in May, and he invited me to give a talk at the institute. After my talk, he highlighted what he thought was a contradiction in my presentation, which he thought could be interpreted as supporting James Murdoch’s attack on the BBC. Not to over-simplify his point, but with all of the examples I gave of people creating their own media, Shawn wondered if I was making the point that British society no longer needed a public broadcaster like the BBC.

It never really occurred to me that my presentation could be interpreted like this, because four years after I left the BBC, I value public service media even more than when I was working there. Most of the examples I talk about in my presentation (a version is here on SlideShare) are collaborations between professional journalists and members of the public, not examples of the public supplanting or replacing journalists.

When I came to London in 2005 to research how BBC News could use blogging, I actually saw the possibility of a public service broadcaster like the BBC deepening its public role by developing stronger relationships with people formerly known as the audience.

This was James Murdoch’s argument, delivered in Edinburgh last year:

We seem to have decided to let independence and plurality wither. To let the BBC throttle the news market, and get bigger to compensate

I see commercial media and public service media combined with emerging participatory media as creating greater plurality, not throttling it. Murdoch’s argument is a rather unsophisticated and transparent attack on the BBC because he knows that most surveys show that when consumers are asked to pay for news online, most of them (74%) would switch to free options, such as the BBC. Only about 5% in the paidcontent.co.uk and Harris survey would pay to continue to use the service. (For a good critique of the Murdochs’ hard paywall that they just erected around The Times and The Sunday Times, see Steve Outing’s look at different commercial strategies.)

Returning to the strategic white paper I wrote for the BBC, I also thought that encouraging media creation by a wider part of the population would actually expand civic participation in new ways and possibly reverse the decline in traditional forms of democratic participation, such as voting. (Andy Carvin at NPR is demonstrating how social media and public service media can be a powerful combination.)

Maybe in the future, I should start with a statement of principles or values. I assume that my career choices say a lot about my journalistic values. I have worked for two unique journalism organisations, the publicly-funded BBC and the trust-supported Guardian, and it was an honour to work at two places that value journalism so highly. I don’t see social media as an argument for ending subsidies to public media in favour of a “pure” market-based media eco-system. Rather, I see my interest in social media as a perfectly logical extension of my passion for the social mission of journalism: a mission to inform and engage people and to empower them as citizens in democratic societies.

Choosing the right tool for the job

Another person at the institute raised the issue of whether I was focusing on the tools rather than the editorial goals. Was I seeing social media as the hammer and every story as a nail?

In reality, I’ve long argued against using a tool for the sake of using a tool. In my original presentation at the BBC, one of my slides was a herd of cattle with a little Photoshopped brand on one of the bulls labelled MSM (mainstream media), complete with the song Rawhide playing in the background. I said that the media was engaging in a lot of herd-like behaviour, rushing off to blog without any clear reason as to why. I used to play a clip of Jon Stewart of the Daily Show sarcastically congratulating MSNBC and their blogging efforts as “giving a voice to the already voiced”. I questioned why the media needed blogs when we already had publishing platforms.

To justify blogging, we had to have clear editorial goals, not just blog because it was the new media flavour of the month. I did see benefits in blogging and using social media. We could engage our audiences directly and take our journalism to where they were instead of relying on them to come to our site. We could enhance our journalism by expanding our sources, adding new voices and highlighting expertise in our audience.

Often people saw blogging not as a conversational, engagement-focused medium but as a means to secure their own column. They didn’t want to write more than once a week. They had no interest in actually responding to comments. Although I didn’t see this as an appropriate use of blogging, they usually got a blog anyway because I wasn’t in a position to deny them one.

It’s important to understand that social media is only one tool in a journalist’s toolkit. It is powerful, but it is very important to understand when it is appropriate to use social media and when it isn’t.

As someone at Oxford also pointed out, as journalists we need to make sure that we don’t over-interpret opinion on Twitter, Facebook and other social networks as truly representative. I often use social networks and blogging to find expertise and first-person experience of an event, not necessarily to canvass for opinion. The same student was also concerned that journalists would rely solely on online social networks to source stories or generate story ideas. That’s the mark of either a lazy journalist or one who is so overburdened with work due to staffing cuts that social media becomes an all too easy shortcut. (I understand only too well the time pressures that journalists are under due to the hollowing out of newsrooms.)

Do location-based networks have staying power?

One of the students told me that she had asked a few questions via Twitter while I was talking, and here is one of her questions:

#AnOx10 Kevin Anderson @kevglobal– Social Media for Social Change: great talk today but do u really think Loc-base has staying power?

I’ve been working with location for a couple of years now, starting with my coverage of the US elections in 2008. I’ve been testing location-based networks like BrightKite and the location features in Twitter since 2008, and I’ve been trying newer networks such as FourSquare in preparation for a keynote that I’m giving at the SpotOn conference in Helsinki in September.

As I started saying in 2005, in this age of information overload, two things are key to success: relationships and relevance. Social media allows news organisations to much more directly build and maintain their relationships, both with members of the public who simply want to consume their content and with people who want to collaborate on or contribute to news coverage. In a world with so many information choices, relevance is extremely valuable. This weekend, I spoke to the Gates Scholars at Cambridge, and many of the questions to the panel that I was on were about finding and filtering the vast ocean of information available. To me, location is one filter for relevance, as the sketch below illustrates.
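Here is a minimal sketch of location as a relevance filter: keep only the stories geotagged within a given radius of the reader. The haversine maths is standard; the sample stories and the radius are my illustrative assumptions, not any product's code.

```python
from math import radians, sin, cos, asin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points via the haversine formula."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # Earth's radius is roughly 6371 km

def nearby(stories, lat, lon, radius_km=5):
    """Filter geotagged stories down to those near the reader."""
    return [s for s in stories
            if distance_km(lat, lon, s["lat"], s["lon"]) <= radius_km]

stories = [{"title": "Roadworks on Mill Road", "lat": 52.199, "lon": 0.138},
           {"title": "Ferry strike in Helsinki", "lat": 60.170, "lon": 24.938}]
# A reader in Cambridge sees only the local story.
print(nearby(stories, 52.205, 0.119))
```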

There are two ways to interpret this question: Will the current generation of location-based networks have staying power? Will location itself have staying power?

In using FourSquare, I actually find the game element rather simplistic. Without a native app on my Nokia N82 (I am considering buying Gravity, but its £8 price is higher than my impulse threshold for a mobile app), the friction is too high for me, and I am too aware that FourSquare is trying to trick me into surfacing my location. Google’s Latitude I can set and forget, and I see my friends on my Google Map, but that service hasn’t hit a critical mass of users in my offline social networks to be all that useful.

However, in convincing people to reveal their location, FourSquare is already beginning to partner with media and other companies to sell other location-based services. Frankly, I don’t need the psychological trickery of points and mayorships to get me to check in if I get a useful service from revealing my location.

That’s where I see location being interesting, not as an element of games like Gowalla or FourSquare, but as a fundamental enabling technology like RSS. Very few people use RSS directly in standalone readers as I do, but many more people use RSS without even knowing it. Location will be one of those underlying, enabling technologies.

The big difference between RSS and location is the issue of privacy and security connected to revealing one’s location. Lots of people follow me on Twitter who I don’t know. I have a category of contacts on Facebook “People I don’t know”. I am not going to let people I don’t know in the real world know where I am in the real world. I’m working through whether I want to be selective in my contacts on FourSquare or selective in checking in.

Location is going to be a powerful feature in new services. That has staying power. Part of me thinks that services like Gowalla and FourSquare are very first generation at this point. They have a certain Friendster feel about them. However, FourSquare is evolving very quickly, and its very clear business model means that it will have the space to experiment.

Those are the questions that I can think of off the top of my head. If people have more, leave a comment. I’ll try to answer them before Suw and I start our summer break on Thursday.

Battle for the Living Room: Apple TV shifting to app strategy?

The living room (lounge for UK readers) is one of the most interesting tech spaces right now, and it’s got nothing to do with 3-D TV. (Just for the record, I’ve been referring to this as the Battle for the Living Room for a while now, lest anyone think I’m just ripping off Mashable headlines.) The blurring of the lines between internet video and broadcast television and between computers and traditional televisions is bringing consumer electronics companies and computer companies into a new competitive space.

Nick Bilton at the New York Times’ Bits Blog looks at how Apple could be looking to re-invent its rather sleepy Apple TV line. One of the big changes is that a new Apple TV could be based on the iOS that powers Apple’s iPhone and iPad. Why is this important?

If Apple does use the iOS software, it would allow people to download applications like the Netflix app, which allows streaming movies and TV shows; ABC’s TV player; or Hulu’s latest video streaming application.

This space is getting very crowded. As both Mashable and Nick pointed out, Google and Sony are going to launch Google TV. It will be based on its Android operating system, and an Android marketplace for Google TV will launch in early 2011.

Alternative media centre software maker Boxee has its own apps and has launched its own hardware; the first Boxee Box is coming from D-Link. (It was supposed to be out in the second quarter of this year, but it has now been delayed until November.)

Here in the UK, the BBC has won approval to proceed with its own project to bring its iPlayer catch-up service to the living room with Project Canvas. What is Project Canvas? From a story on the BBC News website:

Project Canvas is a partnership between the BBC, ITV, BT, Five, Channel 4 and TalkTalk to develop a so-called Internet Protocol Television standard.

The technology will be built into a number of set-top boxes. However, Canvas is UK-only, and as Robert Andrews at paidContent points out, there is a pan-European standard that has beaten Canvas to market: HbbTV.

Of course, hyper-competition also creates the potential for consumer confusion, and this looks like it might make the VHS v Beta battle look like a minor scrap. Right now, we’re in the gold rush period, with a mad dash by a lot of major players to dominate this space. It’s very early days, and a lot of the products are little more than announcements. What is very interesting is that we’ve got a lot of major companies coming from sectors that previously didn’t overlap much, apart from some of the major Japanese players. They will not back down without a fight. It will be very interesting to see what our living rooms look like in 2015.