The Guardian needs an intervention

The Guardian and its Sunday title, The Observer, have just announced a “digital-first” strategy. However, this is not a triumphant announcement. This is a burning-platform admission.

Guardian News & Media, the parent company for both newspapers, lost £33m on a cash basis for the year ending 31 March, only slightly less than its £34.4m loss for the previous year. Guardian Media Group chief executive Andrew Miller warned that the group could run out of money in three to five years if things don’t change. I heard sobering burn-rate figures when I was at The Guardian. I covered the dotcom boom and heard start-ups talk cash on hand, but I never expected to hear this from a major media company.

Some things leapt out at me: They reported £47m in digital revenues out of total revenues of £198m. Digital made up just shy of 24% of total revenue. That’s good going, and most newspapers would kill for that percentage of digital revenues. (Apart from the FT, which is making a killing from digital: 30% of revenue comes from digital now, and it is projected to reach 50% of revenue by 2013.)

This came out from the presentation to Guardian staff:

Unaudited results for the year ending 31 March showed that revenues at Guardian News & Media, the immediate parent of the newspapers and guardian.co.uk, fell to £198m last year compared with £221m the year before, a fall in revenues that reflected a sharp fall in classified advertising. Recruitment advertising has fallen by £41m in the past four years.

The Guardian is seen as one of the most innovative newspapers in the world. That is why I enthusiastically joined them in 2006. They announced they were going web-first in June 2006, but that didn’t and doesn’t change the fact that the newspaper is burning through cash. To future-of-journalism folks, The Guardian is indicative of the challenges facing the industry, but so far it’s not showing the way forward in solving those challenges.

Feel free to give The Guardian credit for being innovative, but everyone in the journalism community has to be more honest and realistic about its business challenges. It’s in the same sinking boat as a lot of other newspapers.

Guardian Editor Alan Rusbridger is saying that not only will they publish first to the web but that they will do less in print. The Guardian’s article says there will be no job cuts, though they have to find £25m in savings. Yet Mathew Ingram at GigaOm quotes Alan as saying there will be editorial job cuts.

Mathew also quotes Alan as saying that they have identified at least ten different revenue streams. That’s comforting. But it speaks volumes that The Guardian’s own article doesn’t mention new revenue, and Alan only mentioned existing digital revenue streams to Mathew.

The Guardian needs an intervention. Digital first will not be enough to save it. It needs to remember that although it is supported by a trust, that is not a licence to completely ignore business realities. Here is my bit of tough love:

1. Building a sustainable business is not evil

The Guardian needs to realise that making money to support journalism is no sin. There is a lot of moral space between being a sustainable journalism enterprise and being a voracious media mogul like Rupert Murdoch. I’d love to see The Guardian demonstrate how to create a financially sustainable journalism business, but it will have to challenge its own anti-commercial culture.

2. Editorial innovation alone is not enough

The Guardian is innovative, but it also shows that technical and editorial innovation are not enough on their own to guarantee a sustainable journalism business. Digital first without a business focus will still leave it in dire straits. If The Guardian is going to devote 80% of its resources to digital, as is implied by Dan Sabagh’s article, it has got to develop new revenue streams to support its digital first strategy.

3. ‘Open’ without a business model is an empty ideology

I love the open web. I think The Times hard paywall is foolish. However, the ideology of open from The Guardian lacks pragmatism. The Rupert v Rusbridger battle makes a good media ding-dong, but neither position is proving able to solve the problems that face newspapers. (Yes, I’ve seen Guardian digital strategist Matt McAlister’s presentation on generative media networks. Hopefully, some of that strategy will be part of these 10 revenue streams. At the moment, I remain unconvinced.)

4. You’ve got a golden brand. Capitalise on it.

At the risk of sounding critical, I joke with people that The Guardian has the brand of Apple but the business focus of Twitter. Guardian readers are some of the most loyal in the world. When The Guardian recently cut short its well-regarded local project, readers offered money to help it continue. Most newspapers would love to have that affection and loyalty. If The Guardian can’t capitalise on its loyal audience, incompetence will be the only explanation.

A friend of mine, who had taken a buyout from a US newspaper, said to me after visiting The Guardian a few years ago:

The Guardian seems like a great place to work when the times are good, but it doesn’t seem capable of making the tough decisions when the times are tough.

The Guardian has time to make some relatively easy decisions to ensure its future, but it needs to get serious, not just about digital but about its business. The Guardian’s often lauded as the future of journalism, but without a sound business model, it doesn’t have a future.

Direct visits: A referral data black hole

“Facebook drives more traffic than Twitter” ran the headline in May, after a Pew study seemed to show that Twitter just wasn’t as good for traffic numbers as people had thought. But there were problems with the study’s methodology, as many people, including Steve Buttry, pointed out:

The PEJ report acknowledges that the Nielsen Co., the source of all the data studied, relies “mainly on home-based traffic rather than work-based,” without adding that most use of news sites comes during the workday.

and

The study uses strongly dismissive language about Twitter’s contribution to traffic to news sites. But it never notes that many – probably most – Twitter users come from TweetDeck, HootSuite, mobile apps or some other source than Twitter.com. Twitter “barely registers as a referring source,” the report concludes, ignoring or ignorant of the fact that the data counted only traffic from Twitter.com and did not count most visits from Twitter users.

As the web evolves, so the tools that we use to measure and assess activity need to evolve, but this hasn’t really happened. We might have managed to ditch the misleading idea of ‘hits’, but web traffic measurement is still immature, with many of the tools remaining basic and unevolved. But this problem is only going to get worse, as Steve’s second point hints at.

As I mentioned in this post, earlier this year I did some work looking at referrer logs for a client, OldWeather.org, a citizen science project that is transcribing weather and other data from old ships’ logs. One of the things that I noticed was how messy Google Analytics’ data is when it comes to finding out which social networks people have come from. Many social networks have multiple possible URLs, which show up in the stats as separate referrers. For example, Facebook has:

  • facebook.com
  • m.facebook.com
  • touch.facebook.com

And Twitter has:

  • twitter.com
  • mobile.twitter.com

So in order to get a better picture of activity from Facebook and Twitter, we need to add the numbers for these subdomains together. But that alone doesn’t provide the full picture. A list compiled by Twitstat.com in August of last year showed that only 13.9% of Twitter users were using the Twitter.com website, with another ~1% using Twitter’s mobile website. That means around 85% of Twitter users are not going to show up in the twitter.com referrals because they haven’t come from twitter.com or mobile.twitter.com.
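As a sketch of how that subdomain roll-up might work (the visit counts below are made up for illustration; Google Analytics doesn’t do this grouping for you out of the box):

```python
# Sketch: roll up per-subdomain referral counts into per-network totals.
# The visit counts are illustrative, not real analytics data.

REFERRER_GROUPS = {
    "Facebook": {"facebook.com", "m.facebook.com", "touch.facebook.com"},
    "Twitter": {"twitter.com", "mobile.twitter.com"},
}

def group_referrals(referrals):
    """Sum raw per-domain visit counts into named network totals."""
    totals = {name: 0 for name in REFERRER_GROUPS}
    for domain, visits in referrals.items():
        for name, domains in REFERRER_GROUPS.items():
            if domain in domains:
                totals[name] += visits
    return totals

raw = {
    "facebook.com": 1200,
    "m.facebook.com": 300,
    "touch.facebook.com": 50,
    "twitter.com": 400,
    "mobile.twitter.com": 30,
}
print(group_referrals(raw))  # {'Facebook': 1550, 'Twitter': 430}
```

Even this combined total, of course, still misses the client traffic discussed above.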

It is possible to get some other hints about Twitter traffic as some web-based clients do provide referral data, e.g. twittergadget.com, brizzly.com, hootsuite.com or seesmic.com. But the big problem is that much of the traffic from Twitter clients will simply show up in your stats as direct visits, essentially becoming completely invisible. And when direct visits make up 40% of your traffic, that’s a huge black hole in your data.

It used to be assumed that direct visits were people who had your website bookmarked in their browser or who were typing your URL directly into their browser’s address bar. The advent of desktop Twitter clients has undermined this assumption completely, and we need to update our thinking about what a ‘direct visit’ is.

This obfuscation of traffic origins is only going to get worse as clients provide access to other tools. TweetDeck, for example, can no longer be assumed to be a Twitter-only client, because it also allows you to access your LinkedIn, Facebook, MySpace, Google Buzz and Foursquare accounts. So even if you can spot that a referral has come via TweetDeck, you have no idea whether the user clicked on a link from their Twitter stream or via Facebook, LinkedIn, etc.

This makes understanding the success of your social media strategy and, in particular, understanding which tools/networks are performing most strongly, nigh on impossible. What if 20% of your traffic is coming from invisible Twitter clients and only 1% comes from Twitter.com? Because the majority of your Twitter traffic is hidden as direct traffic you might end up sensibly but wrongly focusing on the 5% that has come via Facebook.com, thus reworking your strategy to put more effort into Facebook despite the fact it is actually performing poorly in comparison to Twitter.
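To make that distortion concrete, here is a toy calculation using the hypothetical percentages from the paragraph above (20% of traffic arriving via invisible Twitter clients, 1% via Twitter.com, 5% via Facebook.com):

```python
# Toy numbers from the scenario above: most Twitter traffic arrives via
# clients and is logged as 'direct', so it vanishes from the referral stats.
twitter_clients, twitter_web, facebook = 20, 1, 5  # % of total traffic

apparent_twitter = twitter_web                   # what the stats show
true_twitter = twitter_web + twitter_clients     # what actually happened

print(f"Twitter appears to drive {apparent_twitter}% of traffic "
      f"but really drives {true_twitter}%")
print(f"Facebook ({facebook}%) looks {facebook // apparent_twitter}x "
      f"stronger than Twitter in the stats")
```

On the visible numbers alone, Facebook looks five times stronger; in reality, Twitter is driving four times as much traffic as Facebook.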

I recommend to all my clients that they keep an eye on their statistics, and that if a tool isn’t working out well for them, they should ditch it and move on to another. There are so many social networks around that you just can’t be everywhere; you must prioritise your efforts and focus on the networks where you are most likely to reach your target audience. But we need clarity in the stats in order to do this.

The scale of this problem is really only becoming clear to me as I type this. For sites with low direct traffic, a bit of fuzziness in the stats isn’t a big deal, but for sites with a lot of direct traffic – and I see some sites with over 40% direct traffic – this is a serious issue. You could potentially have a single referring source making up a huge part of your total traffic, and you’d never know. And as more services provide APIs that can feed more desktop clients, which themselves provide more functionality than the original service itself, the growth of wrongly attributed ‘direct visits’ is only going to accelerate.

Without meaningful numbers, we’re back to the bad old days of gut feeling about whether one strategy is working better than another. I already see people making huge assumptions about how well Facebook is going to work for them, based on the faulty logic that everyone’s in Facebook, ergo by being in Facebook they will reach everyone.

Now, more than ever, we need reliable web stats so that we can make informed decisions, but these numbers are turning out to be like ghosts: our brains see what they want to see, not what is actually there. Even established research institutions like Pew are suffering pareidolia, seeing a phantom Facebook in their flawed numbers.

Ken Doctor digs into the economics of HTML5

Ken Doctor, who writes the excellent Newsonomics blog, says of the FT’s HTML5 web app:

We first heard of HTML5 as an alternative to Adobe’s Flash as Apple excluded Flash from its products. HTML5, though, has proven to be a strong foundation for next-generation digital product development (“The Newsonomics of Apps and HTML5”). HTML5 is also the basis for web apps, and it is web apps — those browser-based apps the FT is trumpeting today — that are now providing tech and business competition to native apps.

via FT Declares Independence (from Apple) Day | Newsonomics.

Ken goes on to dig into the FT’s digital business and why they might be in a unique position with respect to other publishers, thanks to their decade-long strategic development of their digital business.

When people and clients ask me about good models for digital news operations, I first point out that having good digital content offerings isn’t enough. You can have amazing, world-beating editorial, but if it isn’t supported by a sustainable digital business, that’s not a model to emulate.

Housekeeping: New sharing options?

One of my favourite browser plug-ins is Shareaholic, and they have a WordPress plug-in called SexyBookmarks. We’ve been thinking about adding some more sharing and recommendation options to Strange Attractor. Let us know what you think.

  • Does it slow down the page load for you?
  • Do you like the options?
  • Are there other sharing options you’d like us to include?
  • For our RSS readers, do you like the sharing options in the feed?

Thanks in advance for your feedback.

The FT and NPR: HTML5 as part of a multi-platform strategy

I had heard that the FT and Apple were struggling to come to an agreement on digital subscriptions, so it came as no surprise to me that the FT has launched an HTML5 web app. Some folks have added sneer quotes around app, but I’m not going to. The HTML5 version of the FT’s app looks and behaves like their native iPad app, and has even more functionality.

Robert Andrews of paidContent:UK has a great interview with Rob Grimshaw, The Financial Times’ online managing director, on the issues that separate the two companies. The subscription issues are well known, and it’s not just Apple’s 30% take that has publishers pissed off. Publishers are also uncomfortable letting Apple get between them, their customers and their customer data. I’m impressed with the maturity that the FT has demonstrated here. Rather than play up the conflict and engage in an all too typical media industry drama-queen spat, the FT used the potential impasse to explore what would be possible with HTML5, the next version of the web mark-up standard. Grimshaw said:

It’s not just Apple versus FT – there is more to it than that. We started to look at HTML middle of last year when we realised how complicated it would be to develop applications for all these different platforms.

The FT believes that it hasn’t had to compromise. I gave the app a spin this morning on our first-generation iPad. The execution is extremely polished, walking you through every step from adding the app to your home screen to giving it increased offline storage space. Not only does the web app match the native app experience, it also has a few extras. The native app allows you to choose between a live and a downloaded version; the web app automatically caches the content on load. Unlike the native app, the web app also supports the FT’s video content offline. That’s a real bonus – I often read the FT on the iPad on flights and missed the video content. (I actually prefer the iPad version to print. When I don’t travel with the iPad and get the paper, I often struggle not to punch my neighbour while wrestling with the broadsheet. I have no such issue with the iPad.)

I will agree with some comments online today that it is a little sluggish on the first-generation iPad. On the iPad 2 and Xoom, dual-core tablets with better graphics, I would expect the web app to fly. On Suw’s now creaky iPhone 3G, the app gently let us know that the device was too slow before elegantly redirecting us to the FT’s excellent mobile website. Nice. It puts most other UK mobile newspaper sites to shame, though for my money, the New York Times still has the best mobile site – fast, clean and easy to use. For comparison, I’d also recommend that you check out Firstpost.com, a site that Suw and I helped Network 18 of India launch in May. The site uses WordPress and launched with a great mobile version through the use of the Mobile Detector plug-in, which can detect more than 5,000 mobile devices and serve an experience relevant to the device.
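The Mobile Detector plug-in itself is PHP and matches against a large device database, but the general idea of user-agent-based detection can be sketched in a few lines (the patterns and variant names here are illustrative, not the plug-in’s actual logic):

```python
# Minimal sketch of user-agent device detection. Real device databases
# match thousands of devices; these few patterns are illustrative only.
import re

MOBILE_PATTERN = re.compile(
    r"iPhone|iPod|Android.*Mobile|BlackBerry|Opera Mini", re.IGNORECASE
)

def site_variant(user_agent: str) -> str:
    """Return which site variant to serve for a given User-Agent string."""
    return "mobile" if MOBILE_PATTERN.search(user_agent) else "desktop"

print(site_variant("Mozilla/5.0 (iPhone; CPU iPhone OS 4_0 like Mac OS X)"))
# mobile
print(site_variant("Mozilla/5.0 (Windows NT 6.1; WOW64)"))
# desktop
```

In production you would pair detection like this with CSS tailored to each device class, rather than maintaining separate codebases.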

The FT’s head of mobile, Steve Pinches, has an explanation of the work that went into the FT HTML5 app. He echoes Grimshaw’s point about development costs:

developing multiple ‘native’ apps for various products is logistically and financially unmanageable. By having one core codebase, we can roll the FT app onto multiple platforms at once.

For another example of what’s possible with HTML5 and cross-device coding, check out NPR’s app for Chrome.  It looks exactly like the US public radio broadcaster’s iPad app, but it runs in Google’s Chrome web browser. NPR explained how it was done:

Like to get your geek on? Well, you’ll be happy to know that NPR for Chrome leverages the power of HTML5. Using a technology called Sproutcore, this web app has the potential to work in other modern browsers, on tablets, and even be repurposed for other app stores.

Smart. Ben Ayers, formerly of ITV, and I had a little discussion this morning about how HTML5 might allow these apps to run not just on smartphones, tablets and computer web browsers but also on connected TVs.

[blackbirdpie url=”http://twitter.com/#!/benayers/status/78009292824907776″]

Leaving Google TV to one side for a moment, LG’s new smart TV platform uses WebKit, which underpins many browsers including Apple’s Safari and Google’s Chrome. From an interface standpoint, I’m not going to suggest that an interface for a mobile phone would be appropriate for the “ten-foot” experience of TV, but device detection and CSS can help serve up an appropriate interface.

As HTML5 matures over the next few years, this will be the standard that enables the next wave of cross-platform innovation. The combination of APIs, CSS and HTML5 could make the painful process of developing apps for multiple platforms and multiple screen sizes a thing of the past. In the meantime, it’s great to see what HTML5 is capable of.

News organisation web stats: Break out bounce

Frédéric Filloux looks at the metered paid-content systems that the FT and the New York Times have in place in his most recent post. I have yet to be sold on how the New York Times is trying to segment its readership based on platform, but I think they are doing the right thing in trying to get their most loyal readers to help support their journalism. I also like how they are trying to reward their most loyal readers with extras, such as their behind-the-scenes report on how they covered the mission that killed Osama bin Laden.

Frédéric touches on the issue of loyalty in his post.

One of the dirtiest little secrets of the online media business is the actual number of truly loyal readers — as opposed to fly-bys. No one really wants to know (let alone let anyone else know). Using a broad brush, about half of the audience is composed of casual users dropping by less than 3 times a month, or sent by search engines; 25% come more than 10 times a month.

Spot on, and I think there is a lot of evidence to support his assertion that this has contributed to an erosion in advertising prices. Advertisers know that not all unique users are created equal. If a user views a single page during a visit, or even worse, is on a site less than 5 seconds, they might be counted as a unique user or visitor, but they are next to meaningless in terms of engagement with content and completely meaningless to an advertiser.

It’s quite clear that raw audience numbers do not a sustainable digital content business make. If they did, digital would be contributing significantly more to the bottom line than the 15% average that US newspapers are seeing, and The Daily Mail would be making a mint off its newly found digital success. The Mail has not only rushed ahead of its online competitors in the UK, but in April, it became the second most popular English-language ‘newspaper’ site in the world. (Quotes around newspaper because I’m not sure how the Huffington Post is considered a newspaper site, and if you were to include other news sites such as the BBC, not to mention Yahoo News, that league table would look a lot different.) However, the Mail is squeezing paltry sums out of that audience, about 2p per visitor across Mail Online and metro.co.uk. (Rob Andrews at paidContent also points out in the same piece that DMGT makes most of its digital income, some £44m, from a separate digital division that operates travel, jobs and motoring ad services.)

The move from monthly uniques to average daily uniques has eliminated some double-counting from the stats, but it still doesn’t break out these fly-by visitors. The industry has to move to more honest and realistic metrics. In the UK, newspapers no longer report bulk print sales. I’d argue that it’s time to, at the very least, break out ‘bounce’: single-page visits of less than five seconds (or however the industry wants to transparently measure it). If the industry really wanted to come clean, it would just leave bounce out of the stats entirely. It’s meaningless traffic, the internet version of channel surfers. Loyalty is the new coin of the digital realm, and I’d wager that if we focus on that, it might even bring in a bit more coin.
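Breaking out bounce is trivial to compute once visits carry a page count and a duration. A toy sketch, using the single-page, under-five-seconds definition from above (the visit records are made up):

```python
# Sketch: count 'bounce' visits (single page, under 5 seconds) so they
# can be reported separately from engaged traffic. Data is illustrative.

def is_bounce(pages_viewed, duration_seconds, max_seconds=5):
    """True if a visit meets the single-page, sub-5-second definition."""
    return pages_viewed == 1 and duration_seconds < max_seconds

# Each visit: (pages viewed, time on site in seconds)
visits = [(1, 2), (1, 30), (4, 120), (1, 4)]

bounces = sum(1 for pages, secs in visits if is_bounce(pages, secs))
print(f"{bounces} of {len(visits)} visits are bounce")  # 2 of 4 visits are bounce
```

The hard part, of course, is not the arithmetic but persuading the industry to report the split.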

Understanding Grímsvötn

Another Icelandic volcano has blown its top and, as you might expect, the media has gone batshit. Even otherwise commendable publications like Nature have lost their heads and are calling Grímsvötn “the new Eyjafjallajökull” (hint: it’s completely different). So here’s a quick look at the key information sources you need to understand what’s going on.

Firstly, let’s just talk about pronunciation. Whereas I could understand the reluctance to attempt Eyjafjallajökull, even though it’s not that hard once you’ve got your tongue round it, Grímsvötn is much easier. An Icelandic friend says the í is like the ‘ea’ in ‘eating’ and ö is a bit like the e in ‘the’ or the u in ‘duh’, so basically a bit of a schwa. Repeat after me, then: Greamsvuhtn. Easy. Yet despite it being a relatively simple name to pronounce, at least one BBC news presenter bottled it and said something like “A volcano in Iceland” and, instead of tackling Eyjafjallajökull, said, “Another volcano in Iceland”… Wimp.

Right, so, horses’ mouths. There are plenty of them, so there’s no excuse for asking the Independent’s travel editor for comment (BBC, I’m lookin’ at you again!), who frankly probably knows jack shit about volcanoes. Your key sources for Icelandic eruptions are:

1. The Icelandic Met Office
The IMO provides so much data that it’s hard to see why so many news orgs ignore it. You don’t get much closer to the horse’s mouth than this and, shock-horror, they speak English! Good lord, who’d’ve thunk it. Key pages on the IMO website:

  • News: Not updated very often, but still an important source
  • Updates: Updated more regularly, more useful info and links
  • Earthquakes: The last 48 hours’ worth of earthquakes. It’d be awesome if someone captured this and made a nice visualisation. And if you’re missing data, just email and ask them – they’re very nice, as I found out last year when they sent me the archival data for Eyjafjallajökull.

The IMO have a lot more data, such as tremor, inflation, and seismic moment, but it will take an expert to interpret that for you.

2. The VAAC
The Volcanic Ash Advisory Centre is run by the UK Met Office and provides maps of the ash cloud forecasts, which it updates regularly. Key links:

 

If you look at the full-size version of this, you’ll see more clearly that there are three coloured lines: the blue line is labelled FL350/FL550, the green line FL200/FL350 and the red line SFC/FL200. The blue line is the highest part of the ash cloud, between FL350 and FL550, i.e. between 35,000 and 55,000 feet. FL means “flight level” and the number is the altitude in hundreds of feet. The green line is between 20,000 and 35,000 ft, which is about where jets cruise (at around 33,000 ft), and the red line is between the surface and 20,000 ft. VAGs are produced regularly and include four forecasts at 6-hour intervals.
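For anyone decoding those labels, the flight-level arithmetic is trivial, as this toy sketch shows:

```python
# Flight level (FL) codes encode altitude in hundreds of feet,
# so FL350 means 35,000 ft.
def fl_to_feet(flight_level: int) -> int:
    return flight_level * 100

for fl in (200, 350, 550):
    print(f"FL{fl} = {fl_to_feet(fl):,} ft")
```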

The thing to remember about these VAGs is that they are forecasts based on current volcanic activity and wind forecasts, so they can and do change.

3. Regulators & air traffic control
At this stage, I’d love to say that the regulators and air traffic control bodies are a great source of info, but they’re not. That’s not going to stop me giving you their links, though.

  • UK Civil Aviation Authority. They also have a Twitter account, but haven’t yet got to grips with the idea of giving people useful information.
  • NATS: The National Air Traffic Services are giving regular updates, but it’s not particularly detailed. I’m pretty sure that the now ‘unofficial’ Twitter account was official this time last year, but either way, NATS should sort out their Twitter presence.
  • EuroControl: The EU air traffic control, also on Twitter, but doing a slightly better job of it.

I would like someone to slap the CAA, NATS and, to some extent, Eurocontrol round the chops and insist that they get their online acts together. They may think they have something better to do than communicate with the public, but frankly, I can’t think what it might be. At times like this, we need informed voices from the organisations making and implementing policy decisions to be communicating directly with the public, to counteract the uninformed nonsense we’re fed by our media. Right now, it’s just one great big mess of fail and it’s very disappointing. If any one of you organisations gets in touch with me, I’ll go so far as to give you a discount just to see you actually start to engage properly.

4. Erik Klemetti
Frankly, Erik’s work on the Eruptions blog, gathering links and keeping us up to date with what’s happening, blows all the official sources out of the water. Erik has created an awesome community of people who are constantly on the lookout for news and information, sharing it in the comments, and from that smorgasbord, he picks the best links for his posts and provides an expert view on what’s happening, as well as some highly accessible explanations. This, to be honest, is the kind of stuff we should be seeing from the UK Met Office, the CAA, NATS and Eurocontrol, not to mention the media.

5. FlightRadar24
Always a fascinating site, FlightRadar24 has now added an ‘Ash Layer’ which superimposes the current forecasts on to their radar map of all the planes currently in the air. Well worth a peek.

6. Mila
Mila have a number of webcams up around Iceland. Currently there’s one working webcam trained on Grímsvötn, and although the picture’s a bit wobbly, when the sun’s up you can clearly see what’s going on. Or not going on: Right now, there’s no plume, but that can of course change at a moment’s notice.

 

So, that gives you a bunch of sources to check when you want to know what’s going on and you can’t find any actual information in the media. And if you’re like me, you’re still left with a question: What’s going to happen with Grímsvötn and its ash cloud? It’s impossible to predict precisely, but we do know that the ash is heavier and coarser than Eyjafjallajökull’s. We also know that the weather patterns are not the same, and that the eruption is unlikely to go on for as long. So we are probably not looking at a replication of Eyjafjallajökull’s disruption. (“Probably” means that nature can still confound the most sensible of predictions!)

All that said, Iceland is a volcanically very active country, and the lull in activity we’ve seen throughout the history of aviation is not something we should take for granted. I wouldn’t panic, though. But nor would I believe everything I read in the media.

The iPad and mobile: ‘How does information relate to movement?’

Last year, days after I took a buyout from The Guardian, I wrote a fun little rant about publishers and their delusional approach to the iPad. Since then Suw and I have bought an iPad and have tried out a number of apps, and one of those apps was The Daily.

The shortcomings of the interface and the app have been well covered. (The Daily, now with 20% more crash-tastic badness.) However, rather than focus on the poor interface or lousy execution, I’d like to focus on the bland content, something you don’t usually get to say about Murdoch content. You can say a lot of things about Fox or The Sun, but you can rarely criticise Rupe for making boring content, until now. I’m from the US. I read a lot of news about home, as any expat does, but for the life of me, I don’t understand why I should care about 95% of the stuff that I have read in The Daily. It’s like a crappy CD-ROM version of USA Today on a day when they’ve given the staff writers the day off and have all the interns write about their pet issues. The Daily: the publication that doesn’t know what it is. And in digital content (or any content, for that matter), meh never wins.

Michael Wolff, who is no fan of Murdoch, has a scathing piece in Adweek that raises the question of just how long the mogul will support The Daily.

Is The Daily the Heaven’s Gate of mobile? Not just expensive, but inexplicable. Not just a bomb, but an albatross.

Ranting aside though, Wolff points out something really key, thinking of the iPad as a mobile device:

Meanwhile, the mobile form expands and grows, driven by a basic question that most publishers have seemingly not asked: How does information relate to movement?

Moreover, how does the iPad relate to real-time information, or time-shifted but frequently updated information? One of my favourite apps on the iPad is the FT. The ability to easily shift from live to downloaded content is amazingly functional; it is so useful that it has driven my use of the FT. In the couple of weeks that I used The Daily, neither the information nor the format did anything for me. I’d rather have the more traditional site paradigm and the simple yet elegant functionality of the FT iPad app than the showy and useless interface candy of The Daily.

Publishers have rarely thought about how the web, and now mobile, change how information is consumed. They have a product that they want to sell, and they see the web and mobile only as different containers to sell it in. They don’t think much about how those platforms change the way we relate to information. It’s as if we were still in the early 1950s, producing radio programmes with pictures for TV. What is frustrating for those of us who have been doing this for a while – since the mid-1990s for me – is that we know how to tell stories on the web. We know how digital and mobile change the ways that stories can be told.

That said, I’m actually quite optimistic. The iPad has renewed interest in novel digital storytelling and design, and I’m even more enthusiastic about HTML5, which opens up all kinds of possibilities not only for the iPad but for the desktop, smart TVs and other new devices. However, it’s going to take some digital thinking, rather than thinking that sees digital as just another vehicle for print.

Linking and journalism: The Workflow issue

There was an interesting discussion about linking and journalism amongst a number of journalists in North America. Mathew Ingram of GigaOm and Alex Byers, a web producer for Politico in Washington, both collected the conversation using Storify. It covers a lot of well-worn territory in this debate, and I’m not going to rehash it.

However, one issue in this debate focused on the workflow and content management systems. New York Times editor Patrick LaForge said:

[blackbirdpie url=”https://twitter.com/#!/palafo/status/70668697051725824″]

Workflow, and how it is coded into the CMS, is a huge issue for newspapers. For two years when I was at The Guardian, most of my work was on our blogging platform, Movable Type. Movable Type had scaling issues, as did almost every blogging platform back in 2006 when I started at The Guardian. However, Movable Type and other blogging platforms also make it ridiculously easy to create content – rich, heavily linked multimedia content. It was so much easier than anything I had ever used, especially when coupled with easy-to-use production tools such as Ecto and MarsEdit.

However, due to the scaling problems with Movable Type, The Guardian moved its blogging onto its main content management system. We didn’t have a choice; we had outgrown Movable Type. However, I’m being diplomatic in the extreme when I say that the new CMS lacked the ease of content creation and publishing I had grown accustomed to with Movable Type and WordPress. There was also an internal conflict over whether to use the web tools or the print tools to create content, and in the end the print tools won out. The politics of print versus the web played out even in the tools we used to create content, and that was an even more jarring move. It was like trying to create a web story with movable type – and I’m not talking about the blogging platform.

Most newspaper CMSes are more WordPerfect from the 1980s than WordPress. That’s why you have journalism outfits setting up blogs on Tumblr. Creating content on a tool like Tumblr is as easy as falling off a bike; a typical newspaper CMS is like trying to write calligraphy with a telephone pole. You can build a robust, advanced content management system without making the content-creation tools so piggishly ugly, bewilderingly confusing and user-surly. However, newspapers code their workflows into their CMSes, and the problem is that those workflows aren’t fit for modern purpose.

Newspaper newsroom workflow is still print-centric, with very few exceptions. The rhythm of the day, the focus of the tools and much of the thinking are still built around that one deadline every day, when the newspaper goes to the presses. In this post by Doc Searls on news organisations linking to sources (or not linking, as the case may be), see this comment from Brian Boyer about his shop, The Chicago Tribune:

At the Chicago Tribune, workflows and CMSs are print-centric. In our newsroom, a reporter writes in Microsoft Word that’s got some fancy hooks to a publishing workflow. It goes to an editor, then copy, etc., and finally to the pagination system for flowing into the paper.

Only after that process is complete does a web producer see the content. They’ve got so many things to wrangle that it would be unfair to expect the producer to read and grok each and every story published to the web to add links.

When I got here a couple years ago, a fresh-faced web native, I assumed many of the similar ideas proposed above. “Why don’t they link?? It’s so *easy* to link!”

I’m not saying this isn’t broken. It is terribly broken, but it’s the way things are. Until newspapers adopt web-first systems, we’re stuck.

Wow, that’s a really effed-up workflow by 2011 standards, but a lot of newspaper newsrooms operate on some variation of that theme. It’s an industrial workflow operating in a digital age. It’s really only ‘that’s the way we’ve always done it’ thinking that allows such a patently inefficient process to persist. Seriously, has no one realised that it’s easier to export plain text from HTML than to bolt a bunch of links, images and the odd YouTube video onto a text story destined for a dead tree? Want to cut costs and increase the quality of your product? Sort out your outdated industrial workflow, save a lot of money, hire more journalists and improve your web and print products. Simples. (Well, after sorting out your workflow, hire a digital sales team, and then you can hire even more journalists. That’s a post for another time.)
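To show just how easy the web-to-print direction is, here’s a minimal sketch – my own illustrative code, not any newspaper’s actual pipeline – using Python’s standard `html.parser` to strip a web-first HTML story down to print-ready plain text:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects only the text content of an HTML story, dropping
    links, images and embeds on the way to print-ready plain text."""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        # Called for every run of text between tags; tags themselves
        # (links, images, embeds) are simply never collected.
        self.parts.append(data)

    def text(self):
        return "".join(self.parts).strip()

# A hypothetical web-first story fragment, links and all.
story_html = (
    '<p>The report, <a href="http://example.com/report">published today</a>, '
    'says traffic rose.</p>'
)

parser = TextExtractor()
parser.feed(story_html)
print(parser.text())  # The report, published today, says traffic rose.
```

Going the other way – retrofitting links, images and embeds onto flat print copy – has no such mechanical shortcut, which is exactly the asymmetry a print-first workflow gets backwards.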

LinkedIn as a source of traffic

Earlier this year I did some work for OldWeather.org, a citizen science project that is transcribing weather and other data from old ships’ logs. As part of an assessment of the website’s progress, I hand-analysed its web-traffic referrers to see where people were coming from and whether we were reaching our core communities. One thing I found was that whilst Facebook sent more than two orders of magnitude more visitors than LinkedIn, LinkedIn sent much higher-quality visitors. Visitors from LinkedIn viewed an average of 17 pages per visit, stayed for 34 minutes and had a bounce rate of 33%, compared with Facebook’s 1.8 pages per visit, 1:41 minutes on site and 79% bounce rate.

The quality difference is stark and suggests that, for OldWeather.org, a bit more promotion on LinkedIn might be in order. But is LinkedIn capable of delivering the same volume of visitors that Facebook can? Facebook still provides a far higher overall share of time on site than LinkedIn, although on some sites (this one included) a single page view isn’t all that useful in terms of the site fulfilling its remit. Lots of single-page-view visitors aren’t as valuable as fewer multi-page-view visitors.

According to Business Insider, recent changes at LinkedIn have upped the ante quite significantly:

Out of nowhere, Business Insider started seeing real referral traffic from LinkedIn last month. […]

LinkedIn product manager Liz Walker tells us the traffic is coming from a bunch of sources – mostly new products like LinkedIn.com/Today, newsletters, and LinkedIn News.

It seems to me that, if these visitor-quality stats and this new trend in volume hold, LinkedIn is successfully shifting from a site often marginalised in social media outreach strategies to one that should be central. After all, with traffic it’s not just the volume you should be interested in, but the quality of the visitors as well.