Metrics, Part 2: Are we measuring the right things?

(If you haven’t already read it, you might like to take a look at Part 1: The Webstats Legacy.)

Anand Giridharadas asks in the New York Times: Are metrics blinding our perception? Giridharadas begins by talking about the Trixie Telemetry company, which takes data about a baby’s naps, nappy changes and feed times and turns it into charts, graphs and analyses to “help parents make data-based decisions”. He then goes on to say:

Self-quantification of the Trixie Telemetry kind is everywhere now. Bedposted.com quantifies your sexual encounters. Kibotzer.com quantifies your progress toward goals like losing weight. Withings, a French firm, makes a Wi-Fi-enabled weighing scale that sends readings to your computer to be graphed. There are tools to measure and analyze the steps you take in a day; the abundance and ideological orientation of your friends; the influence of your Twitter utterances; what you eat; the words you most use; your happiness; your success in spurning cigarettes.

Welcome to the Age of Metrics — or to the End of Instinct. Metrics are everywhere. It is increasingly with them that we decide what to read, what stocks to buy, which poor people to feed, which athletes to recruit, which films and restaurants to try. World Metrics Day was declared for the first time this year.

But measure the wrong thing and you end up doing the wrong thing:

Will metrics encourage charities to work toward the metric (acres reforested), not the underlying goal (sustainability)? […] Trees are killed because the sales from paper are countable, while a forest’s worth is not.

The same is true in social media. Count the wrong thing and you’ll do the wrong thing. As Stephanie Booth says, in the second video in this post:

As soon as you start converting behaviours into numbers then people adapt their behaviour to have good numbers.

She goes on to say that some of her clients believe that the number of comments they have on a blog post is a measure of success, but because of this they become obsessed with getting people to comment:

So you’re going to write posts which make people react or you’re going to encourage people to have chatty conversations in your comments. That’s really great, you get lots of comments, but does it mean that what you’re providing is really more valuable? […] I don’t believe that more is always better, that more conversation is always better. It’s “Is it relevant?” And that’s something that we do not know how to measure in numbers.

If the key metric for assessing success is a simplistic one like ‘page views’ or ‘unique users’ or ‘comments’, the emphasis in your web 2.0 strategy will be on creating something populist instead of something that meets a business need.

Let’s say you’re in eCommerce and you sell pet supplies. Your business goal is not ‘get more people onto our website’, it is ‘get more people buying pet supplies from our website’. The two are very different indeed. A company that believes it just needs to get lots and lots of people through the virtual door will focus on anything that might bring in more attention and traffic. A company that understands it needs to attract the right people will focus on communicating with passionate pet lovers who arrive at the site primed to buy.

This is why niche blogs can command higher advertising rates than general news sites. Advertisers can see that more of the people who click their ads will actually buy their products and are willing to pay more for these higher quality visitors.

Equally, let’s say you want to ‘improve collaboration’ internally and to that end you start a wiki. You start measuring activity on the wiki and focus on ‘edits per user’ as a key metric. You encourage people to edit more, but the quality and amount of collaboration doesn’t increase as you expected. Why? Because people learnt that fixing a single typo boosted their ‘edits per user’ count and took a lot less effort than creating a new page, engaging with a co-worker or making a substantive change. Focusing on the wrong numbers changes the wrong behaviour.
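
To make that concrete, here is a minimal sketch with entirely made-up data showing how a raw ‘edits per user’ count rewards typo-fixers over genuine collaborators. The edit log and the size threshold are my own illustrative assumptions, not a recommended metric:

```python
from collections import Counter

# Hypothetical wiki edit log: (user, number of characters changed) per edit.
edits = [
    ("alice", 2), ("alice", 1), ("alice", 3), ("alice", 2),  # four typo fixes
    ("bob", 850),                                            # one substantial new page
]

# Naive metric: every edit counts the same.
edits_per_user = Counter(user for user, _ in edits)

# Alternative: only count edits above an (arbitrary) size threshold.
substantive_per_user = Counter(user for user, delta in edits if delta > 50)

print(edits_per_user)        # Counter({'alice': 4, 'bob': 1}) - alice 'wins'
print(substantive_per_user)  # Counter({'bob': 1}) - bob did the real work
```

Neither number tells you whether collaboration actually improved, which is rather the point: the metric you pick shapes the behaviour you get.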

In order to think about metrics, you need to know exactly what you’re using social media for. Figure that out and you’re halfway there.

Saatchi and Saatchi get it horribly wrong for Toyota

Tim Burrowes explains just how wrong Saatchi and Saatchi got Toyota’s Australian social media campaign. There are key lessons here not just for social media marketing, but for social media use across business.

  1. Do not assume that the agencies you work with, whether they are marketing, internal communications or PR, understand social media. The chances are high that they haven’t got a clue.
  2. Do not assume that your internal departments, whether they are marketing, internal communications, PR or any other department, understand social media. The chances are that they haven’t got a clue either.
  3. If your clue-free marketing/internal comms/PR department is working with a clue-free agency on a social media project, all your warning lights should be going off and your klaxons blaring. Danger, Will Robinson! Danger, Will Robinson!
  4. Social media is easier to mess up than to get right. And it’s easier to think you know what you’re doing when you don’t than it is to recognise when you don’t know what you’re doing. All that known unknowns and unknown unknowns, y’know?

The commonest excuse I hear for why companies aren’t going to bother learning about social media themselves is that they ‘don’t have time’ or ‘want results now’. Which is a bit like opening an office in a foreign country, without anyone on staff who can speak the language, and then demanding ‘results now’ whilst expecting nothing to go wrong.

With attitudes like that so prevalent, I expect that social media cock-ups will continue to entertain us throughout the foreseeable future. Maybe I need to start the social media version of FailBlog or ClientsFromHell.

links for 2009-12-16

  • Kevin: US cable television provider Comcast has rolled out an on-demand television and movie service that gives customers access to more than 2,000 hours of television and movies. The service used to be called TV Everywhere, but has now been renamed (I hate the industry term re-branded) Fancast XFINITY TV. This is available to Comcast customers who subscribe to both cable and internet. It's a bundling play, which makes sense. It's yet another piece of the on demand efforts. I'm sure that we'll see a lot of models before the market settles down.


Ushahidi and Swift River: Crowdsourcing innovations from Africa

For all the promise of user-generated content and contributions, one of the biggest challenges for journalism organisations is that such projects can quickly become victims of their own success. As contributions increase, there comes a point when you simply can’t evaluate or verify them all.

One of the most interesting projects in 2008 in terms of crowdsourcing was Ushahidi. Meaning “testimony” in Swahili, the platform was first developed to help citizen journalists in Kenya gather reports of violence in the wake of the contested election of late 2007. Since that first project, it has been used to crowdsource information around the world, often during elections or crises.

[Video: What is Ushahidi? from Ushahidi on Vimeo]

Faced with the difficulty of gathering information during a chaotic event like the attacks in Mumbai in November 2008, members of the Ushahidi developer community discussed how to meet the challenge of what they called a “hot flash event”.

It was that crisis that started two members of the Ushahidi dev community, Chris Blow and Kaushal Jhalla, thinking about what needs to be done when you have massive amounts of information flying around. We’re at the point where it is possible for any ordinary person to share valuable tactical and strategic information openly. How do you ferret out the good data from the bad?

They focused on the first three hours of a crisis. Any working journalist knows that during fast-moving news events false information is often reported as fact before being challenged. How do you increase the volume of sources while maintaining accuracy, and sift through all of that information to find what is most relevant and important?

Enter Swift River. The project is an “attempt to use both machine algorithms and crowdsourcing to verify incoming streams of information”. From the project description, the Swift River application appears to allow people to create a bundle of RSS feeds, whether those feeds are users or hashtags on Twitter, blogs or mainstream media sources. Whoever creates the RSS bundle is the administrator, which allows them to add or delete sources. Users, referred to as sweepers, can then tag information or choose the bits of information in those RSS feeds that they ‘believe’. (I might quibble with the language: belief isn’t verification.) Analysis is done on the links, and the “veracity of links is computed”.
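
The description doesn’t say how that veracity is actually computed, but as a thought experiment, here is a minimal sketch of one crowd-based scoring rule: count how many distinct sweepers have marked items carrying a given link as believable. The data structures and the scoring rule are my own assumptions, not Swift River’s real algorithm:

```python
from collections import defaultdict

# Hypothetical stream items: each carries a link and the sweepers who tagged it as believable.
items = [
    {"link": "http://example.com/report-1", "believed_by": {"sweeper_a", "sweeper_b", "sweeper_c"}},
    {"link": "http://example.com/report-1", "believed_by": {"sweeper_d"}},
    {"link": "http://example.com/rumour-x", "believed_by": {"sweeper_a"}},
]

# Toy 'veracity' score: number of distinct sweepers endorsing each link across all items.
endorsements = defaultdict(set)
for item in items:
    endorsements[item["link"]] |= item["believed_by"]

for link, sweepers in sorted(endorsements.items(), key=lambda kv: -len(kv[1])):
    print(len(sweepers), link)   # report-1 scores 4, rumour-x scores 1
```

A real system would obviously need to weight sweepers by track record, which is exactly the reputation question I come back to below.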

It’s a fascinating idea and a project that I will be watching. While Ushahidi is designed to crowdsource information and reports from people, Swift River is designed to ‘crowdsource the filter’ for reports across the many networks on the internet. For those of you interested, the project code is made available under the open-source MIT Licence.

One of the things that I really like about this project is that it’s drawing on talent and ideas from around the world, including some dynamic people I’ve had the good fortune to meet. Last year, when I was back in the US for the elections, I met Dave Troy of Twittervision fame, who helped develop Twitter Vote Report, an application to crowdsource reports of voting problems during the US elections. The project gained a lot of support, including from MTV’s Rock the Vote and National Public Radio. He has released the code for the Twitter Vote Report application on GitHub.

To help organise the Swift River project for Ushahidi, they have enlisted African tech investor Jon Gosier of Appfrica Labs in Uganda. Appfrica Labs is loosely modelled on Paul Graham’s Y Combinator. I interviewed Jon Gosier at TEDGlobal in Oxford this summer about a mobile phone search service in Uganda. He’s a Senior TED Fellow.

There are a lot of very interesting elements in this project. First off, they have highlighted a major issue with crowdsourced reporting: Current filters and methods of verification struggle as the amount of information increases. The issue is especially problematic in the chaotic hours after an event like the attacks in Mumbai.

I’m curious to see if there is a reputation system built into it. As they say, this works based on the participation of experts and non-experts. How do you gauge the expertise of a sweeper? And I don’t mean to imply as a journalist that I think that journalists are ‘experts’ by default. For instance, I know a lot about US politics but consider myself a novice when it comes to British politics.

It’s great to see people tackling these thorny issues and testing them in real world situations. I wonder if this type of filtering can also be used to surface and filter information for ongoing news stories and not just crises and breaking news. Filters are increasingly important as the volume of information increases. Building better filters is a noble and much needed task.


Instapaper: Managing your ‘To Read’ list

I have this dreadfully bad habit of leaving lots of tabs open in my browser. Since the day Firefox introduced tabs, they have been my default way of “managing” large numbers of articles that I want to read. Whether someone sent me a link by email or IM, or I spotted something on Twitter, I’d open it up in a tab, glance at the headline and think, “Oh, I’ll read that later.” Then it would sit in my browser for weeks, sometimes months, whilst I did other stuff.

When Firefox grew to 60+ open tabs it would become a bit of a resource pig and, more often than not, crash horribly, sometimes taking down the rest of the OS with it. I’d be forced to restart my Mac, and when Firefox reopened I would feel compelled to reopen the 60 tabs that had caused it to crash in the first place. Sometimes I’d copy all the URLs into a separate document and start afresh with an empty browser. I almost never go back to this list of URLs (which now goes back to 10th August 2006!).

I recently discovered Instapaper and now my workflow has totally changed. Instead of leaving tabs open, I open the article I want to read, save it to Instapaper, and close the tab. I can then read it later, either in my browser or in the Instapaper iPhone app. Once I’m done, I can archive the link, or I can share it on Tumblr, Twitter, Feedly, Google Reader, Facebook or via email. Instapaper also plays very nicely with Tweetie on the iPhone, so I can save links direct from my phone without having to star the Tweet and open it on my Mac later. The only thing I miss at the moment is that I can’t save links to Delicious, which is my current link storage facility.

It’s not often that an app revolutionises my reading in this way. RSS did it, years back. (If you’re curious, I use NetNewsWire which syncs to Google Reader and thence with Reeder on the iPhone – a fab combination.) But nothing has come close to changing how I consume non-RSS content until now.

The great thing is that I don’t feel the need to read everything that passes into view, but have a much more streamlined way of saving the link and assessing it later. And because Instapaper on the iPhone works offline, I can use some of that wasted time spent sitting on underground trains to flip through my articles. Win!

Just how gullible is the media?

Rather like our own Starsuckers, wherein the British media are shown not to give a fig about whether stories are true or not, Hungry Beast, a show on Australia’s ABC, recently put together their own hoax.


I don’t know if this shows that the media is gullible, or whether it just proves that they don’t care whether what they print is true. If the former were true, we might stand a chance of turning things around. I think the latter is more on the money, which makes it a much more intractable problem.


Metrics, Part 1: The webstats legacy

Probably the hardest part of any social media project, whether it’s internal or external, is figuring out whether or not the project has been a success. In the early days of social media, I worked with a lot of clients who were more interested in experimenting than in quantifying the results of their projects. That’s incredibly freeing in one sense, but we are (or should be) moving beyond the ‘flinging mud at the walls to see what sticks’ stage into the ‘knowing how much sticks’ stage.

Social media metrics, though, are a bit of a disaster zone. Anyone can come up with a set of statistics, create impressive-sounding jargon for them and pull a meaningless analysis out of their arse to ‘explain’ the numbers. Particularly in marketing, there’s a lot of hogwash spoken about ‘social media metrics’.

This is the legacy of the dot.com era in a couple of ways. Firstly, the boom days of the dot.com era attracted a lot of snakeoil salesmen. After the crash, businesses, now sceptical about the internet, demanded proof that a site really was doing well. They wanted cold, hard numbers.

Sysadmins were able to pull together statistics direct from the webserver and the age of ‘hits’ was born. For a time, back there in the bubble, people talked about getting millions of hits on their website as if it was something impressive. Those of us who paid attention to how these stats were gathered knew that ‘hits’ meant ‘files downloaded by the browser’, and that stuffing your website full of transparent gifs would artificially bump up your hits. Any fool could get a million hits – you just needed a web page with a million transparent gifs on it and one page load.
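
For the curious, here is a minimal sketch (with made-up Apache-style log lines) of why ‘hits’ inflate so easily: every file the browser requests gets counted, so a single page stuffed with images, transparent gifs included, generates many hits but only one page view.

```python
import re

# Hypothetical access log lines: one page load that also pulls in three images.
log_lines = [
    '1.2.3.4 - - [16/Dec/2009:10:00:00 +0000] "GET /index.html HTTP/1.1" 200 5120',
    '1.2.3.4 - - [16/Dec/2009:10:00:01 +0000] "GET /spacer1.gif HTTP/1.1" 200 43',
    '1.2.3.4 - - [16/Dec/2009:10:00:01 +0000] "GET /spacer2.gif HTTP/1.1" 200 43',
    '1.2.3.4 - - [16/Dec/2009:10:00:01 +0000] "GET /logo.png HTTP/1.1" 200 9000',
]

request_re = re.compile(r'"GET (\S+) HTTP')

# Extract the requested path from each log line.
paths = []
for line in log_lines:
    match = request_re.search(line)
    if match:
        paths.append(match.group(1))

hits = len(paths)                                          # every file requested
page_views = sum(1 for p in paths if p.endswith(".html"))  # only actual pages

print(f"hits: {hits}, page views: {page_views}")  # hits: 4, page views: 1
```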

This led to the second legacy: an obsession with really big numbers. You see it everywhere, from news sites talking about how many ‘unique users’ they get in comparison to their competitors to internal projects measuring success by how many people visit their wiki or blogs. It’s understandable, this cultural obsession with telephone-number-length stats, but it’s often pointless. You may have tens of thousands of people coming to your product blog, but if they all think it’s crap you haven’t actually made any progress. You may have 60% of your staff visiting your internal wiki, but if they’re not participating they aren’t going to benefit from it.

Web stats have become more sophisticated since the 90s, but not by much. Google Analytics now provides bounce rates and absolute unique visitors and all sorts of stats for the numerically obsessed. Deep down, we all know these are the same sorts of stats that we were looking at ten years ago but with prettier graphs.

And just like then, different statistics packages give you different numbers. Server logs, for example, have always provided numbers that were orders of magnitude higher than a service like StatCounter, which relies on you pasting some Javascript code into your web pages or blog. Even amongst external analytics services there can be wild variation. A comparison of StatCounter and Google Analytics shows that numbers for the same site can be radically different.

Who, exactly, is right? Is Google undercounting? StatCounter overcounting? Your web server overcounting by a factor of 10? Do you even know what they are counting? Most people do not know how their statistics are gathered. Javascript counters, for example, can undercount because they rely on the visitor enabling Javascript in their browser. Many mobile browsers will not show up at all because they are not able to run Javascript. (I note that the iPhone, iPod Touch and Android do show up, but I doubt that they represent the majority of mobile browsers.)

Equally, server logs tend to overcount, not just because they’ll count every damn thing, whether it’s a bot, a spider or a hit from a browser, but also because they’ll count everything on the server, not just the pages with Javascript code on them. To some extent, different sorts of traffic will be distinguished by the analytics software that processes the logs, but there’s no way round the fact that you’re getting stats for every page, not just the ones you’re interested in. Comparing my server stats to my StatCounter numbers shows the former is 7 times the latter. (In the past, I’ve had sites where it’s been more than a factor of ten.)
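
As a rough illustration of where part of that gap comes from, here is a sketch with invented user-agent strings and a crude keyword list, the sort of bot filtering a log-processing analytics package might attempt (real packages use far more sophisticated rules):

```python
# Hypothetical user-agent strings pulled from a server log for a single page.
user_agents = [
    "Mozilla/5.0 (Windows; U; Windows NT 5.1) Firefox/3.5",
    "Googlebot/2.1 (+http://www.google.com/bot.html)",
    "msnbot/2.0b (+http://search.msn.com/msnbot.htm)",
    "Mozilla/5.0 (iPhone; U; CPU iPhone OS 3_0) Safari/528.16",
    "Yahoo! Slurp",
]

BOT_MARKERS = ("bot", "spider", "slurp", "crawler")

def looks_like_bot(user_agent: str) -> bool:
    """Very crude heuristic: flag anything whose user agent mentions a known crawler keyword."""
    ua = user_agent.lower()
    return any(marker in ua for marker in BOT_MARKERS)

raw_requests = len(user_agents)                                  # what the raw log reports
human_requests = sum(1 for ua in user_agents if not looks_like_bot(ua))

print(f"raw log requests: {raw_requests}, probably human: {human_requests}")  # 5 vs 2
```

Even with filtering like this, the server still logs pages that carry no Javascript counter at all, so the two sets of numbers are counting genuinely different things.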

So, you have lots of big numbers and pretty graphs but no idea what is being counted and no real clue what the numbers mean. How on earth, then, can you judge a project a success if all you have to go on are numbers? Just because you could dial a phone with your total visitor count for the month and reach an obscure island in the Pacific doesn’t mean that you have hit the jackpot. It could equally mean that lots of people swung past to point and laugh at your awful site.

And that’s just web stats. Social media stats are even worse, riddled with the very snakeoil that web stats were supposed to counter. But more on that another day.

links for 2009-12-12

  • Kevin: A fascinating infographic showing the use of various web 2.0/social web services. The one quick thing to see on this map is how popular photo sharing is, universally so. Social networking is also very popular around the world. Microblogging and blogging show a wide variation in use around the world. One thing that is really interesting is how popular social media is in Asia compared to Europe. For instance, 60% of China's internet users upload photos but only 38% of British users. Some 46% of Chinese internet users have blogged but only 8.4% of British users. Wow. That's huge.
  • Kevin: My colleague Mercedes Bunz has a great interview with media consultant Gary Hayes on how social media services such as Twitter are now being used effectively to reflect the community that builds around television shows. This is a great point by Hayes: "Most broadcasters and programme-makers are really missing a trick in not having a presence in the real-time discussion that surrounds "their" show – they don't need to control the conversation, they just need to be a voice of "the creator" or represent the production." The same could be said for news organisations. It's not about control but about showing up.