TV Un-Festival: Chris Jackson – A community effort to improve metadata

I’ve been MCing the TV Un-Festival all day, and it’s been fun so far. Right now they are recording a podcast, which I’m not going to blog because at some point you’ll be able to listen to it yourself. Meantime, here’s a short burst of blog posts that I’ve put together throughout the day for your entertainment. (Note: There was no official schedule, so if I’ve misspelt names, please accept my apologies.)

Chris Jackson – A community effort to improve metadata
Chris is a freelance broadcast tech and strategy consultant, a geek at heart, with ideas for things that are more community-based than big-company-based. At the TV Festival [of which this is the fringe event] he was hearing about Joost; nothing said there would surprise anyone in this room, but it was news to the TV people. Big disconnect between us here and them there – they don’t know much about tech, but they do know about audiences.

The technically elegant way that, say, torrents work doesn’t make sense for the audience.

Two ways to watch TV – either watch what’s on, or dereference a pointer, i.e. look something up and make sure you are there. BitTorrent is not that simple for people to use; it’s not something that works well after a long day. How can we make that process easier? That would turn it from looking through a long list of sites to find the torrent into something that’s as simple as turning it on and seeing.

Would love to see:
– Permanent URLs
– A list of locations for individual programmes – whether TV schedule, BitTorrent or iPlayer – with data on what DRM is on each copy and what format it’s in (see the sketch after this list).
– Wants that info to be flexibly improvable, so that if a broadcaster wants to say “I have the definitive information”, the record references the canonical source.
– Wants the metadata to be simple, and standardised.
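
To make that concrete, here’s a minimal sketch of what one such programme record might look like – purely illustrative, with every field name and URL invented, following no existing standard:

```python
# A hypothetical programme record: one permanent URL, plus a list of the
# places you can get the programme, each annotated with format and DRM.
# All names and URLs here are invented for illustration.
programme = {
    "id": "http://example.org/programme/heroes/s01e01",  # permanent URL
    "title": "Heroes",
    "episode": "Genesis",
    "locations": [
        {"service": "broadcast", "channel": "BBC Two",
         "start": "2007-08-25T21:00:00+01:00", "format": "DVB-T", "drm": None},
        {"service": "iplayer",
         "url": "http://example.org/iplayer/heroes-s01e01",
         "format": "WMV", "drm": "WMDRM"},
        {"service": "bittorrent",
         "url": "http://example.org/torrents/heroes-s01e01",
         "format": "XviD", "drm": None},
    ],
    # If a broadcaster says "I have the definitive information", the
    # record points at their canonical copy of the metadata.
    "canonical": "http://example.org/bbc/heroes/s01e01",
}
```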

TV Anytime is comprehensive, but difficult to use.

Broadcasters should, ideally, be providing comprehensive information. But some broadcasters have multiple unique identifiers – the BBC, for example, has three for each programme. And a broadcaster might give you the metadata, but would never tell you where the torrent was. The community could step in and do this.

Need to:
– create a standard extensible format
– with an API
– data licensed liberally
– crowd-sourced improvements

If this data were better, we could make better clients that could give you all the official locations, times, etc., but would also give you all the other locations, and tie them together with a single URL. So people who have seen a programme could send a URL to someone, who could then choose how they wanted to watch it, whether on BitTorrent, iPlayer or old-fashioned TV.

Would be interesting then to gather information on how people like to access programmes, so you could see whether they prefer broadcast TV, iPlayer or BitTorrent.

Risk with current systems is that you only ever get, say, the link to the RSS feed of Heroes.

Q: Broadcasters don’t see it as in their interests, because the first thing that people do is tag where the adverts are and cut them out, and broadcasters don’t want to do anything that makes that easier. From our point of view, an extra person who watches it is an extra viewer, but they see it as a person that they couldn’t make money from.

CJ: Agree, but can do all sorts of other things.

Q: But this is the same as the Freeview programme scheduler.

CJ: What I’m saying is, why don’t we take that info, plus the torrent sites, and iPlayer, and put it all together.

Q: The BBC say “It’s illegal to do this”, but they have never prosecuted and never will prosecute – yet it’s still illegal. The problem is that it’s technically possible, and no one has ever been prosecuted, so until the broadcasters have a day in court and see whether it really is illegal, no system will have any support from the BBC or any other broadcaster. EPG data is copyrighted, and sharing a programme onto a torrent is illegal, yet no one has been prosecuted. PACT, who represent non-BBC producers, say “This is our content, so the BBC can only show it once and that’s all they can do”. We all have a right to record and store on VHS, but transfer it over the net and PACT say it’s illegal. So it’s not a technical problem, it’s a legal one.

CJ: But there’s a distinction between content and metadata. My understanding is that you can republish the BBC metadata if it’s non-commercial, and Bleb.tv have only been threatened by ITV.

Q: There are all these legal arguments, so why do we have to bring them together as a service, which creates a legal target for litigation? How about a client that pulls together different sources and presents them, differentiating the sources, and lets people choose?

CJ: Yes, we shouldn’t keep it all in one place, but we should have a standard.

Q: So what we need is a common identifier for each programme.

CJ: Or multiple identifiers that are cross-linked. But yes, the identifier.

Q: So you could do it the barcode way – there isn’t a single global organisation that organises barcodes – so that would be an easily distributable system.

CJ: I presume the names are URLs. But there are a whole bunch of existing systems, and we should be able to make it better. TV Anytime has programme groups (series), programmes, segments of programmes, and programme locations (like a URI). If the data format addressed these types of ID (possibly excepting programme segments), we should be able to take the URI, use it for a reverse look-up to get the metadata, and then pass around the URL that describes a specific programme, so that others can use that URL to find the programme itself. It’s not the only way of doing it, but it doesn’t seem to need permission, or to modify streams, etc. If we did this it might help the broadcasters change their minds.
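
As a rough sketch of that reverse look-up flow – URI in, metadata record out, locations from the record – here’s how it might hang together. The registry and everything in it are invented, and real TV Anytime identifiers differ in detail:

```python
# A sketch of the reverse look-up: a programme URL is the key into a
# metadata registry, and the record it returns lists every known location.
# A plain dict stands in for what would really be a web service.

REGISTRY = {
    "http://example.org/programme/heroes/s01e01": {
        "title": "Heroes: Genesis",
        "group": "http://example.org/group/heroes-series-1",  # the series
        "locations": [
            ("broadcast", "BBC Two, 2007-08-25 21:00"),
            ("iplayer", "http://example.org/iplayer/heroes-s01e01"),
            ("bittorrent", "http://example.org/torrents/heroes-s01e01"),
        ],
    },
}

def lookup(programme_url):
    """Reverse look-up: URL in, metadata record out (None if unknown)."""
    return REGISTRY.get(programme_url)

# Someone sends you the URL; you choose how you want to watch.
record = lookup("http://example.org/programme/heroes/s01e01")
if record:
    for service, where in record["locations"]:
        print(service, "->", where)
```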

Q: Are there parallels with the music industry and iTunes? Do we instinctively favour solutions that are too complex?

CJ: This is like an equivalent of MusicBrainz, but with links to all the places you can get the programme, not just a link to one source – Amazon in the case of MusicBrainz.

New, new uses, or new to you?

A few weeks ago, I blogged some thoughts about innovation inspired by the close of The Economist’s Project Red Stripe, to which Jeff Jarvis responded. Jeff’s post was interesting, as were the comments, but one in particular from Malcolm Thomson stood out:

John Robinson says rightly “A protected group from within can come up with innovation, but unless they require no money or commitment, then they have to go before some decision-making person or body.”

But ‘unless they require no money…’ is of significance. Now that the tools of video journalism are so incredibly cheap, now that tuition with regard to the essential skills is so accessible (CurrentTV’s tutorials, etc.), the reporting/storytelling innovators must surely already exist in growing numbers.

Many months ago, I collaborated on a project looking at the future of retail. I’d been asked to take part in two discussion sessions by the company writing the report, and four of us sat around a big whiteboard thinking about trends in retail, and what the future might hold 5, 10 and 15 years out.

Our main conclusion was that the final recipients of this report, a global company who wanted to be prepared for the future, were woefully unequipped to even make the most of the present. Many of the most basic things that you’d expect such a company to do online were not being done and it was clear that, given the culture of the organisation, they were not likely to get done any time soon. It wasn’t so much that they weren’t Web 2.0, more that they hadn’t even made it as far as Web 1.0 yet.

Much of the media – and other sectors too – struggle to understand the developments of the last 5–10 years, and find it difficult to work existing technologies into their business, even when there are clear benefits to doing so. But it’s not like things are actually changing that quickly, especially if you stay on top of developments. As Tom Coates said about the broadband vs. TV ‘debate’ last year (his italics):

These changes are happening, they’re definitely happening, but they’re happening at a reasonable, comprehendible pace. There are opportunities, of course, and you have to be fast to be the first mover, but you don’t die if you’re not the first mover – you only die if you don’t adapt.

My sense of these media organisations that use this argument of incredibly rapid technology change is that they’re screaming that they’re being pursued by a snail and yet they cannot get away! ‘The snail! The snail!’, they cry. ‘How can we possibly escape!?’ The problem being that the snail’s been moving closer for the last twenty years one way or another and they just weren’t paying attention.

When businesses talk about innovation, they frequently mean “new” in the sense of “brand, spanking, no-one-has-ever-done-this-before new” or “first mover new”. Because they see the landscape as changing at an alarming rate, and they see innovation with the same blank-paper fear as the blocked writer, the whole thing becomes terrifying. Add to that the fact that they do not have a good solid grip on the state of the art as it is now, and you end up with a group of petrified execs standing on the brink of a chasm they fear is too wide and too deep to risk jumping, because the only outcome they can see is crash and burn.

Another type of innovation is the “new use” – taking tools that someone else has created and using them in an innovative way. How do you use all this Web 2.0 stuff that people are creating all the time and work it into your business? How does it bring value to your audience? What symbiotic relationships can you nurture that will enable you to do something different? This is the sort of innovation that I think the media needs to focus on.

Some are trying very hard to do this, some are just paying lip service, but many aren’t trying at all. Comments are a great example of a relatively new technology – they’ve only been around for a few years – which the press have embraced en masse, but entirely failed to use effectively. The point of comments is that they allow writers to have a conversation with their readers, and stories to continue to be developed post-publication, yet in the majority of cases comment functionality is slapped on to the bottom of every article – regardless of whether that article would benefit from comments – and readers are left to fight it out by themselves. Little of worth is added to the articles, the publisher’s brand, or the commenters’ lives.

Creating a boxing ring online is not an innovative way of using comment technology, it is obvious, old-school, and short-sighted. It’s creating conflict to sell newspapers, increase hits or get more viewers for your TV slug fest.

Equally, using video to replicate television is like using Thrust to do the shopping – it makes no sense and is a massive waste of money. There are plenty of big hitters already doing TV rather well, and in an era of 24-hour rolling news, the last thing that we need is to replicate that online. Rather, the media should be using online video to do things that TV cannot do, to get places TV cannot go, to examine issues with the sort of depth and nuance that 24-hour rolling news couldn’t manage if their very lives depended upon it, to tell the stories that TV has no time for.

Where are these media outlets – newspapers or otherwise – who can honestly say that they are using even just comments and video truly innovatively? In so many cases I see new-school technologies used in old-school ways that transform them from groundbreaking to mundane. One case in point was Ben Hammersley’s BBC project about the Turkish elections. Yes, he was using Del.icio.us and Flickr, and he was blogging and using RSS, but with a distinctly old-school flavour that robbed the tools of their own potential.

A pneumatic nail gun can put nails through steel girders, but if all you do with it is build a garden shed, you might as well have used a hammer.

Finally, technology may not be new, but if it’s “new to you”, it can have real value. It used to be just blogs that provided an RSS feed, but then the tech press started using RSS, and now it has become standard across the majority of major news sites – no one sensible is without it. Other outlets might be using blogs or Del.icio.us or wikis, but that shouldn’t stop you from assessing how best you can use these tools yourselves.

But businesses are inherently neo-phobic, and this has resulted in the Great Race to be Second: the burning desire of companies everywhere to watch what others do and see if it succeeds before they follow suit. Neo-phobia also leads companies into a state of group-think, where they use technology only in the same ways that they’ve seen other people use it. RSS is another fabulous example of this – news outlets will only provide a headline-and-excerpt news feed, rather than a full feed, because they are scared that if people can read their content in their aggregator, they will not visit the site, and if they don’t visit the site then valuable page views and click-throughs are lost.
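
For anyone who hasn’t looked inside a feed, here’s a minimal sketch of the difference, built with Python’s standard library and an invented story: a partial item carries only an excerpt in its description element, while a full-text item also ships the whole article body, conventionally in content:encoded from RSS’s content module:

```python
# A minimal sketch (not any outlet's real feed) of a partial vs. a full-text
# RSS item. A partial item carries only an excerpt in <description>; a full
# item also ships the whole article body in <content:encoded>.
import xml.etree.ElementTree as ET

CONTENT_NS = "http://purl.org/rss/1.0/modules/content/"
ET.register_namespace("content", CONTENT_NS)

def feed_item(title, link, excerpt, body=None):
    """Build an <item>; include the full body only if one is supplied."""
    item = ET.Element("item")
    ET.SubElement(item, "title").text = title
    ET.SubElement(item, "link").text = link
    ET.SubElement(item, "description").text = excerpt  # excerpt only
    if body is not None:
        # Full feed: readers can see the whole piece in their aggregator.
        ET.SubElement(item, "{%s}encoded" % CONTENT_NS).text = body
    return item

partial = feed_item("Example story", "http://example.com/story",
                    "The first sentence or two...")
full = feed_item("Example story", "http://example.com/story",
                 "The first sentence or two...",
                 body="The whole article, links, quotes and all.")
print(ET.tostring(full, encoding="unicode"))
```

The publishers’ whole argument comes down to withholding that one extra element.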

Every now and again I see an article saying that full feeds increase click-throughs, the most recent being Techdirt, and their argument is compelling (their italics):

[I]n our experience, full text feeds actually does lead to more page views, though understanding why is a little more involved. Full text feeds makes the reading process much easier. It means it’s that much more likely that someone reads the full piece and actually understands what’s being said — which makes it much, much, much more likely that they’ll then forward it on to someone else, or blog about it themselves, or post it to Digg or Reddit or Slashdot or Fark or any other such thing — and that generates more traffic and interest and page views from new readers, who we hope subscribe to the RSS feed and become regular readers as well. The whole idea is that by making it easier and easier for anyone to read and fully grasp our content, the more likely they are to spread it via word of mouth, and that tends to lead to much greater adoption than by limiting what we give to our readers and begging them to come to our site if they want to read more than a sentence or two. So, while many people claim that partial feeds are needed to increase page views where ads are hosted, our experience has shown that full text feeds actually do a great deal to increase actual page views on the site by encouraging more usage.

But even if the assumption that partial feeds drive traffic to ads is correct, there’s still no excuse for having partial feeds, because ads in RSS have been around for ages. I don’t remember when Corante started putting ads in the RSS feed, but they’ve been doing it for a long time and I have never had a single complaint about it. I don’t know what the click-through rates are compared to the ads on the site, but I’m sure that it would be possible to experiment and find out. It is undoubtedly possible to design a study that would give you the right sort of data to compare the effectiveness of partial, full, or full-with-ads feeds, but I’ve yet to hear of one.

And therein, I think, lies the rub. We don’t always know what will happen when we introduce new technology, but instead of experimenting, the majority prefer to go along with group-think and the old-school ways. They want innovation but only as a buzzword to chuck around in meetings – the reality is just too scary. Yes, there are mavericks who get this stuff, but they are frequently hamstrung by the neo-phobes, and have to spend their time pushing through small, bite-sized changes whilst they wait for the dinosaurs to die off.

Going to the Edinburgh TV Un-Festival?

Kevin and I are off up to Edinburgh tomorrow, for the TV Un-Festival – a fringe event to the Media Guardian International TV Festival. Ian Forrester has been putting a lot of hard work into getting the Un-Festival organised, no easy feat when so many other festivals and fringes are on at the same time!

So what’s an un-festival? Well, it’s like an un-conference, but more unlike a festival than it is unlike a conference. In other words, it’s a:

Day-long event which takes place on Saturday 25 August [and] will centre around the clash of the well established TV world and the constantly accelerating Internet world using the unusual un-conference format, where the cost of entry is participation.

A ton of interesting people and companies have signed up already, including the BBC, Google, BT Vision, Microsoft TV, P2P-Next, Joost, Trustedplaces.com, Mind Candy, MTV, Tapeitofftheinternet.com, Freenet, Blip.TV, Zattoo, and Licorice Film. In addition, there will be a number of darknet people coming, some of them with names you might know, like Ian Clarke, and others more secretive. Ooh, now that’s intriguing.

There are still places available, so if you want to come, sign up now! And if you’re coming, there’s a wiki for participants.

Google News, now with added comments

On Tuesday, Google quietly announced that they have added commenting functionality to Google News (US only), but with, as they put it, “a bit of a twist”:

We’ll be trying out a mechanism for publishing comments from a special subset of readers: those people or organizations who were actual participants in the story in question. Our long-term vision is that any participant will be able to send in their comments, and we’ll show them next to the articles about the story. Comments will be published in full, without any edits, but marked as “comments” so readers know it’s the individual’s perspective, rather than part of a journalist’s report.

[…] we’re hoping that by adding this feature, we can help enhance the news experience for readers, testing the hypothesis that – whether they’re penguin researchers or presidential candidates – a personal view can sometimes add a whole new dimension to the story.

Google are starting off with the very old-school tactic of asking for comments to be emailed to them, along with:

– A link to the story you are commenting on
– Your contact details: your name, title, and organization
– How we can verify your email address.

Because:

It is important that we are able to verify your identity, so please include clear instructions with your comment. If further information is needed, we will follow-up over email.

You can see a (rather dull) example of the new comments in action on this Google News link, a listing for an Arizona Republic article (syndicated from Bloomberg News), Kids: Food in McDonald’s wrappers taste better. The article says that a Stanford University study found that McDonald’s packaging makes pre-school children think that chicken nuggets, hamburgers and fries taste better.

The first comment is from Walt Riker, VP of McDonald’s corporate comms, and is the sort of predictable corporate whitewash that you’d expect from McDonald’s. The second comment is from Dr Vic Strasburger, MD, Professor of Pediatrics, University of New Mexico, who discusses the problem with allowing advertising to small children.

Google have said that they’re limiting comments initially to “actual participants in the story in question”, but they seem to mean “anyone quoted or referenced in any of the stories in a cluster”, because Dr Strasburger is not mentioned in the Arizona Republic article, but is mentioned in a related Time article. That’s potentially quite a tangle of “actual participants” to sort out, and it looks like comments that refer to a specific article will end up getting lumped in with all the other comments on all related articles. Hardly the clear, transparent, relevant use of comments that we’ve become used to with blogs.
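
A crude sketch, with invented structure, of why that’s a tangle: key comments by cluster and you can no longer tell which article a comment was answering; key them by article, blog-style, and you can:

```python
# Invented data illustrating the context problem: Google News clusters
# related articles together, and comments attach to the whole cluster,
# so a comment's link to the specific article it answered is lost.
cluster = {
    "articles": [
        "Arizona Republic: McDonald's wrappers story",
        "Time: related piece on the same Stanford study",
    ],
    "comments": [  # one flat heap for the whole cluster
        {"who": "Walt Riker, McDonald's"},
        {"who": "Dr Vic Strasburger"},  # quoted only in the Time piece
    ],
}

# Blog-style comments, by contrast, hang off the piece they respond to:
blog_comments = {
    "Time: related piece on the same Stanford study": [
        {"who": "Dr Vic Strasburger"},
    ],
}
```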

But I think that rule is deeply flawed, anyway. One of the things that frustrates me most about the media is their propensity to publish industry press releases seemingly in toto, without any balancing views. In these cases, it’s important that people not quoted in the story be able to comment in order to balance out poor or biased journalism. By only allowing the previously quoted to comment, Google News are, as the Daily Show’s Jon Stewart once said about MSM blogs, “giving a voice to the already voiced”.

I also wonder about the feasibility of scaling up full comment moderation. Google News tracks 4,500 sources and links to thousands of articles; just one comment on each is going to create a massive workload for the moderators. Even normal moderation of comments on a medium-sized media website is highly onerous, so much so that many news sites prefer the “report abuse” approach, rather than moderating each comment as it comes through. The volume of comments can be huge, and if you add in verification of the commenter’s identity, you open up a whole new can of worms.

For starters, what type of verification are they going to do? Validating that a commenter is actually a human being is the most common sort of verification, and it’s pretty easy. KittenAuth can help you with that. Validating that a person is who they say they are is slightly trickier – people do have this nasty habit of pretending to be other people, and they can be really quite good at it. How far is Google going to go to make people prove that they are who they say they are, just so that they can leave a comment on a news article?

Once your identity has been satisfactorily verified by Google, and they’ve ascertained that you were at some point mentioned by a journalist in one of the articles that’s been clustered together as related, then you get to comment. I can see the logic behind this – Google thinks that if you are commenting under your real name, you’ll somehow be more responsible and provide a higher quality of comments. Sorry Google, it doesn’t work like that. Businesses will simply spin their corporate line, just like McDonald’s did, and individuals will still be capable of showing horrible lapses of judgement over what they think is suitable for public consumption. Putting a name against a comment doesn’t guarantee that that comment will be high-quality, or even factually correct. It just means it’s got a name against it.

I wonder if they are going to moderate the content of the comments too? I should imagine they would have to. If comments are to be “published in full, without any edits”, then they will have to delete anything which could be seen as libellous, defamatory or obscene, because otherwise they are at risk of legal action. What about comments that are just a bit sweary? I guess they’ll go too.

However, I’d bet good money that factually inaccurate, ill-conceived or woo-woo-based comments will get published without a problem, along with all the whitewash, propaganda, hype, disinformation and spin. And because, of course, only “participants” are eligible to comment, no one else will be able to debunk the nonsense that gets published.

Overall, I can’t say that I’m impressed by this – it’s both too messy and too orderly. It’s too messy because comments on different articles will all be bundled up in one heap and attached to the news cluster, thwarting any attempt to understand the real context within which the comment was made. And too orderly because only the incumbents get to take part – they are given the opportunity to make and remake their points, but the wider community doesn’t. Will this breed a good debate? I doubt it very much indeed.

Where’s your innovation?

This is a post I’ve been meaning to write for ages, but Neil McIntosh’s post about the closure of The Economist‘s skunk works, Project Red Stripe, has finally prodded me into action.

Project Red Stripe was a small team of six Economist employees who were given £100,000 and asked to “develop something that is innovative and web-based and bring it to market” within six months. They brought in outside experts to talk to the group and solicited ideas, from Economist readers and the wider blogosphere, which they then “evaluate[d …] against a set of criteria that the Project Red Stripe team have predetermined”.

Unfortunately, the idea that they came up with wasn’t really one that The Economist could see a way to earn any money out of. Project Lughenjo was described as:

[A] web service that harnesses the collective intelligence of The Economist Group’s community, enabling them to contribute their skills and knowledge to international and local development organisations. These business minds will help find solutions to the world’s most important development problems.

It will be a global platform that helps to offset the brain drain, by making expertise flow back into the developing world. We’ve codenamed the service “Lughenjo”, a Tuvetan word meaning gift.

Announced only four weeks ago, it has now had the plug pulled.

Neil, in his response to this turn of events, rightly questions whether ‘profitable’ is the only definition of success, and points out that innovation isn’t always radical, and that a single innovation’s success can be judged not by its own performance in isolation but by its position within a group of innovative components that are profitable only in the aggregate. He says:

The lessons for news organisations? We needn’t make innovation hard by insisting the end product is always huge and/or high-profile. We shouldn’t think that innovation is something that can be outsourced, either to a small team or to a software vendor (the latter being a surprisingly popular choice for many newspaper publishers).

And we needn’t necessarily worry that we’re not having enough ideas. If you ask around, you’ll probably find it’s not ideas we’re lacking. What’s tricky (I know – this is my job) is capturing the best ideas, mapping them to strategic goals, and delivering them in a way that makes them successful.

To do that, you need innovators who understand the importance of baby steps and can deliver them, one after the other, regular as clockwork. And, unlike Red Stripe, you can make their life easier by making sure they’re not locked away from the rest of the business, worrying about a blank sheet of paper and a mighty expectation from the mother ship that, somehow, they’ll be able to see the future from there.

Neil also links to Jeff Jarvis, who says:

[T]hey ended up, I think, not so much with a business but with a way to improve the world. Their idea, “Lughenjo,” was described in PaidContent as “a community connecting Economist with non-governmental organizations needing help – ‘a Facebook for the Economist Group’s audience.’ ” It wasn’t intended to be fully altruistic; they thought there was a business here in advertising to these people, maybe. But still, it was about helping the world. And therein lies the danger.

I saw this same phenomenon in action when, as a dry run for my entrepreneurial course, I asked my students at the end of last term what they would do with a few million dollars to create something new in journalism. Many of them came up with ways to improve the world: giving away PCs to the other side of the digital divide, for example. Fine. But then the money’s gone and there’s not a new journalist product to carry on.

This gives me hope for the essential character of mankind: Give smart people play money and they’ll use it to improve the lots of others. Mind you, I’m all for improving the world. We all should give it a try.

But we also need to improve the lot of journalism. And one crucial way we’re going to do that is to create new, successful, ongoing businesses that maintain and grow journalism. We need profit to do that.

A very good point. Altruism isn’t really what’s needed, and it doesn’t necessarily equate to innovation (although in rare cases, it does – think of the $100 laptop project).

It’s not just newspapers
One thing that’s really important to remember is that the problems The Economist has with innovation also face many other businesses in many different sectors. I see, for example, the PR industry storing up trouble for itself in the way that it has segmented itself into different agency types such as creative, print, TV, or online. I don’t think that any company can afford to segment its PR and marketing like that, let alone an entire industry. How can a situation where your creative team is separate from your online team – and those teams are run by different companies – be a good way to keep abreast of technology, or to understand and grasp the opportunities? If a creative agency has an idea for online, how will they be able to implement it if online is run by someone else who is actually a competitor? Now, maybe I’m misunderstanding the way that the PR world works, but that’s how it looks to me from the outside: like built-in failure.


Thinking about procrastination, possibly too much

I’m reading a book at the moment called Stumbling on Happiness, by Daniel Gilbert (thanks to Derek Sivers for giving me his copy), which takes a look at how our brains remember the past, make sense of the present, and imagine the future. Our brains get this sort of thing wrong quite a bit, which means we end up being rather bad at predicting what will make us happy (or unhappy), and how strong our feelings of happiness (or unhappiness) will be. I’m not finished with it yet, so I may be missing out a key point, but I think that’s the gist.

One thing really leapt out at me, on pages 18–19:

Forestalling pleasure is an inventive technique for getting double the juice from half the fruit. Indeed, some events are more pleasurable to imagine than to experience (most of us can recall an instance in which we made love with a desirable partner or ate a wickedly rich dessert, only to find that the act was better contemplated than consummated), and in these cases people may decide to delay the event forever. For instance, volunteers in one study were asked to imagine themselves requesting a date with a person on whom they had a major crush, and those who had had the most elaborate and delicious fantasies about approaching their heartthrob were least likely to do so over the next few months.

Is this perhaps a part of what procrastination is about? Some of the tasks that I put off longest are the ones that I have thought about in detail and which I have built up in my mind into some sort of behemoth. It’s not necessarily that I think they are difficult or complex, but I have thought – or even fantasised, if you like – about them a lot. The fantasy might not necessarily have been ‘delicious’, but it would definitely have been elaborate even if the task itself wasn’t.

Of course, standard advice is to break tasks down into small, bite-sized portions – next actions that you can complete quickly and easily. Indeed, this is what many productivity and ‘to do list’ tools do – they allow you to break your tasks into small bits, and keep track of each one so that when you’ve done one bit you can move on to the next.

But perhaps that might end up being counterproductive, at least if you put too much thought into it. If imagining doing something provides more pleasure than actually doing it, then it will seem preferable to delay doing it forever. At times like that, tasks that feel easier will be more attractive, such as reading RSS feeds, Twitter, IM or email (but not necessarily replying to the email – that itself can be something in which we’ve invested way too much imagination and which therefore takes on gargantuan proportions).

I think it’s important to know what you have to do, and to be careful to prioritise well, so that you know both intellectually and emotionally that you’re doing what you need to be doing, or that you’ve done what you needed to have done. But I suspect that thinking about it too much, e.g. having your To Do list open in front of you all the time, might just turn out to be the straw that breaks the camel’s back.

One way to think less about what there is to do is to use Marc Andreessen’s approach:

On another topic, the tactic of “each night, write down the 3 to 5 things you need to do the next day” has struck some people as too simplistic.

That may be the case for some people, but I can’t tell you how many times I’ve arrived home at night and am at a loss as to what I actually got done that day, despite the fact that I worked all day.

And I also can’t tell you how often I’ve had a huge, highly-structured todo list in front of me with 100 things on it and I stare at it and am paralyzed into inaction (or, more likely, structured procrastination).

So a day when I get 3 to 5 concrete, actionable things done in addition to all the other stuff one has to do to get through the day — well, that’s a good day.

Writing down tonight what you have to do tomorrow gives you a good night’s sleep (hopefully!) during which you can forget about the longer To Do list, and clear your mind. I personally find that I get a lot more done when I can forget about everything else and just focus on what’s important.

But given the background noise of all the stuff that needs to be done but which isn’t important enough (yet) to claim my full attention, it’s very easy to feel like I’m drowning in a flood of equal-priority tasks, until something genuinely important pops up and I can focus on that. If there isn’t something genuinely important, I search for it by repeatedly checking email and, if that doesn’t reveal anything pressing, I focus instead on easy but pointless things like Twitter or IM.

Instead of searching for that big thing, I shall pick five lower-priority things and just ruthlessly ignore everything else until those five things are done. If something urgent comes along, I shall deal with it, but I won’t search for it. I shan’t think too hard about all the other things that I have to do, and will attempt to stop myself fantasising about doing them or having them done, lest I end up ‘complixificating’ them to a point of self-paralysis.

Note to clients: Obviously, I’m very efficient when working on client projects. Money is a great motivator.

New health fears over big surge in misleading and irresponsible science reporting

As soon as I saw the news that Dr Andrew Wakefield, the doctor who first alleged that there was a link between the MMR (measles, mumps and rubella) vaccine and autism, was to be brought before the General Medical Council on charges of professional misconduct, I knew that there’d be a media feeding frenzy. Despite lots of evidence that the MMR vaccine is safe and a distinct lack of evidence that there is any link between MMR and autism, journalists from every corner of the media insist on writing stories that lead the public to believe quite the opposite.

As the misconduct story broke, I saw stories on both ITV’s morning show GMTV and on the BBC which managed to paint Wakefield as some sort of misunderstood hero and imply both that the link between MMR and autism was real, and that the ‘establishment’ was working to deliberately mislead the public. Both broadcasters used the same ‘reporting’ tactic: interviewing the parents of autistic children (along with the autistic children themselves and, on GMTV, their non-autistic older brother), giving them the opportunity to promulgate their beliefs for five minutes, whilst a GP was given two or three sentences in which to respond. The last word, on GMTV at least, was given to the parents.

The pieces were incredibly biased, pitting beliefs against evidence, with the presenter clearly coming down on the side of the parents and, to all intents and purposes, dismissing the evidence and views of the medical experts out of hand.

This, by itself, is appalling. Beliefs are not evidence. Nor is suffering. No matter how much sympathy I have for children and adults with autism, symptoms by themselves are not evidence of the cause of those symptoms. And the fact that people are suffering these symptoms should not be interpreted as proof that studies finding no link between MMR and autism are ipso facto wrong. Believing things does not make them true – science is not some sort of Secret where the power of the mind can change reality.

What is true is that the media have exploited the beliefs of those who are suffering, and in doing so have denigrated the work of many respectable, honourable and diligent scientists in order to create outrage, because outrage sells. They have portrayed the flawed work of a minority of doctors – now charged with acting unethically and dishonestly – as David to the rest of the medical world’s Goliath, purely so that they can profit from covering the manufactured conflict.

Things got even worse on the 8th July when The Observer’s Denis Campbell wrote an article entitled “New health fears over big surge in autism”. The original article has been removed from The Observer website (i.e. Guardian Unlimited), so if you click that link all you’ll get is a 404 page, but the whole thing has been posted in the comments of Ben Goldacre’s blog, Bad Science. The chances are that the article has been pulled for legal reasons, but I’m getting ahead of myself.


Social Tools for Business Use

I’m at Dave Gurteen’s conference today, talking about social software and Web 2.0. Like Dave’s last social tools conference, it has been attended by some really interesting people, and I’ve had some great conversations during the breaks, which is always fun.

No live blogging today, but then, I don’t really need to because they are recording both audio and video, and the audio at least is up online already on the Focusbiz website. They have a number of pre-conference interviews up, including one with me, as well as the audio from the sessions – mine was called ‘The Beauty of Web 2.0: Tools that get out of your way’. Do have a listen and let me know what you think.

Yahoo! Photos to close and delete photos

The news comes, via Thomas Vander Wal, that Yahoo! Photos is to close. I’m not a Yahoo! Photos user myself, but I think that this decision is wrong-headed and ill-conceived, in so many ways.

Thomas’ post is dated 7 July, and the email he had from Yahoo! Photos informs him that the service will close on 20 September 2007, at 9pm PDT. Assuming that Thomas didn’t miss an earlier email, that’s a little over two months’ notice – is that really enough time to notify all your users that you’re closing a service? Thomas says:

many of the people I know and run across that use Yahoo Photos rely on Yahoo Photos to always be there. They are often infrequent users. They like and love the service because it is relatively easy to use and “will always be there”. Many real people I know (you know the 95 percent of the people who do not live their life on the web) visit Yahoo Photos once or twice a year as it is where holiday, travel, or family reunion photos are stored. It would seem that this user base would need more than a year’s notice to get valuable notification that their digital heirlooms are going to be gone, toast, destroyed, etc. in a few short months.

I think it’s rather optimistic to think that everyone who’s going to be affected by this will find out in time to take action.

But let’s dig a little deeper, and go beyond the looming deadline to take a look at Yahoo! Photos’ help pages concerning the closure.

Yahoo! is giving people three “options if [they] want to keep [their] photos”. (I find the language here more than a little alarming as to me it implies that the default view is that people won’t want to keep their photos, and I’d bet money on that not being true.)

1. You can move your photos to Flickr, Kodak Gallery, Shutterfly, Snapfish or Photobucket. You can only move your photos to one service, and once they’ve been moved, options 2 and 3 become unavailable to you.

2. You can download your photos, but you can only download them one at a time. There’s no bulk download, so if you have a lot of photos you’re in for a tedious ride. Again, Yahoo!’s underlying assumption seems to be that people aren’t interested in keeping all their photos: “for many of you it won’t take much time to download your favorites”, as if your favourites are the only photos that matter.

3. You can buy an archive CD, but only if you’re a New Yahoo! Photos user. Yahoo! have partnered with Englaze to offer a price of $6.95 for 700MB of photos. Why not use DVDs, I wonder? They’ll take a lot more data than a CD, and surely the aim here is to help users, not screw them? Although old Yahoo! Photos users have to either download one by one or move services, so maybe screwing users isn’t that big of a deal for them.

You can choose all three options, if you qualify for the CD of course, and if you have few enough photos that downloading them one by one doesn’t cause you to tear your hair out.

Digging deeper into Yahoo!’s help pages causes further concern. Maybe this is just me being a bit sensitive to language, but if I were a Yahoo! Photos user, I’d want to know exactly what this means:

How long do I have to make a decision about what to do with my photos?

You will have until September 20, 2007 at 9 p.m. PDT to make a decision about your photos.

Of course, we encourage you to decide sooner rather than later, to avoid the last-minute rush. All users who choose to move to another service will be added to the queue for that service. So the sooner you make the decision, the sooner you’ll have access to your photos at their new home.

“Added to the queue”? How long is it going to take people to have their photos moved over? And what happens if you do get stuck in the “last-minute rush”? Oh, wait, we get that answer over on another help page:

Be patient…the move can take several days or even weeks depending upon how many other users are in front of you in the queue.

I’m getting the feeling that this is going to be a sub-par experience for anyone moving their photos.

But hey, it’s ok, because Yahoo! get to blame the other services for any delays:

How long will it take you to transfer my photos to another service?

The move itself should not take long at all, it depends more upon the number of users ahead of you in the queue to be moved.

After you’ve opted to move to another service, you’ll be added to a move queue managed by that service. The queue will be managed on a first come, first served basis. When they get to your Yahoo! Photos account, they will copy your original resolution photos into the account you identified on their service and send you an email when the move is complete.

Although if it all goes wrong – and goshdarn, data transfer never goes wrong, right? – Yahoo! will be there to sort it all out. Or not.

What can I do if I have issues with transferring my photos or my transfer fails?

Each of these services should be able to successfully transfer all your photos and will be responsible for all issues once that transfer occurs. So if you encounter issues with your new account you should contact them directly.

But if you’ve received emails that some of your photos failed to make the move or that the service was unable to move your photo collection, then it’s likely due to more complicated data issues with your account. Any failures that are specific to a user’s account will be reported to Yahoo!

In these cases, the best alternative may be to download your favorites or purchase an archive CD (for users of the New Yahoo! Photos only).

I am presuming the lock that Yahoo! will put on your account once transfer has been initiated will be lifted if transfer fails, because if not, how will users be able to download their “favourites” or buy an archive CD? Of course, I’ve presumed before and been wrong.

Finally, if you’ve been using any of your Yahoo! Photos in any other Yahoo! products, then you need to know that:

Yahoo! Photos features in these services will all be going away soon, which means your photos will no longer be accessible from these services. And your photos will definitely not be available from these other services (or anywhere else on the Web for that matter) after Yahoo! Photos closes and all remaining photos are deleted and no longer accessible.

Oh dear god. They’ve really buried the lead here. Let’s just read that again, with some emphasis added:

Yahoo! Photos features in these services will all be going away soon, which means your photos will no longer be accessible from these services. And your photos will definitely not be available from these other services (or anywhere else on the Web for that matter) after Yahoo! Photos closes and *all remaining photos are deleted* and no longer accessible.

This was my big unanswered question. What will happen to the photos that haven’t been transferred before 20 Sept 2007? Answer: They will be deleted. Yes, that’s right, you’ve got two months to get your stuff, and then it’s toast.

This is absolutely astonishing. Users’ stuff should be sacred – giving people just over two months to find out that their photos are going to be deleted is absurd. As Thomas said, people put their trust in companies like Yahoo!, who’ve been around for years, to still be around for years to come, and this is a massive betrayal of that trust.

If a service has to be closed – and I recognise that from time to time, that’s inevitable – then it has to be done in a thoughtful, careful way. A staged process would be the best way to deal with such an eventuality, where uploading is closed first, followed by a period during which people can download, transfer or archive their images before the site is ‘fossilised’. But there should be a lot more time in between the emails warning people and the cessation of uploading. Deleting people’s photos should be verboten. (And that’s not just about the importance of users’ data, but also about the wider issue of causing linkrot, which is something that responsible service providers try to avoid.)
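
To spell that staged process out, here’s a minimal sketch – entirely hypothetical, not anything Yahoo! or anyone else actually runs – in which each stage permits strictly fewer actions, and deletion is never one of them:

```python
# A hypothetical staged shutdown: uploads close first, then a long grace
# period for export, then the site is 'fossilised' read-only so existing
# URLs keep working. No stage ever permits deleting users' photos.
from enum import Enum

class Stage(Enum):
    NORMAL = "normal"            # full service
    EXPORT_ONLY = "export_only"  # uploads closed; download/transfer/archive
    FOSSILISED = "fossilised"    # read-only archive; no linkrot

ALLOWED_ACTIONS = {
    Stage.NORMAL:      {"upload", "view", "download", "transfer", "archive"},
    Stage.EXPORT_ONLY: {"view", "download", "transfer", "archive"},
    Stage.FOSSILISED:  {"view"},
}

def is_allowed(stage, action):
    """True if the service permits this action at this stage of shutdown."""
    return action in ALLOWED_ACTIONS[stage]

assert not is_allowed(Stage.EXPORT_ONLY, "upload")
assert is_allowed(Stage.FOSSILISED, "view")   # old links still resolve
assert all("delete" not in acts for acts in ALLOWED_ACTIONS.values())
```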

Is there even a good reason for Yahoo! to be closing Yahoo! Photos? Yes, it’s true that they bought Flickr, but Thomas points out that these two services have different userbases:

Having similar service running allows for one to be innovative and test the waters, while keeping one a safe resource that is familiar to the many who want stability over fresh and innovative. Companies must understand these two groups of people exist and are not fully interchangeable (er, make that they are rarely interchangeable). Innovation takes experimentation and time. Once things are found to work within the groups accepting innovation the work becomes really tough with the integration and use testing with the people who are not change friendly (normally a much larger part of an organization’s base).

It would have seemed the smart move to be mindful that Flickr is the innovation platform and Photos is the stable use platform. The two groups of use are needed. Those in the perpetual beta and innovation platform are likely to jump to something new and different if the innovation gets stale. The stable platform users often are surprised and start looking to move when there is too much change.

I agree with Thomas that Yahoo! Photos and Flickr users are not interchangeable – to treat the former group as expendable is pure foolishness. It’s not like there aren’t business models to experiment with for Yahoo! Photos, so is it really necessary to close it?

Whilst this closure is at first not going to affect Yahoo!’s international users, they should get out whilst the going’s good. I see no way that Yahoo! Photos won’t be closing their international sites, so I find it absurd that they are still allowing people to sign up and upload photos to the UK site. But then, I find the whole thing absurd.