MailOnline’s Martin Clarke: “Ooh, look at the badger with the gun, everyone!”

Index on Censorship examines the question, posed by MailOnline editor Martin Clarke, of how the Leveson Inquiry should deal with the internet. But it misses the point that Clarke’s focus on the internet is simply a diversionary tactic, designed to draw attention away from press conduct and point the finger at, well, it seems, pretty much everyone who’s ever used a social tool.

Said Clarke in his evidence:

Underpinning any press regulator as a statutory body effectively gives the state the power to licence newspapers and penalise ones that either do not join the body or ignore its rules. The only way to force bloggers to sign up as well would be to give that statutory body the same power to shut down blogs. If licensing newspapers is a severe restriction on free speech, this would be positively North Korean and the subject of mass internet protest. But even if we could get a law through, is it enforceable? Are we really going to drag Guido Fawkes off to the tower like his famous namesake for not joining the PCC?

Trouble is, the Leveson Inquiry wasn’t called because of bloggers hacking phones or Twitter users flouting a superinjunction as an act of civil disobedience or the impact one Tweet from Stephen Fry can have. It was called because of widespread corruption within the media, the political establishment, and the police, amongst others. It’s about the press becoming so powerful that it could actually bully the government, and corrupt public officials and police officers. If a proper investigation were done, and Operation Motorman taken to its logical conclusion, there’s every possibility that corruption would be found elsewhere as well.

I don’t think Guido Fawkes, on the other hand, quite has the money to go round giving police officers tens of thousands of pounds in return for juicy bits of information. There’s no evidence that Twitter users were hacking into anyone’s phones and publishing salacious comments based on what they found. And whilst every now and again a Facebook user turns out to be a racist shit, there’s no evidence of press-related criminality there.

Clarke, like so many of the newspaper editors, proprietors, managers and journalists we’ve seen giving evidence, is keen to draw fire away from his own publication and refocus it somewhere else, preferably somewhere complicated. The internet makes a great new target because it is complicated, and because a new press regulator is going to have to think very carefully about how to deal with it.

But on the question of corruption, bribery, proto-blackmail, influence, graft, fraud, misconduct and criminal activity, the internet and its users are for the most part irrelevant. If you can find me a blogger or a Twitterer or a Facebook user who is guilty of media corruption, then that becomes a problem for Leveson.

In the meantime, existing laws are being used to deal with those who Tweet rape victims’ names, racially abuse others on Facebook, or write libellous blog posts. So it’s not like the internet is quite the wild west it used to be. Turns out, in fact, that however complicated jurisdiction may be, there is jurisdiction.

The internet is, in short, not a badger and it does not have a gun. Sorry, Martin.

The truth does not lie midway between right and wrong

There’s a habit amongst journalists of acting as if there’s a continuum between opposing viewpoints, and that the truth must therefore lie somewhere roughly in the middle, especially on health, science and certain tech stories. We saw it before with the reporting on the now disgraced ‘scientist’ Andrew Wakefield and his thoroughly debunked claims that MMR causes autism. And we’ve seen it regularly since.

Now the House of Commons science and technology committee has examined homeopathy provision on the NHS and has concluded that evidence shows homeopathy works no better than placebo and that the NHS should not provide or recommend it. The media seems to have decided that solid science is one end of a continuum of truths with homeopaths at the other end, and that it’s their job to shilly-shally around in the middle and to present both sides in a ‘fair and balanced’ manner. To which I call bullshit.

Science isn’t about the balance of opinions but the balance of evidence. Evidence is bigger than any one person or research institute: it’s the findings of experiments that can be consistently repeated by anyone, anywhere with the right knowledge and equipment. When the evidence stacks up in favour of one theory, then that’s the theory that we must hold as true until/unless reliable and repeatable experiments lead us to refine or change it.

And that’s the thing. The reliable and repeatable experiments show that homeopathy performs no better than a placebo. Yet journalists seem intent on portraying this story as “MPs say one thing, homeopaths say something else, and who knows who’s right?!”. The Guardian, for example, uses a lot of fightin’ words (my bold):

To true believers, including Prince Charles, homeopathy is an age-old form of treatment for a wide range of ills. To most scientists, it is nothing more than water. Today the sniping between the devotees and the denialists became a head-on collision, as the House of Commons science and technology committee challenged the government to live by its evidence-based principles and withdraw all NHS funding from homeopathic treatment. …

…the money could be better spent, said the committee, accusing the Department of Health of failing to abide by the principle that its policies should be evidence-based. …

The Prince’s Foundation for Integrated Health countered the MPs’ attack by citing a peer-reviewed scientific study in the International Journal of Oncology which, it said, proved that homeopathic remedies were biologically active. …

But this isn’t a fight. It’s not seconds out, round one. Evidence points overwhelmingly to the conclusion that homeopathy doesn’t work.

The Guardian, along with many other news outlets, also gives considerable weight to pro-homeopathy voices, as if all opinions are equal and this is a debate. Ben Goldacre is collecting examples over on Bad Science. The BBC, for example, comes in for a lot of criticism in Ben’s comments:

fgrunta said,

I just saw this story break on BBC News. They brought on a Homeopath GP who just went and told I don’t know how many millions of viewers that the “evidence is clear” that homeopathy works and she then proceeded to start quote papers.

Grrr….

And:

ALondoner said,

An excellent report, nice to see that MPs can sit down, review the evidence and then say something intelligent.

On the other hand, The BBC (and some other news outlets) seem to be so obsessed with giving each side of the story, they make it sound like there is reasonable evidence for both points of view.

When someone is found guilty of a crime, journalists doesn’t put guilty in quotation marks. Nor do they pick a self appointed expert to rant about why that person was actually not guilty. So why doesn’t the BBC simply report that supporters of homoeopathy say it works, but all independent reviews shows that it does not.

Instead, we get “many people – both patients and experts – say it is a valid treatment and does work”, without at least caveating that with “but all systematic reviews show it is no better than placebo” and explaining who these “experts” are. Experts in giving homeopathy perhaps, but are they experts in telling whether it works better than placebo?

Just sent a few comments to the BBC via their well hidden complaints website:

https://www.bbc.co.uk/complaints/forms

The problem is, this is not a debate. The evidence is in: homeopathy doesn’t work. Perpetuating the myth that ‘remedies’ which amount to nothing more than sugar pills, or water that’s been shaken up a bit, are effective is potentially harmful. In fact, people die because they are convinced that homeopathy will work and so don’t seek proper medical attention. The media is complicit in those deaths because it helps to keep the myth of homeopathy alive.

What I don’t understand is why journalists feel the need to create this false dichotomy in the first place. When astronomers discover a new planet orbiting a distant star, journalists don’t start looking for dissenting astrologers. When palaeontologists discover a new dinosaur, journalists don’t seek out creationists or intelligent design advocates to say that it’s all just a big trick by God. Why is it that in other fields they feel at liberty to talk utter hogwash and to ignore solid evidence?

This isn’t a science problem, or a science communications problem: this is a serious journalistic problem. This is journalists imposing a frame onto the story that is utterly inappropriate. It leads to a misrepresentation of the evidence and does a serious disservice to everyone who reads these stories and takes them at face value.

There is always some doubt in science, but this does not mean that science is unreliable or that opposing views are equally valid. In homeopathy, the level of doubt is very, very low, so low in fact that I feel perfectly happy saying “homeopathy doesn’t work”, because that’s the conclusion the evidence has supported time and time again.

Other scientific theories have more doubt, and there we do need to be careful to be clear about what level of confidence we should have. But even in those stories, this doesn’t mean we need to give equal weight to for and against: we just need to be clear about how tentative or firm the science is.

And again, let me reiterate: This is important not just from a journalistic integrity point of view, but because misinformation kills. Actual people actually die. They actually get ill, actually fail to get the right treatment, and actually suffer because of it. Any action on the part of journalists that encourages people to believe in provably ineffective treatments is unethical. I just wish more journalists thought through what they are writing when covering stories like MMR and homeopathy.

John Mair demonstrates how to really not get it

I’m sure everyone’s fed up of the Jan Moir debacle that’s been occupying the UK Twittersphere for the last week, but I was made rather cross by this ill-judged and misinformed article by John Mair on Journalism.co.uk yesterday.

For those of you blessed enough not to have heard about the Jan Moir/Daily Mail controversy, suffice it to say that she wrote a hateful and homophobic article about Boyzone singer Stephen Gately, who died of a previously undiagnosed heart condition. Moir’s piece caused uproar amongst the online community, particularly on Twitter, causing some advertisers to remove their ads from the page and forcing Moir to apologise (in a manner of speaking). There have since been acres of print and pixel devoted to unpicking it all.

One such piece, by John Mair, a senior lecturer in broadcasting at Coventry University, makes a number of mistakes that I think are themselves worth unpicking.

Mair’s first mistake is to say that “blogosphere went mad seeking revenge”. Lots of people were very cross with Moir’s piece, but to dehumanise people’s reactions by lumping them all together as “the blogosphere” and then to trivialise the reaction as “going mad” and “seeking revenge” is to mischaracterise the entire episode. It implies that everyone who reacted to Moir’s piece somehow lost their sense of proportion and overreacted in a little moment of insanity. This is rather insulting – people were justifiably cross with Moir and the Mail and, whilst people were vociferous, to characterise them as seeking revenge is hyperbolic.

Mair’s second mistake is in his second paragraph, where he implies that celeb-Twitterers Stephen Fry and Derren Brown organised the protests on Twitter and Facebook. That’s also not true – this wasn’t a crowd baying for blood, led onwards by the Twitter elite. Stephen and Derren were, like everyone else, reacting to a rapidly spreading meme. There was no movement and they did not organise anything. They just helped the meme along. (It’s important to note that memes are like ocean waves – they don’t move the water itself, they move through the water.)

A little later on, Mair asks, “So how democratic are these manifestations of the virtual mob?”.

Ok, so what exactly is “democracy”? The dictionary on my Mac says:

democracy |diˈmäkrəsē|
noun ( pl. -cies)
a system of government by the whole population or all the eligible members of a state, typically through elected representatives : capitalism and democracy are ascendant in the third world.
• a state governed in such a way : a multiparty democracy.
• control of an organization or group by the majority of its members : the intended extension of industrial democracy.
• the practice or principles of social equality : demands for greater democracy.

Looking at that list, none of those really apply to the phenomenon we observed. There was no organisation and no group ergo no members, unless – and I think this is where Mair gets confused – unless you label the people who complained, post hoc, as a de facto group that must therefore have organisers. That’s a rationalisation that doesn’t hold water – anger with Moir spread through Twitter organically: as one person Tweeted their disgust, others found out about the article and then expressed their own feelings. There was nothing orchestrated about it and the concept of ‘democracy’ cannot and should not be applied. A spontaneous expression of a shared opinion is not a democracy.

What about “mob”?

mob |mäb|
noun
a large crowd of people, esp. one that is disorderly and intent on causing trouble or violence : a mob of protesters.
• (usu. the Mob) the Mafia or a similar criminal organization.
• ( the mob) the ordinary people : the age-old fear that the mob may organize to destroy the last vestiges of civilized life.

Was there a mob? There certainly were a large number of people involved, but were they a crowd? Were they grouped together in one spot and intent on causing trouble or violence? I think it would be stretching the definition of ‘mob’ too far to use it to describe the people upset by Moir’s homophobia.

Mair then tells us that the internet is a double-edged sword, something which is undoubtedly true, although it is more accurate to describe the internet as neutral – neither good nor bad, and therefore capable of being used for good or bad. But the tone of his assertion implies that actually, he thinks the internet is baaaaad.

Now we get to the meat of the wrongness of this piece. Mair compares the expression of disgust at Moir with the hounding of Jonathan Ross and Russell Brand.

It can lead to interactivity and enrichment but it can also lead to bullying by keystroke. The zenith of that was the Jonathan Ross/Russell Brand row in the autumn of 2008 but nowadays broadcasters, especially the BBC, are facing ‘crowd pressure’ from internet groups set up for or against a cause or a programme; they are an internet ‘flash mob’. With the emphasis, maybe, on the ‘mob’.

When Jonathan Ross and Russell Brand rang up the veteran actor Andrew Sachs on October 18 2008 and were disgustingly obscene to him about his grand-daughter, that led to a huge public row on ‘taste,’ mainly stoked by the Daily Mail and the Mail on Sunday.

Fuel was added to the fire through comments by the Prime Minister. The ‘prosecuting’ virtual group was the editorial staff of the Mail newspapers and its millions of readers in Middle England. In support of the ‘Naughty Two’, more than 85,000 people joined Facebook support groups. Many, perhaps most, had never heard the ‘offensive’ programme. Just two had complained after the first broadcast.

The BBC was forced after a public caning to back down, the director-general yanked back from a family holiday to publicly apologise, Brand and his controller resigned and Ross was suspended from radio and television for three months. The virtual mob smelt blood: it got it.

The Ross/Brand incident bears no resemblance to the Moir incident. Ross & Brand’s stupidity would have gone unnoticed by the vast majority of people had the Daily Mail and the Mail on Sunday (and a variety of other newspapers) not brought it to their attention and demanded that ‘something be done’ – that something, of course, being complaints to the BBC.

There was no “‘crowd pressure’ from internet groups” nor was there any sort of “internet ‘flash mob'”. There was only pressure brought to bear by the tabloids via the medium of the internet. The protest was not grass roots, it was orchestrated (oh the irony!) by the Mail and Mail on Sunday. Mair knows this, as he explicitly states it, yet still he uses this example as illustrative of the awfulness of the internet and the propensity of internet users to mobbish behaviour. Sorry, Mair, I call bullshit.

Mair then goes on to cite another irrelevant example, the protests over Jerry Springer: The Opera:

Fifty five thousand Christians petitioned the BBC to pull it from the schedules because of its profanity and alleged blasphemy. They engaged in modern guerilla warfare tactics to try to achieve their aim. Senior BBC executives had to change their home phone numbers to avoid that pressure. That campaign did not get a ‘result’. If Facebook had been in full flow then, the 55,000 may well have been 555,000 and the result very different.

The offended Christians were, again, organised. And again, it was not a spontaneous outpouring of dissatisfaction. They did not use “modern guerilla warfare tactics”, they used the communications tools open to them at the time, just like everyone else does. They didn’t succeed in getting the opera pulled, perhaps because the BBC felt that, in this case, the claims of offence were out of proportion. Would they have been successful had they been able to use Facebook? I would hope not, but the BBC’s spine does go through soft phases.

Mair concludes with:

This is activism by the click. It needs no commitment apart from signing up on a computer. It gives the illusion of democracy and belonging to a movement whereas in reality is it membership of a mob, albeit a virtual one? Is this healthy for democracy and media accountability or not?

Here Mair lays his biases bare. He may as well have said, “I just don’t like the whole idea of the audience having opinions and having a way to express those opinions. The fact that lots of people seemed to agree – quite independently – about how awful Jan Moir’s article was puts the fear of god up me, because suddenly I am accountable not just to my paymasters, but to my audience. Directly. And who’s going to protect me when these scary people with opinions come knocking at my door? Wasn’t it so much nicer in the old days, when the audience couldn’t answer back?”

Groups of people on the internet who all express a similar opinion are not de facto mobs. Expressing an opinion can be a part of democracy, but democracy is not simply the expression of opinion.

Mair’s piece is risible. He fails to understand Twitter, sees this as an opportunity to demonise the internet and draws false comparisons between unrelated incidents. Frankly, the media’s buggered if this is the prevalent attitude in our universities.

The plural of anecdote is not data

The City, and sections of the media, are getting a touch overexcited by a “research note” written for Morgan Stanley by Matthew Robson, a 15-year-old on work experience. The Guardian said:

The US investment bank’s European media analysts asked Matthew Robson, an intern from a London school, to write a report on teenagers’ likes and dislikes, which made the Financial Times’ front page today.

His report, that dismissed Twitter and described online advertising as pointless, proved to be “one of the clearest and most thought-provoking insights we have seen – so we published it”, said Edward Hill-Wood, executive director of Morgan Stanley’s European media team.

“We’ve had dozens and dozens of fund managers, and several CEOs, e-mailing and calling all day.” He said the note had generated five or six times more responses than the team’s usual research.

The research note itself can be read on The Guardian’s site.

I’m going to start by giving Robson the kudos that he deserves. He has written a very well thought out piece which describes the media habits of him and his friends. In no way do I want to criticise a teenager for being thoughtful, engaged and articulate.

But one has to put this research note into context: This is one teen describing his experience. It is not a reliable description of all teens’ attitudes and behaviours, yet both Morgan Stanley and the media seem to be treating it as if Robson has Spoken The One Great Truth. “Twitter is not for teens, Morgan Stanley told by 15-year-old expert” coos The Guardian. “Note by ‘teenage scribbler’ causes sensation” says the FT in astonishment.

Neither Morgan Stanley nor the media seem to be able to tell the difference between anecdote and data. This “research note” is more note than research, and it should not be taken to be representative of all teens. A teenager in a rural setting, or in an inner city estate, or one who feels socially excluded from web culture will have a very different experience than a teen who’s well-connected enough to get himself an internship at Morgan Stanley.

What is worrying about this is not Robson’s note: He’s simply doing what most teens (and most adults) do, which is to extrapolate from his own and his friends’ experience to form generalisations about the world around him. It’s a very human thing to do, but the important thing about businesses like Morgan Stanley, and the journalists who write about them, is that they are supposed to be able to tell the difference between data and generalisations. Yet they don’t seem able to sort the wheat from the chaff. It seems yet another symptom of the group-think in the media and financial sector that led to the Great Recession, rather than an indication that we have learned anything from it.

Sarah Perez on ReadWriteWeb says:

Matthew Robson, a 15-year-old intern at analyst firm Morgan Stanley recently helped compile a report about teenage media habits. Overnight, his findings have become a sensation…which goes to show that people are either obsessed with what “the kids” are into or there’s a distinctive lack of research being done on this demographics’ media use. Robson’s report isn’t even based on any sort of statistical analysis, just good ol’ fashioned teenage honesty. And what was it that he said to cause all this attention? Only that teens aren’t into traditional media (think TV, radio, newspapers) and yet they’re eschewing some new media, too, including sites like Twitter.

Well, research has been done. danah boyd has done some excellent research into the use of the web by teens – it’s her speciality and she’s one of the foremost experts in this area. Her research would help Morgan Stanley understand the teen demographic much more clearly than any single anecdote, however well written, ever can. The fact that they haven’t ever had a clear insight into the teen demographic would seem to imply that their existing researchers and analysts aren’t doing their jobs properly. The information is out there, a lot of it is freely available, and all that remains is for someone to read it and write the report.

This story also feeds into the concept of the ‘digital native’ which, as I’ve blogged before, is a very poor way to talk about a very diverse section of the population. But because this report fits in with widely-held assumptions about teens and technology – not only does it describe ‘digital natives’, it’s written by one too – it’s immediately accepted without query or question. Morgan Stanley and the media both seem to be more interested in having their biases validated than they are in exploring the evidence to see where it leads them. Sadly, it seems that neither have been spending enough time watching CSI and drawing from it key lessons about assumptions, evidence and how to draw conclusions.

If I had been Matthew Robson’s boss at Morgan Stanley, on receiving his report I would have praised him on his good work and then asked him to look for evidence to either support or refute his points. That would have been an interesting exercise for Robson, and would have led to a research note that actually had some research in it. Instead, Morgan Stanley seem to have taken his work as gospel. I wonder why. Perhaps it was because they thought that, as a 15 year old, he’s privy to the inner workings of mysterious teen minds, a High Priest in the Digital Native Mythology?

If I relied on Morgan Stanley for anything, I’d be rather concerned right now regarding their lack of critical thinking.

The nature of work – visible, invisible, and that doesn’t look like work

As I mentioned in my last post, Proxies for productivity, and why no one trusts teleworkers, I think one of the big problems facing businesses right now is that they do not understand what work is, and what it isn’t. I outlined the four most common proxies for productivity that I’ve noticed at play in the businesses I have observed:

  • Number of emails received
  • Amount of time spent in meetings
  • Length of the work day
  • Distance travelled and jetlag suffered

Now this is not to say that email, meetings, long days and travel aren’t sometimes needed, or don’t form an important part of what work is in the knowledge economy. A small number of emails are important; meetings can occasionally be very productive, not just from the point of view of making decisions but also for the high-value relationship building that can only be done face-to-face; sometimes long days can be not just necessary but also productive; and every now and again you really do need to get on that plane.

I’m keen not to throw the baby out with the bath water, but to make the point that whilst sometimes these activities are genuinely important, mostly they are not. When they have become goals in and of themselves, instead of a means to achieve a goal, they have shifted from being useful tools to proxies for productivity.

Think about the playground marbles champion, who holds his position primarily because he’s managed to win, buy, steal or otherwise acquire a very large collection of marbles, rather than because he’s actually good at playing the game. People who believe that they are working hard because they get lots of email, do lots of meetings, always work long hours and travel a lot have done nothing more than fill a very large bag full of marbles.

So if all of this activity, this busy-ness, is only rarely actual work, what is work? For a couple of years now, I’ve been in the habit of thinking of work as falling into two categories, one easy to define, the other a lot less so.

Visible Work
This is all of the stuff that other people can see you doing. Obviously, the proxy activities fall into this category – if they weren’t very clearly visible to your peers and your managers, they would be no use as proxies. Document writing, coding, designing, phone calls, conferences, presentations… the list is almost infinitely extensible.

These are things that easily answer the question, “What is Alice doing?” They are the knowledge economy equivalents of manufacturing industry work: behaviours that result in something, whether tangible or digital, that is easily described.

Invisible Work
One of the big problems with working in a knowledge job is that much of your work is done in your head. There is no way to embody what goes on in your brain, no matter how important it is in helping you to attain your goals. Indeed, a lot of what knowledge workers do is very creative, and creativity needs to be fed. That means knowledge workers can often end up doing things that, to the uninitiated, look like anything except work. Talking to colleagues around the water cooler, gazing off into the middle distance, getting up from your desk to go sit somewhere quiet… thinking.

When I worked as a web designer for PwC, back before the Great Crash, the head of our studio and our lead designer both recognised the importance of invisible work (although I doubt they conceptualised it like that). We were encouraged to spend time fiddling about with new ideas, we were taken on days out to the Science Museum for inspiration, we could talk to each other and do whatever we needed in order be creative.

But despite the fact that thinking is an essential part of knowledge work (it wouldn’t be knowledge work if it didn’t involve thinking, it’d just be… information work or data work), we give people very little time to pause, reflect, and consider their actions. It’s all go go go, all about the visible work. Because consideration looks far too much like inaction from the outside: the real work is going on inside your skull, and short of hooking everyone up to brain scanners, there’s no real external sign that anything at all is happening in there.

So the knowledge worker either has to find a way to feign work in order to get a moment to think, or has to do it on their own time, mulling things over on the commute to work or under the shower. The deep, intense conversations that spark a revelation have to happen at lunch, or down the pub, or not at all, because “chatting” is skiving. (Unless, of course, it’s scheduled in the diary in which case it could be a meeting… but then your brain falls into meeting mode and, after years and years of bad experiences in meeting rooms, your creativity slinks off to a corner and quietly dies.)

Now, after a couple of years of thinking about this and watching what goes on around me, I want to add a new category to the list:

Work That Doesn’t Look Like Work
The internet has had a very bad rap over the last ten years. One person I know tells the story of how he used to do research for his job using internet tools, primarily a browser and Skype, but started to notice a chill in the work atmosphere. When he asked a colleague what was going on, she replied “Well, we see you using a browser, and… well… we only use the internet for booking holidays and buying stuff on eBay, so we assume you’re doing the same thing.”

People – peers and managers alike – too often equate the browser with skiving, an accusation which has never been fair. When I was a music journalist back in the late 90s, I could not have done my job without using the internet for research. It was an invaluable tool then and it’s an even more invaluable tool now. I cannot imagine how I could do my job without having the internet to provide not just information, but inspiration. Indeed, I would not want a job that cut me off from the web. It would be like undergoing a lobotomy.

Of course, businesses have had intranets – accessible only through a browser – for years, but many of them were under-utilised and so awfully designed that they provided clear visual clues that, whatever it was that you were doing on that site, it wasn’t going to be fun. (And, therefore, had to be work… oh, what a sad indictment of our attitudes.)

But now it’s hard to tell at a glance whether the blog or wiki or social bookmarking site that someone is using is business-related or not. (Even the definition of “business-related” is getting very loose and floppy, with information and insight coming from all sorts of strange places.) And given that many businesses are now using these tools internally anyway, the browser is no longer the sad second cousin of “real” office tools, but rapidly becoming The Daddy.

The question is, will attitudes keep up? Truth is, they can’t afford not to.

If companies want to survive the current economic crisis, they are going to have to start getting a handle on what “work” really is, and in particular, address some of the old misconceptions that are still prevalent about the nature of work. They need to change the way that they judge how hard someone is working, and re-evaluate their concepts of productivity. Because right now, they are engaging in strategies that are actively damaging their ability to function and, indeed, to survive in these straitened times.

Proxies for productivity, and why no one trusts teleworkers

One of the biggest challenges facing business today is understanding the cultural changes that are required to truly put our manufacturing past behind us and face up to the new knowledge economy that we find ourselves in, like it or not. Over the years I’ve had a peek inside a wide variety of companies, everything from the five-person start-up to the multinational corporation, and it’s blindingly obvious that we haven’t yet moved on from Taylorism, where managers are focused on creating efficient processes and eradicating the opportunity for error. (The wrongness of a focus on process could be a whole series of posts on its own, but I’ll let it be for now.)

Most businesses are still treating work and workers as if they were producing physical objects like spanners, and the fact that they are not actually producing anything tangible causes a serious problem when attempting to understand, let alone measure, productivity. What does it mean to be productive in a knowledge economy job? From a company perspective, there’s always the profit margin to give an overview of how well the business is doing, but on an individual basis, that doesn’t help us at all. How can we tell whether Alice’s work contributed to the bottom line? How do we know if Bob is working to the best of his capabilities or slacking off? How do we compare Carol to her co-workers, when she does something completely different to Alice and Bob?

Nature abhors a vacuum, and in the absence of any genuine measures of productivity, we create our own ways of trying to understand how well we are doing compared to our colleagues. We are social creatures for whom status is important, so when we compare our own behaviours to those around us, we look for obvious measures of success and, thence, status. Those measures are like a sort of conceptual creole, the melding of the ideas of Taylorism and the realities of the modern job to create a set of proxies for productivity that are almost universally agreed upon, despite the fact that no one knows how or when that agreement occurred.

It’s important to note that all of these proxies come with a martyrdom complex – people boast about their sacrifices, expecting to elicit both sympathy and awe from colleagues. The bigger the sacrifice, the more sympathy and awe they get, and they get caught in a self-reinforcing cycle: the bigger martyr they are, the higher status they have, so the more motivation there is for sacrificing yet more.

The Email Proxy
More emails received indicates higher status.

This is probably one of the most common and damaging proxies for productivity and it almost seems to feed off a fame-like mechanism. We all know that being famous sucks, yet celebrity is still a big draw and many people who say they would eschew a chance to be famous would really, deep down, jump at the chance if it came along. We all know that getting hundreds of emails a day sucks, yet when our inbox gets that busy we feel proud of it, as if we are making a sacrifice for the sake of our increased status.

The Meeting Proxy
More time spent in meetings indicates higher status.

People simultaneously boast about their seven-hour meeting marathon to colleagues, whilst also attempting to elicit sympathy about what a horrible day they’ve just had. Yet there is rarely any serious attempt to reduce the time spent in meetings or to avoid going to unnecessary ones. Indeed, in many cases, even people who are aware of how pointless some of their meetings are feel pressured to go anyway because they fear that their bosses will interpret their absence as “slacking off”, or because they don’t want to be excluded from any decisions that may get made in their absence. (They know that this is a proxy, but they also know that their bosses may not see it like that.)

The Time-At-Desk Proxy
A longer work day indicates higher status.

Not only do some people take a perverse pride in how long they end up staying at work, but they look down on those who do not spend (or seem to spend) an equal amount of time at their desk. Part-timers are viewed very negatively, and, indeed, the term ‘part-timer’ becomes an insult thrown at anyone who perhaps leaves early one day, or gets in late.

The Travel Proxy
More miles travelled to meetings, or more jetlag incurred, indicates higher status.

This proxy only really applies to a subsection of the workforce who have to travel for their job, but when it’s in place it’s just as powerful as any of the other proxies. Sometimes the travel is about commute time, or time spent on trains, but for some it’s really about how long you had to spend at the airport and how jetlagged you are. There’s a degree of machismo involved too, as people travel daft distances for short meetings through which they are barely awake due to the effects of exhaustion and jetlag. These experiences are perceived as demonstrating toughness and commitment, rather than the excesses they really are.

Firmly embedded
These proxies for productivity are so firmly embedded in business culture that I suspect they are used, whether consciously or not, as ways to gauge how well someone is doing and who deserves reward. Goals may be set at an annual review to help provide some sort of objective measure of how well you are doing, but can you really imagine someone who hardly ever used email, didn’t go to meetings, spent little time at their desk and rarely travelled, yet who met or exceeded all their goals, actually being popular with their boss? Anyone who behaved like that, no matter how effective they actually were, would be perceived as a slacker. And as we all know, perception is much more important than reality. That’s how real slackers get away with it – they look busy all the time, even though they achieve very little.

The irony about these proxies is that, of course, they are focused on the least productive ways you can spend your time. Email is a time sink, meetings are a waste, excessively long days decrease your productivity, and, well, who really gets all that much done on a long journey? By allowing these proxies to stand, businesses are not only encouraging their staff to make false judgements about their own and others’ productivity, they are also encouraging the very sorts of behaviour that they should be working to minimise.

This is pretty bad news for social media, which undermines these proxies by reducing email, reducing the length and frequency of meetings, allowing people to be seen to be working even when not at their desk (and potentially reducing the amount of time they need to work to get the same amount of stuff done), and reducing the need to travel. Whilst these proxies are fixed firmly in people’s minds as a measure of their own effectiveness, we’re going to have a very difficult time persuading people that it’s in their interests to adopt different and more effective ways of working.

A bigger problem, of course, is that most business leaders are in denial that there could be a problem with the culture of their organisation. One of the most dysfunctional companies I have ever come across, where decisions are arrived at seemingly at random, no one takes responsibility for those decisions, and the main mode of communication is shouting, also thinks it is the most egalitarian company out there. It’s not in the business leaders’ interests for them to examine or address the dysfunction of their business, because it’s that dysfunction that got them where they are, and keeps them there. If they suddenly had to become competent, well, that would be problematic.
Why no one trusts teleworkers
The great dream of teleworking hasn’t come true. We are not seeing companies rush to let their staff work from home, even though internet access and a phone are pretty much all that a lot of people need to do their job. I think the reason we haven’t seen a sea change in the way that we work is not because of the technology – I work from home most of the time, and even the basic tech I have on my Mac is enough for me to do my job perfectly well – but because no one trusts the teleworker.

Three of the four proxies for productivity are removed in the case of the teleworker. The whole point of working from home is that you are not at your desk in the office, are not in meetings, and are not travelling. That leaves just email as a proxy, but for most managers that’s just not enough. They have never really sat down and thought about what their team actually does on a day to day basis, never considered how that might be measured, and what those measurements might mean (if anything). Instead, the forcible removal of three proxies simply leaves an uncomfortable hole in their subconscious reckoning of how hard someone is working, which allows in the fear that they are in fact not working at all, which then makes them reluctant to allow anyone that opportunity.

Social media can do a lot to help the teleworker connect with his or her colleagues, particularly applications that support declarative working (like declarative living, but, well, at work), helping make explicit the previously implicit acts of work that make up each working day. But again, the cultural barriers are high and it will take a determined and brave leader to change their business culture enough to allow teleworkers’ managers and co-workers to fully understand and trust them.

It’s not just newspapers

Last week, Kevin wrote about Alan Mutter’s Brain Drain post on how the journalists who most get this new digital era are the people least likely to be able to effect change within their organisations, and how many of them are looking to get out of the media because they can’t see a future for themselves there. Many voices from the journalist blogging community chimed in, and Kevin does a good job of linking to some of the most prominent posts. But I have something really very, very important to say to everyone who reads Strange Attractor who isn’t in the newspaper business.

It’s not just newspapers losing their brightest talent.

I have a lot of conversations with a lot of different people from a lot of different places, and recently a theme has started to emerge. The people who most clearly understand the way that the internet and Web 2.0 are transforming business are leaving jobs that frustrate them at companies that don’t get it, and are either finding other jobs with companies that do get it or are cutting loose completely and going freelance. And I’m not alone in this observation – Dennis Howlett blogs about a conversation he had with a Barclaycard developer who was profoundly unhappy with his job because there was no opportunity to innovate:

I was struck by the profound sense of frustration experienced by this person. Geeks invent stuff. They solve problems. They love puzzles. Stifling the ability to engage in those activities is anathema. It’s like sucking out the oxygen they need with which to thrive. Any time organizations do that to anyone, productivity plummets.

It’s not just geeks, either. On more than one occasion I have been brought in to talk to a company by someone who sits in the room with me and nods vigorously (but often silently) as I speak. When they do talk, I find myself nodding vigorously as well and it becomes clear that they are on the right track, that they understand social software and the changes currently being wrought. One day, I asked one of my contacts, “Why did you bring me in when you so obviously know what you’re talking about?” The response came, “Because they won’t listen to me – maybe they will listen to you.”

These people aren’t journalists or developers; this isn’t about a particular industry or job title. These are people who have a passion for the internet, who see how useful social tools can be, who just want to make small changes that might have a big impact, but they can’t, because management won’t let them. Whether that’s via direct commandments or through an anti-change, anti-innovation, anti-technology culture that’s been fostered by them doesn’t matter – the fact is that smart, innovative people aren’t being allowed to experiment, and they’re getting so frustrated by it that they are leaving to go elsewhere.

It’s not just newspapers that need to wake up to the fact that their middle managers and CXOs just might not have the right skillset and mindset to help them survive the digital era. As far as I can tell, that problem is rife in all industries. And any business that refuses to take notice of its own talent (or even the knowledge of digital experts – who, it has to be said, may turn out not to be white, male and middle-aged, and may even come from outside your sector) is going to find itself very much at the bottom of the heap as its brightest people go off to help more open and aware companies.

FOWA07b: Robin Christopherson

The art of attractive, yet usable (accessible) sites
Many sites are very pretty, but as soon as you do anything different, things go to pieces. Attractiveness shouldn’t be fragile; it needs to be robust, so be sure that the site stays nice under a wide range of conditions.

What has accessibility got to offer usability? The DDA is a law that covers accessibility, and over 90% of sites do not comply. Sites that meet DDA standards are also easier to use for able-bodied people, not just disabled people.

Legal and General re-launched their site, spent £200k on accessibility, and found a huge upsurge in mainstream users as well – 30k extra hits at first – largely down to increased platform compatibility. People wanted the site to work on their platform, regardless.

Speciality browser called HomePage Reader: free, renders the page in a text-only view, and has a screen reader that reads the text aloud. Go to Amazon and the reader reads out all the text – which turns out to be all the text for the images. There’s no ‘skip to content’ link, so you are forced to listen to the [image …] tags, because none of the images are labelled.
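Those unlabelled images are exactly the sort of thing you can catch automatically. As a rough sketch of my own (not something from the talk), a few lines of Python using the standard library’s html.parser will list every image a screen reader would have to announce by filename:

```python
from html.parser import HTMLParser

class AltChecker(HTMLParser):
    """Collects the src of every <img> tag that has no alt attribute."""
    def __init__(self):
        super().__init__()
        self.unlabelled = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if "alt" not in attr_map:
                self.unlabelled.append(attr_map.get("src", "(no src)"))

def find_unlabelled_images(html):
    """Return the src of each image a screen reader would announce as a raw tag."""
    checker = AltChecker()
    checker.feed(html)
    return checker.unlabelled
```

Run over a product page, the resulting list is precisely the set of images that turn into meaningless [image …] noise for a screen reader user.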

Google Mail is an Ajax application, but unless you are using a screen reader you won’t know that there’s some hidden text at the top of the page that suggests screen reader users switch to the basic HTML version. Heavy use of JavaScript in a web app can be got round by providing a basic version too. Google wanted to go that extra mile and provide a completely JS-free version; you don’t have to do that, but you do have to make sure that your JS doesn’t mess with screen readers.

Google Maps: there is a text-only version which isn’t signposted – the URL isn’t publicised – but it can do everything that Google Maps can do, so you can look up florists in Melbourne, or whatever; all that functionality is there. They have implemented this parallel alternative, and there are so many platforms that could benefit from this text-only version.

Google Accounts: if you want to set up a new account, it uses a ‘captcha’ image, which obviously can’t have a useful alt tag. If you are trying to sign up with Yahoo, forget it. But Google offers two things: an audio link – though that’s really hard to hear, and he has not been able to understand the string, which is a bit self-defeating – or a link to contact Google, and they will contact you personally by email to help you set up your account. That requires manpower, but they obviously take accessibility extremely seriously, and it shows that they are going to be a force to be reckoned with – little things like this make it the default choice for people with a disability.

If you’ve got someone with a disability testing your site, and the site is optimised for them, it makes it much easier for mainstream users too.

Everyone knows about captioning, and multimedia is more important in Web 2.0, but on YouTube you can’t add captions – you have to ‘hard burn’ them into the video. Go home tonight and watch TV with your eyes closed: you can’t follow the action. Audio description tells you what the action is in a video, and it makes video followable for the vision impaired.

The vision impaired are the hardest category to cater for. A window magnifier makes the web page much, much bigger, e.g. x5. He demos inconsistent navigation, pop-ups, etc., which all make it hard for a magnification user to find their way around. Pictures of words get hard to read – they pixelate. Another reason for providing a basic text version.

The General Motors website looks lovely, but you might want to increase the text size, which is hard-coded, meaning people can’t make it bigger so that they can read it. You have to set the browser prefs to ignore the specified text size – which many people don’t know you can do – and then it breaks the navigation, with overlapping links, etc.

Vodafone did a better job, marking up all the headings and navigation, but whilst they have allowed people to change the text size, doing so corrupts the page, with text showing over other text. Colour has the same problem: if someone asks their browser to ignore specified text colours, content can totally disappear.
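The colour problem is also measurable. As an illustrative sketch of my own (not from the talk), here is the relative-luminance contrast calculation standardised in WCAG 2.0, which tells you whether a text/background colour pair is readable:

```python
def _linear(channel):
    # Convert an sRGB channel (0-255) to its linearised value, per WCAG 2.0.
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    # Weighted sum of the linearised red, green and blue channels.
    r, g, b = (_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two RGB colours, from 1:1 up to 21:1.
    WCAG 2.0 asks for at least 4.5:1 for normal body text."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)
```

Black on white scores the maximum 21:1, while pale grey on white falls well short of the 4.5:1 minimum – a quick way to catch colour schemes that will disappear for low-vision users.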

This isn’t just about websites, but also applications on the desktop. He shows some mystery-meat navigation that requires hovering over images to bring up a button a few pixels wide. It’s not available to people who can’t use a mouse, as there are no keyboard shortcuts.

Google Search: when you search, you get next-page links – 1 2 3 4 5 … – and that makes it possible for people with co-ordination issues, or problems with using a mouse, to click the links.

The Vatican website has circular navigation, and tabbing from link to link – with the default IE dotted-line highlight – reveals the most bizarre tabbing order in the world. You need to make sure that sites work and are usable from the keyboard.

Flash breaks the whole screen reader experience.

Voice recognition is there out of the box in Vista, but Flash totally ‘de-supports’ it: the software can’t read the screen, and you can’t use voice recognition to choose a link and click it.

Hearing impairment and cognitive difficulties – e.g. dyslexia, learning disabilities, language difficulties (e.g. English as a second language). With UGC there’s a lot of lack of awareness out there regarding the accessibility of the content people create. Look at a home page on MySpace and you will see some prime examples of pages that are totally overloaded with graphics and totally inaccessible. It falls on the heads of the MySpaces and Facebooks of the world to flag those requirements well, in the registration process or in the toolkit for creating pages.

Finally, there was a problem with the Olympic logo promo video, which caused epileptic fits. There are tools that assess how likely a video is to cause problems.

AbilityNet.org.uk

Abusing goodwill

I like to think that the world is based on goodwill. People are, generally speaking, nice and, by default, they will respect and help others. Certainly humans are fundamentally and inescapably social creatures that need each other on a minute-by-minute and day-to-day basis, and I think that being nice is one of the attributes that fuels the reciprocation that makes helping someone else ultimately worth it for us.

I also think that the social web is an expression of the niceness that lubricates society. All the mores that have built up around blogging and wikis and sharing and Creative Commons are based on being nice: if you quote someone’s blog, it’s being nice to credit them; Wikipedia encourages everyone to be nice to newbies; sharing anything with strangers is an act of niceness in itself; and Creative Commons licences are predicated on the idea that people will be nice and respect them.

Whilst niceness isn’t universal – there are people who aren’t nice – it is a desirable attribute, so much so that niceness is taught and enforced from birth. I doubt there’s anyone reading this who wasn’t told as a child to “be nice” or to “play nicely”. Nice is good. We need nice.

This might explain why I get so cross when I come across examples of people, or especially businesses, not playing nice. But thanks to the internet, we now get to call out companies who, whilst sticking to the letter of the law (or Creative Commons licence), are flagrantly abusing its spirit.

First up, Virgin Mobile Australia. They found a photo of two American girls on Flickr, and decided to use part of it on billboard and online ads, with the taglines “Dump your pen friend” and “Free text virgin to virgin”. Alison Chang was the girl featured, and her family is now suing, saying that the ad “caused their teenage daughter grief and humiliation”, and listing both Virgin Mobile and Creative Commons as defendants.

The photo in question was shared on Flickr using an attribution licence, meaning that technically, it could be used by any company for commercial purposes without requiring permission from the photographer (although the licence has now been changed to “all rights reserved”). But there are legal issues around this use, because, despite the liberal reuse licence that was used, Australia requires model release forms to be signed before an image can be used in an advert. The original photo is still on Flickr, as is a photo of the billboard ad.

But what really stings about this is that it’s just not nice. Whether or not the CC licence allowed for commercial reuse, what Virgin Mobile and their PR companies – Host and The Glue Society, according to the blog Duncan’s Print – did was really unpleasant. There was absolutely no reason why they couldn’t have used stock photos for any ads that needed to feature people, but instead they whipped free photos off Flickr without giving a moment’s thought to the impact it might have. And Virgin Mobile Pty Ltd.’s response is absolutely disgraceful. The AP quotes them as saying:

Virgin Mobile Pty Ltd., the Australian company, released a statement saying the use of the photo is lawful and fits with Virgin’s image.

“The images have been featured within the positive spirit of the Creative Commons Agreement, a legal framework voluntarily chosen by the photographers,” the statement said. “It allows for their photographs to be used for a variety of purposes, including commercial activities.”

The “positive spirit” of Creative Commons is about constructive reuse, and this cocky attitude that they can take someone’s image and insult them publicly in the name of advertising is repulsive. Virgin and its PR company might not have broken the letter of copyright law, but they certainly showed no thought or consideration for Alison Chang. This sort of behaviour is just not nice, and Virgin should be castigated for it.
Now, on to Jo Jo, whose story is much more straightforward. Jo Jo writes about and photographs food on her blog Eat2Love; the trouble is, journalists keep lifting her ideas – both in terms of the things that she writes about and the way that she styles the food she photographs. Whilst this has been going on, according to her, since January, the straw that broke the camel’s back was seeing photographs that looked very much like hers on the cover of Gourmet magazine. And it’s not just Gourmet. In an email to me a couple of days ago, Jo Jo names another two publications and talks of a “major” website that poached her work.

Again, the journalists, photographers and editors who are lifting ideas from Jo Jo aren’t breaking the law. You cannot copyright ideas, and I think that’s a damn good thing – otherwise nothing would ever progress – but regularly poaching someone’s ideas without ever acknowledging how heavily your work is influenced by them, or without building something original on top of their idea, isn’t a very nice thing to do. Journalists and photographers get paid for their creativity, and nicking someone else’s is a cheap shot.

I know people who would probably respond to this by saying “Well, tough – that’s how it goes when you put your stuff online for free, and you just have to suck it up,” but the sad thing is that it forces a binary decision to be made. Either Jo Jo puts up with being constantly ripped off, or she stops blogging. She decided to at the very least cut back on blogging – she’s written just two posts in the last two months, and has removed much of her archive:

90 % of the articles on this blog have been removed from view. what you are viewing are my write-ups of a few food events, and some restaurants.

I think that’s a real shame.

I have real sympathy for Jo Jo. I remember when I was a budding music journalist trying to get a commission from a very high-profile glossy music magazine. I was asked to fax them five different feature ideas, which I did. I was fobbed off by the editor with some feeble excuse as to why my ideas were no good, only to see a few months later one of them written up by someone else. Could I prove that it was my idea? No, I couldn’t, but it was distinctive enough that it pretty clearly was my idea. And that was really galling – I felt like I’d been played for a fool, and it was this sort of shitty behaviour that, along with the shitty pay, drove me away from music journalism.

Now, I think there’s a different thing going on when people release under Creative Commons, and make the choice to let others reuse their work, or when you can see a professional benefit from seeing your stuff redistributed by other people. But one of the main tenets of Creative Commons is attribution, saying where you got stuff from. When someone poaches ideas and doesn’t admit that they weren’t being original, that’s unacceptable.

The flip side is that it’s easier and easier to find out who is ripping whom off, and who’s not playing nice. Companies are going to have to learn that it’s just not worth their while being the schoolground shark that tricks the other kids out of their pocket money, because they are going to get found out. Even monkeys have a sense of what is fair play, and in the blogosphere, this innate sense is getting honed to a sharp point.

So my advice to any business intending to take advantage of all that lovely free content out there? Play nice.

New health fears over big surge in misleading and irresponsible science reporting

As soon as I saw the news that Dr Andrew Wakefield, the doctor who first alleged that there was a link between the MMR (measles, mumps and rubella) vaccine and autism, was to be brought before the General Medical Council on charges of professional misconduct, I knew that there’d be a media feeding frenzy. Despite lots of evidence that the MMR vaccine is safe and a distinct lack of evidence that there is any link between MMR and autism, journalists from every corner of the media insist on writing stories that lead the public to believe quite the opposite.

As the misconduct story broke, I saw stories on both ITV’s morning show GMTV and on the BBC, which managed to paint Wakefield as some sort of misunderstood hero and imply both that the link between MMR and autism was real, and that the ‘establishment’ was working to deliberately mislead the public. Both broadcasters used the same ‘reporting’ tactic – to interview the parents of autistic children, (along with the autistic children themselves and their non-autistic older brother, on GMTV), giving them the opportunity to promulgate their beliefs for five minutes, whilst a GP was given two or three sentences in which to respond. The last word, on GMTV at least, was given to the parents.

The pieces were incredibly biased, pitting beliefs against evidence, with the presenter clearly coming down on the side of the parents and, to all intents and purposes, dismissing the evidence and views of the medical experts out of hand.

This, by itself, is appalling. Beliefs are not evidence. Nor is suffering. No matter how much sympathy I have for children and adults with autism, symptoms by themselves are not evidence of the cause of those symptoms. And the fact that people are suffering these symptoms should not be interpreted as proof that studies finding no link between MMR and autism are ipso facto wrong. Believing things does not make them true – science is not some sort of Secret where the power of the mind can change reality.

What is true is that the media have exploited the beliefs of those who are suffering, and in doing so have denigrated the work of many respectable, honourable and diligent scientists in order to create outrage, because outrage sells. They have portrayed the flawed work of a minority of doctors – now charged with acting unethically and dishonestly – as David to the rest of the medical world’s Goliath, purely so that they can profit from covering the manufactured conflict.

Things got even worse on the 8th July when The Observer’s Denis Campbell wrote an article entitled “New health fears over big surge in autism”. The original article has been removed from The Observer website (i.e. Guardian Unlimited), so if you click that link all you’ll get is a 404 page, but the whole thing has been posted in the comments of Ben Goldacre’s blog, Bad Science. The chances are that the article has been pulled for legal reasons, but I’m getting ahead of myself.
