After Wikileaks, how do we empower those in government who support transparency?

Suw and I have watched with some concern as the battle over Wikileaks has played out. For a time, both supporters and critics seemed to lose perspective about what is a very complicated and nuanced story. Hyperbole and a complete lack of context in the coverage were sadly all too common. As someone who has covered technology and security issues for some time, I find the lack of a sense of history about the story shocking.

History is important. Many of the debates that Wikileaks has brought to the attention of the broader public have been going on for much of the past 15 years. Debates about internet governance, internet security, resiliency and censorship didn’t start with the recent release of documents and war logs by Wikileaks. To see these difficult issues trivialised and bled of nuance in the shouting match going on between pro- and anti-Wikileaks commenters is deeply troubling, because making grey things black and white tends to lead to bad public policy.

Let me add some history from my own work. I covered the story of the release of secret files by former MI6 agent Richard Tomlinson. I worked with my colleague Paul Reynolds and tracked the documents allegedly containing the identities of MI6 agents as they quickly moved across the internet after their initial release.

I’ll quote my former colleague Chris Nuttall from this 1999 piece:

Former MI6 intelligence officer, Richard Tomlinson, who has threatened to publish state secrets on the World Wide Web, says the Internet spells the end for the world’s intelligence services.

His prediction came in an e-mail interview with BBC News Online. “I think the Net will eventually make intelligence agencies defunct as there will be a lot less secrets around the world that they can steal,” he said.

The British government tried to shut down the website where the names were published saying that it was putting the lives of its agents at risk. Even if they had, it was too late. Mirrors of the information were set up almost immediately, much faster in fact than in the Wikileaks case. Indeed, this case raised many of the same issues that Wikileaks has.

The issue of distributed denial of service (DDoS) attacks being used for political means isn’t all that new. I mean, come on, Twitter was supposedly the target of a politically motivated DDoS attack last year. Hacktivism isn’t new. I wrote about waves of site defacements and other attacks stemming from the Israel-Palestine conflict and after the collision of a US spy plane and a Chinese fighter in 2001. Is it a danger of the 24-hour news cycle that history is wiped clean in every morning news meeting? Seriously, we will have no chance of tackling the issues our societies face if, in the pursuit of the newness of news, we immediately forget our past. Wikileaks is a pretty logical extension of events over the history of the internet and a reaction to reflexive secrecy by governments around the world.

The history of US government transparency reformers

Without going over territory that has been well covered, it’s safe to say that Suw and I defend our right to remain conflicted about Wikileaks. (Suw says that she had wanted to write a blog post then, but shied away because of the abuse she saw meted out to anyone who expressed doubts about Wikileaks.) Fortunately, beginning a couple of weeks ago, cooler heads started to prevail and sort through some of the thorny issues and the competing values the case has raised. It is our hope that Wikileaks will lead to a mature discussion about government transparency. Clay Shirky makes a lot of very valid points when he says that competing democratic values are butting up against each other in this case. As he says, in the short haul, Wikileaks probably operates as a needed corrective to government secrecy. However:

Over the long haul, we will need new checks and balances for newly increased transparency — Wikileaks shouldn’t be able to operate as a law unto itself any more than the US should be able to.

I followed the Personal Democracy Forum’s event looking at Wikileaks from afar, and this comment from journalist and internet activist Rebecca MacKinnon points to the long haul:

We need to think strategically about how to empower those in government who support transparency.

One of the things lost in the ahistorical coverage of Wikileaks has been the recognition of those who have dedicated their lives to increasing transparency and decreasing the secrecy of the US government. The standout exception to this has been National Public Radio’s excellent programme, On the Media, produced by WNYC in New York. They have covered the complexities of Wikileaks with great nuance and intelligence, interviewing people with a range of views on the subject.

For instance, they have interviewed Steven Aftergood of the Federation of American Scientists, a source I often called on when I was based in Washington. He has worked for years (joining FAS in 1989) to declassify information from the US government:

“In 1997, Mr. Aftergood was the plaintiff in a Freedom of Information Act lawsuit against the Central Intelligence Agency which led to the declassification and publication of the total intelligence budget ($26.6 billion in 1997) for the first time in fifty years. In 2006, he won a FOIA lawsuit against the National Reconnaissance Office for release of unclassified budget records.”

He laid out his conflicted views about Wikileaks in a blog post and in an interview with On the Media. He said:

I would also say that in the U.S., the political process is still flexible enough that it is possible to put forward an argument for a change in policy and to see that change put into practice. We’ve seen more than a billion pages of historically valuable records declassified since 1995.

He has come in for a lot of criticism for being conflicted and critical of some aspects of Wikileaks’ work from those who brook no criticism of the organisation.

I also commend this recent interview with Tom Devine of the Government Accountability Project, who has been working for 31 years for legislation to protect corporate and government whistleblowers in the US. He talks about a number of cases where people have put their careers on the line to uncover waste, fraud and abuse. At the end of the interview, he thanks the interviewer for being interested.

To me, this is what Rebecca means when she says we, journalists and citizens in general, should work strategically to empower those in government working for greater transparency, as well as those organisations in our societies doing the same. These battles for reform aren’t nearly as sexy as the Wikileaks story, but over the long haul they are crucial to our democratic societies.

Al Jazeera Unplugged: Kaiser Kuo on China

This is a live blog. It may contain grammatical errors, but I tried to be as true to the essence of the comments as possible.

Google’s announcement in January that it would shut down its Chinese search engine rather than continue to submit to censorship created a lot of column inches about foreign businesses operating in China and also about cybersecurity.

Kaiser believes that focusing on censorship and The Great Firewall in China is actually crippling our ability to deal with China. It’s too convenient a narrative. He used the image of Sergey Brin standing in front of the tanks in Tiananmen Square. The Chinese internet is very robust and interesting and deserves attention in its own right. Quoting a Chinese scholar, he says that The Great Firewall is being seen as the Iron Curtain 2.0. The US government is sending very clear messages by referring to The Great Firewall as another Iron Curtain.

We have this image of Chinese netizens as either skinny patriotic hackers or cosmopolitan aspiring democrats. He says the reality is often somewhere in between. Chinese users rarely go outside of China for content. They very rarely bump into The Great Firewall, although Twitter, YouTube and other western sites are blocked, which he finds regrettable. What they often bump into is “self-discipline” censorship. Any site whatsoever will receive provisos on content from any number of ministries. They will redact words or ask you to close accounts. If companies don’t comply, they can face penalties all the way up to being shut down.

However, the focus on censorship obscures the development of technology and the internet in China. There are 404m internet users in China, more users than there are people in the US. There are 800m mobile handset subscribers in China. There are companies such as the instant messaging service QQ, which reaches 80% of all Chinese internet users. The number of QQ accounts, because individuals hold multiple accounts, dwarfs the number of internet users in China.

The internet in China can be described more as an entertainment superhighway than an information superhighway. In the last two or three years, internet censorship has become more draconian in China. More sites have been blocked, and the restrictions on domestic sites have become more onerous. At the same time, in recent years, the internet has emerged as a full-fledged public sphere in Chinese life, something that has never existed in China.

There is discussion about issues that are assumed to be off limits, conducted with a great level of creativity. Officials at all levels of government are constantly taking the temperature of online opinion, and you see policy decisions changing in response to it. A picture posted online showed an official wearing a watch and smoking cigarettes “clearly out of his pay grade”. The official was jailed.

A woman was accosted by a couple of men and one was a party official. She stabbed the men and killed them, but there was such an outcry online that she wasn’t prosecuted. We are seeing a real development of a public sphere in China. When we focus solely on censorship, then we miss this phenomenon.

Everyone here wants to advance internet freedom in China, and Kaiser is quick to say that he supports it. But when the US government announced that it was dedicating millions of dollars to support internet censorship circumvention technologies, many people changed their minds about the official party line. Some liberal Chinese users came to accept the view that the internet was being used for imperialism. Planting the American flag on this operation might have backfired.

The development of the Chinese internet will eventually overwhelm the censors. These freedoms should be taken from within. They cannot be granted from without.

He applauds private organisations and companies working to help create that change, but to paraphrase Kaiser, government involvement brings baggage.

Al Jazeera Unplugged: Joi Ito of Creative Commons

Again, this is a live blog. I’ll try to tidy things up later. I’m trying to do as many of these speakers as possible. I might miss a few.

Joi wanted to start by framing the discussion. New media is fundamentally different from old media. Media is about access, and the business model defines the media. Newspapers and satellite TV cost a lot of money. The big difference with new media is that it has significantly lowered the cost to create media and to connect. It’s fundamentally different from the past. To understand how and why it’s different, start with the fact that the architecture of the internet is open.

Before the internet, governments, corporations and experts created specifications. They cost millions of dollars. They were robust, and companies sold products and services to consumers, who paid fees for those services. Telecommunications companies are still big business in the Arab world, even for governments.

On the internet, you have users, venture capitalists, standards organisations and a credo: “rough consensus and running code”. Standards evolve over time. Internet standards are lighter weight than in the past.

The internet “open stack” starts with the internet protocol. Proprietary networking standards gave way to IP, whose standards are shepherded by the IETF. Anyone can participate. It’s a very open system. The World Wide Web is another open standard, shepherded by the W3C, its standards body. It’s an ad hoc organisation without a specific government mandate. Governments are uncomfortable that there is no government involvement in the web standard.

Creative Commons looks at the copyright layer. The copyright system used to make sense.

We are trying to create an open stack for the legal layer.

The other section is open source software, combined with free access to the university network. Google ran a web server, probably Apache, and accessed Stanford’s network. A couple of students built Google thanks to open source software. Open source existed before the internet, but the internet allowed people to connect with each other to build it.

He next highlighted open video. YouTube and most other video sites use Flash, which is proprietary. You can’t participate in internet video without permission. With HTML5, we were working very hard on open video initiatives. Google acquired a company that had a video technology called VP8, which is going to be the core of an open video format called WebM. This is going to be a significant change in the video landscape. (It looks like VP8 is being challenged by MPEG-LA, the licensing body for the MPEG standard.)

Giving things away for free doesn’t seem like a great business model. However, Creative Commons gives users a choice in how they want their work used. He quickly walked through the different types of Creative Commons licence. Free is not just about not making money. Nine Inch Nails released an album called Ghosts and gave the music away for free under an Attribution-NonCommercial-ShareAlike licence. They created their own site instead of selling it through a company. It’s about taking more money from fewer people. They made $6m. (I need to check that figure.) UPDATE: I checked that figure after the talk, and Joi said NIN made $1.6m in a week.

Al Jazeera has released content under Creative Commons. Until last year, it didn’t use the Creative Commons licence; it used a Free Software Foundation licence designed for computer manuals.

Joi said that he’s worried about licence proliferation. He talked about different organisations creating ‘vanity licensing’ schemes. The White House now uses a Creative Commons licence.

Poynter asks: Are journalists giving up on newspapers?

The Poynter Institute in the US hosted an online discussion asking if journalists are giving up on newspapers after high-profile departures, including Jennifer 8. Lee, who accepted a buyout at the New York Times, and Anthony Moor, who left newspapers to become a local editor for Yahoo. Moor told the US newspaper trade magazine Editor & Publisher – which just announced it is ceasing publication after 125 years:

Part of this is recognition that newspapers have limited resources, they are saddled with legitimate legacy businesses that they have to focus on first. I am a digital guy and the digital world is evolving rapidly. I don’t want to have to wait for the traditional news industry to catch up.

This frustration has existed among digital journalists for a while, but many chose to stay with newspapers or sites tied to other legacy media because of resources, industry reputation and better job security. However, with the newspaper industry in turmoil, the benefits of staying are now less obvious.

Jim Brady, who was the executive editor of WashingtonPost.com but is now heading up a local project in Washington DC for Allbritton Communications, said on Twitter:

A few years ago, the risk of leaping from a newspaper to a digital startup was huge. Now, the risk of staying at a newspaper is also huge.

Aside from risk, Jim echoed Moor’s comments in an interview with paidContent:

Being on the digital side is where my heart is. Secondly, I think doing something that was not associated with a legacy product was important.

In speaking with other long-time digital journalists, I hear this comment frequently. Many are yearning to see what is possible in terms of digital journalism without having to think of a legacy product – radio, TV or print. There is also the sense from some digital journalists that when print and digital newsrooms merged that it was the digital journalists and editors who lost out. In a special report on the integration of print and online newsrooms for Editor & Publisher, Joe Strupp writes:

Yet the convergence is happening. And as newsrooms combine online and print operations into single entities, power struggles are brewing among many in charge. More and more as these unifications occur, it’s the online side that’s losing authority.

It’s naive to think that these power struggles won’t happen, but they are a distraction that the industry can ill afford during this recession. In the Editor & Publisher report, Kinsey Wilson, former executive editor of USA Today and editor of its Web site from 2000-2005, said that during the convergence at USA Today and the New York Times:

We both had a period of a year or two when our capacity to innovate on the Web stopped, or was even set back a bit

Successful digital models are emerging. Most are focused and lean, such as paidContent (although it has cut back during the recession, I’d consider its acquisition by The Guardian, my employer, a mark of success) and the expanding US political site Talking Points Memo. There are opportunities in the US for journalists who want to focus on the internet as their platform.

Back to the Poynter discussion, Kelly McBride of Poynter said during the live discussion:

I talk to a lot of journalists around the country. I don’t think they are giving up journalism at all. I do think some of them have been let down by newspapers. But a lot are holding out. They are committed to staying in newspapers as long as they can, because they are doing good work.

It’s well worth reading through the discussion. I am sure that many journalists have some of the same questions.

What was the verdict? Poynter discussion - Are journalists giving up on newspapers?

AP’s Curley v Curley and News Corp’s Rupert v Rupert

The newspaper industry has woken from its slumber and realised that the enemy is not the internet. The enemy is actually you and me: those of us who use the internet. According to Tom Curley, CEO of the Associated Press, “third parties are exploiting AP content without input and permission”, and:

Crowd-sourcing Web services such as Wikipedia, YouTube and Facebook have become preferred customer destinations for breaking news, displacing Web sites of traditional news publishers.

I’m linking to this on one of those third-party sites, Google News, which has a commercial hosting agreement with the AP. Those bloody paying parasites!

Curley was speaking at the World Media Summit in Beijing’s Great Hall of the People. Does Curley know who added those links to Wikipedia, shared those stories on Facebook or uploaded those videos to YouTube? Internet users: you, me and millions of others around the world. For Mr Curley, the internet is a “den of thieves”, says Jeff Jarvis.

Jeff offers his argument against this view of the world. However, I’d like to stage another bit of a debate, one made possible by the virtual time travel of the internet. Let’s get ready to rumble! In this corner, we have the Curley of 2009, who argues:

We content creators must quickly and decisively act to take back control of our content.

With that jab, a slightly younger, slightly more optimistic Curley of 2004 lands a right hook: “The future of news is online, and traditional media outlets must learn to tailor their products for consumers who demand instant, personalized information.” The Curley of 2004 instead sees this future from his own past:

the content comes to you; you don’t have to come to the content so, get ready for everything to be ‘Googled,’ ‘deep-linked’ or ‘Tivo-ized’.

Ouch Tom 2009, that looks like it hurts. Next up in our virtual cage match is a spry 78-year-old, Rupert Murdoch! Let’s start with the Rupert of 2009:

The aggregators and plagiarists will soon have to pay a price for the co-opting of our content. But if we do not take advantage of the current movement toward paid content, it will be the content creators — the people in this hall — who will pay the ultimate price and the content kleptomaniacs who triumph.

Fighting back is the fighting fit Rupert “The Digital Immigrant” Murdoch of 2005:

Scarcely a day goes by without some claim that new technologies are fast writing newsprint’s obituary. Yet, as an industry, many of us have been remarkably, unaccountably complacent. Certainly, I didn’t do as much as I should have after all the excitement of the late 1990’s. I suspect many of you in this room did the same, quietly hoping that this thing called the digital revolution would just limp along.

It’s a shame to see this come to blows. These guys should really talk to each other. With Rupert 2009 on the ropes, Rupert 2005 delivers this shot:

What is happening is, in short, a revolution in the way young people are accessing news. They don’t want to rely on the morning paper for their up-to-date information. They don’t want to rely on a god-like figure from above to tell them what’s important. And to carry the religion analogy a bit further, they certainly don’t want news presented as gospel.

Instead, they want their news on demand, when it works for them.

They want control over their media, instead of being controlled by it.

Ouch. Can’t you guys make up your minds? Has the Great Recession changed consumer internet behaviour and media consumption trends? Or did the industry’s complacency finally catch up with it?

How to spot a web hoax

Every journalist learns (or should learn) how to evaluate sources and, as the web increasingly becomes a source for stories, we need to know whether the things that we stumble across there would make a good source. The internet has been an important part of my job as a journalist almost since I took my first full-time journalism job in 1994. Internet journalists have their own investigative skills, skills that will have to become more widespread as the internet becomes part of every journalist’s job.

I mention this in the wake of a hoax last week by Alex Hilton, aka the British political blogger Recess Monkey. For background, I’ll refer to the Guardian, my day job:

Been following the ding-dong over Tory MP Chris Grayling comparing parts of “Broken Britain” to Baltimore, the crime-ridden city shown in The Wire? What about the riposte from Baltimore mayor Sheila Dixon, who has hit back stating that comparing The Wire to the real Baltimore was “as pointless as boasting that Baltimore has a per capita homicide rate a fraction of that in the popular UK television show Midsomer Murders.” If only.

The British press, and to be fair the hometown Baltimore Sun, reported that the mayor of Baltimore rose to the defence of her fair city. The only problem is that she didn’t. The Guardian was one of the British newspapers that fell for the hoax.

I have to take my hat off to Alex. I’ve never had the pleasure of meeting him, but the hoax was very well executed. Alex created a Twitter account to promote the site. He created a YouTube channel and a video.

To create the site, he simply copied the underlying code from a web page from the official Mayor of Baltimore’s site and changed some of the content. This retained all of the links to the official site and the images were simply pulled from the official server. It’s an immaculate hack. He didn’t have to break into anyone’s server, just copy a web page and add his own content.

It’s a similar trick to phishing scams. A page looks like Amazon.com or your bank, but you’ve actually been sent to some mobsters’ site in Russia or a naked IP address. (I don’t mean to disparage the good people of Russia, but a helluva lot of phishing scams are .ru.) You have to pay attention to the web address to notice that you’ve suddenly been teleported somewhere else on the web.

Alex explains his motivations and the clues that he left to tip anyone off that it was a joke:

So what else did I do to make sure this wasn’t seen as the true views of the Mayor of Baltimore or an attempt to deceive anyone or to smear Chris Grayling? I registered the mayorofbaltimore.org domain in my own name. I squirrelled in the English spelling of “dishonoured” as a clue. I put at the bottom of the page, “Copyright R Monkee Esq” and linked it to my currently decrepit Recess Monkey website. I put the following message on Recess Monkey for anyone who cared to follow the link:

Sorry, RecessMonkey is on holiday in Maryland. Right mouse button click view source (but not on this website) R Monkee Esq.

If you had looked at the source on the site, you would have found this:

OK, so I’m just having a bit of fun at Chris Grayling’s expense. Sitting in the office on a hot August afternoon, I was fantasising that I was Mayor of Baltimore and how annoyed I would be. I hope you very quickly picked up that this was a spoof. Didn’t mean to break any laws or ethical mores – please don’t extradite me if I have unwittingly done so. Hope you appreciate the humour, Alex Hilton, alexhilton@gmail.com – 07985 384 859

Alex expected journalists to spot the humour and the hoax, but it was reported as fact. He rang the Guardian switchboard and was put through to me. He was trying to ratchet down the media furore. The Guardian media desk wrote a brief story about the hoax, and I alerted the news desk so the correction process could begin.

So, what gives a hoax like this away? Here’s your four-point guide to spotting when someone’s pulling your leg.

  1. Pay attention to the URL
    When a colleague sent me the website address, I spotted the hoax immediately. I didn’t even need to see the site to know it was a fake. How? The URL was mayorofbaltimore.org, not .gov or .md.us. In a UK context, that’s similar to a fake Downing Street site with the address www.number10.org.uk instead of www.number10.gov.uk. Org addresses are for non-profits, not for governmental websites.
  2. Who owns the site?
    Finding out who owns a site is easy. Do a WHOIS lookup, and you’ll find out not only who owns the site, but sometimes even their contact details. Most of the time this is corporate information that won’t give you a person to ring, but as Alex says, he registered the site in his own name.
  3. Hover over the links
    Why? If you hover over the links, you would have seen they went to a different address, not mayorofbaltimore.org. It could save you from a phishing scam and it could have prevented journalists from falling for this hoax.
  4. Be wary of a Twitter account with only one update
    Alex created a fake Twitter account, and the only update linked to his fake press release. That’s a big warning sign to me.
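
The hover check in point 3 can even be automated. Here’s a minimal sketch using only Python’s standard library; the function names and the sample HTML are mine for illustration, not taken from the hoax page. A cloned page is a giveaway precisely because nearly all of its links point back at the original site:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def offsite_links(page_html, page_url):
    """Return absolute links that point to a different host than the page itself."""
    host = urlparse(page_url).hostname
    parser = LinkExtractor()
    parser.feed(page_html)
    return [link for link in parser.links
            if urlparse(link).hostname and urlparse(link).hostname != host]

# On a cloned page, the links betray the original source:
html = '<a href="http://www.baltimorecity.gov/contact">Contact</a>'
print(offsite_links(html, "http://mayorofbaltimore.org/"))
# → ['http://www.baltimorecity.gov/contact']
```

A fake press release hosted on mayorofbaltimore.org whose every link resolves to the city government’s real domain is exactly the pattern this sketch flags.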

Alex also hid a huge clue in the source code, including his mobile phone number and his e-mail address. To see the underlying code of a web page in Firefox, go to the View menu and select Page Source. For Internet Explorer, go to Page on the menu bar and select View Source. UPDATE: A commenter on Twitter admitted she probably wouldn’t have thought to look at the source code of the page. I looked at the code because I wanted to see how Alex had cloned the page. When things look fishy, the code can reveal a lot. Alex also made it clear on Recess Monkey that there was an Easter Egg hidden there as well.
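
The WHOIS lookup in point 2 doesn’t need a special tool either. WHOIS (RFC 3912) is just a plain-text query sent over TCP port 43 to the registry’s server. A rough Python sketch follows; the server table is an illustrative, incomplete mapping (real WHOIS tools carry a much larger one), and I’m assuming whois.pir.org is the registry server for .org domains:

```python
import socket

# Registry WHOIS servers for a few common TLDs (illustrative, not exhaustive).
WHOIS_SERVERS = {
    "org": "whois.pir.org",
    "com": "whois.verisign-grs.com",
    "uk": "whois.nominet.uk",
}

def whois_server(domain):
    """Pick a WHOIS server for a domain based on its top-level domain."""
    tld = domain.rsplit(".", 1)[-1].lower()
    return WHOIS_SERVERS.get(tld)

def whois(domain, timeout=10):
    """Send a WHOIS query per RFC 3912: the domain name, CRLF-terminated,
    over TCP port 43; the server replies in plain text and closes."""
    server = whois_server(domain)
    if server is None:
        raise ValueError("no WHOIS server known for %s" % domain)
    with socket.create_connection((server, 43), timeout=timeout) as sock:
        sock.sendall(domain.encode("ascii") + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

# The registrant's name -- Alex's, in this case -- shows up in the reply:
# print(whois("mayorofbaltimore.org"))
```

Running the commented-out query would have surfaced the registrant’s name in seconds, which is the whole point of step 2.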

Much of what I have described were once specialist skills that only web geeks needed to know, but as the web becomes more of every journalist’s job, having these relatively simple skills might be the thing that prevents you from falling for the next hoax. These are not specialist technical skills anymore. They are skills you need to evaluate a source on the web, conduct an investigation and protect your credibility as a journalist.

Raising journalists’ expectations only to crush them

My colleague and compadre Jemima Kiss flagged up a story making the rounds on journalism blogs: the University of Missouri School of Journalism is requiring new students to have an iPhone or an iPod touch.

Like Jemima, I speak to quite a few journalism classes. While everyone assumes that young people almost without exception embrace technology, that couldn’t be further from the truth. As Jemima writes:

Chatting to journalism students is always an eye-opener, because, despite the enthusiasm and the clear commitment to their career, there’s very often a rather romantic view of an industry that doesn’t really exist any more. It’s a world of smokey bars and clattering Fleet Street typewriters battling against a daily deadline, or, very often, a rather glamorous late night gig review by a wannabe music journo.

Sadly, journalism students’ romantic notion of journalism is often 30 years out of step, and they are often even more resistant to new technology and new methods than those working in the industry.

I stopped off at the University of Missouri to visit my friend Clyde Bentley when I was traveling across the US for the elections last year, and it was great to see them thinking not just about the internet but also actively exploring mobile technology. The University of Missouri is a great institution, and it’s good to see the school keeping ahead of the times. But the move to require an iPod touch or an iPhone has not been welcomed by all.

Levi Sumagaysay at the San Jose Mercury News asked if the requirement was a conflict of interest. He questioned “what appears to be the school’s bias or endorsement of the aforementioned Apple products”.

However, I noticed something else in what Levi wrote: the building up of journalism students’ expectations. He writes:

An ironic side note: In most newsrooms I’ve worked, we’ve had to claw our way to “preferred equipment,” and we considered ourselves lucky if, in 1999, our work computers got upgraded to, say, Windows 95. If newspapers survive, future journalists being trained to work on the latest and greatest equipment are in for a huge letdown when they realize that that stuff is largely non-existent in the newsroom — we just write about them.

It’s actually more than moving journalism students from a world of shiny Apple engineering to a world of outdated, coffee-encrusted computers. It’s moving them from a world where they can install and run what they want to a world of locked-down, corporate machines.

I was talking to a friend this week who told me that she had to get a permission slip signed to get a piece of software installed on her work computer and another permission slip signed to actually use the piece of software. You would think she was a seven-year-old going on a field trip to an active volcano. When I was with the BBC, I traveled with two computers: my work computer, which I had to have to access certain work systems, and the computer that I actually got work done on.

I know that there are security issues. I know that IT administrators can tell stories of the senior manager’s kid downloading a virus via some Flash game and taking down the network. But a one-size-fits-all corporate IT policy is not only a soul-destroying experience for a technically proficient journalist, it’s also a productivity killer. There has to be a better way than this. Train staff in the basics of computer security. Allow them to try new things on a virtual machine that can be wiped if it gets infected with a virus. But we can’t expect journalists to explore and learn about digital tools if we lock all the doors ahead of them.
