The Olympic medal for media innovation goes to…


A version of this post first appeared on The Media Briefing, where I write about media developments in North America, especially as they pertain to the search for new media business models.

The Olympics are over, and the medals have all been handed out. But for me, the Games are not just an opportunity to see the best athletes in the world but also to see some of the most cutting-edge digital media innovation. The 2016 Rio Games also showed some of the tectonic shifts in media, with viewership dipping on traditional TV platforms and rising on on-demand and mobile platforms.

These are not simply vanity projects. As we saw recently with Politico’s Apple Wallet-powered EU Tracker project in the lead-up to the Brexit vote, a smart strategy executed well during major events can help you reach new audiences and power your growth to the next level.

Not to mention that, just like gold-medal athletes hoping for lucrative endorsement deals after the Games, media organisations are hoping to cash in, and this Olympics also showed how organisations are seeking new sources of revenue through digital commercial innovation.

New York Times’ The Fine Line

The Olympics are one of those big set-piece events when top news groups, start-ups and the digital platform giants have time to plan and create trailblazing digital media experiences.

Amongst the legacy media groups, the New York Times has once again made as much of a splash with digital media watchers as Michael Phelps and Katie Ledecky have made in the pool.

One of the most talked-about and ground-breaking Olympics features by the Times was a series of visually led features called The Fine Line. In addition to the Fine Line features, the Times also created incredibly simple but effective animations to show how the swimming races played out, for instance how teen phenom Katie Ledecky dominated in the pool.

But that wasn’t all the Times did. Another feature gave a game-like feel to the content with a visual quiz in which the audience was asked to guess which sport an athlete or para-athlete competed in from their body characteristics. Did they have muscular legs or arms? Were they tall, or short and powerful? It was really nicely done, and the Times made a point of saying that the athletes and para-athletes wore as many or as few clothes as they felt comfortable with.

Commercial innovation to drive digital revenue growth

But, as we’ve seen so often in 2016, the best editorial innovation isn’t enough to guarantee a sustainable business. Fortunately, the New York Times also displayed some incredible commercial innovation as well.

In the middle of the Fine Line features is a native advertising feature for Infiniti’s Q60 that seems right at home in the format. In addition to flowing the Infiniti ad into the middle of the stories, it is peppered throughout them, appearing both in the navigation and on the front of every Fine Line segment. The ad even fits thematically with the content: The “Making an Ironman” native advertising video shows a man training for the triathlon world championships with product placement of the Infiniti Q60.

Infiniti’s content also appears in various New York Times social channels, including YouTube and the NYT VR app.

VR, mobile, programmatic and native advertising are all part of the New York Times’ strategy to dramatically increase non-display digital ad revenue, because display advertising has shown lingering softness for many legacy print publishers in the face of the dominance of Google and Facebook.

The New York Times has not been immune: it reported in its most recent quarterly results that digital ad revenue dropped 6.8 percent, which looks bad but not when compared with the 14.1 percent swoon in print ad revenue.

The Infiniti native advertising package across multiple digital channels looks like the kind of bigger deal that New York Times CEO Mark Thompson talked about recently when he predicted dramatic digital ad growth in the third quarter.

Thompson and Chief Revenue Officer Meredith Kopit Levien told Ad Age that these bigger, multifaceted packages were taking longer to close, slowing the pace of ad deals in the short term, but dramatically increasing revenue in the longer term.

Thompson said that these bigger deals were in the “million-plus range”, and they both said that the revenue would start to be reflected in the NYT’s second half results. It gave Thompson the confidence to predict that the NYT would deliver double-digit growth in digital ad revenue in the third quarter.

Power to the platforms

Rio Olympics media innovation

In its recent results, the New York Times pointed out that mobile was powering a lot of its growth, and Thompson said mobile is “growing at rates that even Mr. Zuckerberg’s little firm would recognise”.

Mobile content took centre stage at Rio 2016, and Facebook and other major digital platforms were seen as key to helping Olympic broadcaster NBC make sure that its content reached younger, more mobile audiences.

Before the Games, NBC’s deal with Buzzfeed and mobile messaging darling Snapchat grabbed a lot of coverage. Buzzfeed is curating content from Snapchat, and Snaps from Rio appear prominently in its Discover section. Buzzfeed’s involvement makes sense in light of NBCUniversal’s $200m investment in the company.

This kind of distribution is officially a very big deal, as it was the first time that Olympics content would appear on a non-NBC platform, according to Gerry Smith of Bloomberg News. More than that, NBC isn’t requiring Snapchat to pay anything for the privilege; instead, the broadcaster, which paid $1.23bn for the broadcast rights, negotiated an ad revenue share with the mobile messaging and content platform.

Facebook’s ambitions in Rio were much more global, and it struck a deal with the IOC and 20 official Olympics broadcasters to offer content on Facebook Live and recap content on both Facebook and Instagram, according to L&F Capital Management on the investment blog Seeking Alpha. Facebook also reportedly paid some athletes, including Michael Phelps, to provide exclusive live interviews.

Looking to make live events and sports a bigger part of its offering, Twitter announced content across Moments, Vine and Periscope ahead of the Games. Twitter also announced a pivot in the Moments product, saying that Olympic Moments would stick around in users’ timelines for weeks rather than days.

When I wrote the piece for The Media Briefing, we didn’t yet have a full picture of viewership on traditional linear TV or of how audiences were turning to video on mobile platforms. But we quickly got a sense, and for NBC, it wasn’t entirely good.

Bloomberg noted that ratings were down 17 percent overall in primetime and down 25 percent in the 18-49 demographic. Gerry Smith of Bloomberg questioned whether NBCUniversal had got its money’s worth from its $12bn investment in the Olympics. Smith went on to say:

The Summer Olympics ratings slip, the first since 2000, raises fresh doubts about what used to be a sure thing: live sports would be a huge and growing draw no matter what.

But while traditional TV viewership was down, online viewership was up by 25 percent. Despite the obvious switch from linear TV to on-demand formats, NBC still ended up having to give away some air time to advertisers to make up for the viewership shortfall on traditional TV.

Of course, if you want a stinging rebuttal of Bloomberg’s thesis, read this Medium post by Brenton Henry on how terrible the NBC streaming experience was. The real issue for Henry seemed to be that the streaming options were really only available to cable subscribers.

I was tempted to shorten this article, but then the lengths of measure I had to take to view something that is available for free over the airwaves show there is clearly a problem. I’m sure NBC were patting themselves on the backs for how easy it would be to watch online this year, but that’s only true for cable subscribers, a slowly shrinking percentage of the US population, especially for Millennials.

As we’ve seen with ESPN’s woes, pay-TV use is starting to decline as more people rebel against the ever-rising cost of a bundle of channels and services they simply don’t want. The business model for pay TV is going to come under increasing pressure. The Olympics and NBC’s model only highlight that.

Building News Apps on a Shoestring – Learning – Source: An OpenNews project

See on Scoop.it – Interactive journalism design

Source: Journalism Code, Context & Community (RT @GerardBest: How to build data-driven web apps with a shoestring budget http://t.co/q7I5tXVjl4 #dataviz #journalism @bulletpoints_)

Kevin Anderson‘s insight:

Practical advice on how to build news apps without a big budget and with limited staff. Any small newsroom that wants to create apps should keep this great resource handy.

See on source.opennews.org

Listen to the dawn of data journalism: Univac, the first Nate Silver

In one of my data journalism presentations I look back at the history of data journalism, and one of the key dates is 4 November 1952: the first US election when a TV network, CBS, used a computer to analyse and predict the election results.

CBS used the room-sized, vacuum-tube-powered Univac. The idea came from Remington Rand, the makers of the Univac, because sales weren’t as strong as they wanted. In a creative bit of marketing, they approached CBS to use the computer to help analyse the election results. Of course, CBS also saw a marketing opportunity and mentioned Univac in their election coverage ads. Last night, in a lovely bit of luck, I heard one of those ads.

I often listen to old radio dramas while I’m making dinner, and last night, much to my surprise, while listening to the 13 October 1952 episode of the western radio drama Gunsmoke on Old Valve Radio, I heard CBS run an advert touting how “Univac, the electric brain” would be assisting Edward Murrow and his team on election night the following week. My jaw just dropped to hear this message announcing the dawn of computer-assisted reporting, as it was called in the US, decades before the term data journalism came into vogue. You can hear the ad yourself on Podbay.fm, and if old western radio drama isn’t your cup of tea, fast forward to 13:43 to hear how the CBS election team would be backed up by “Univac, the electric brain”.

For a lot of data journalists, the CBS-Univac partnership is a well-known bit of history, but if you aren’t familiar with the rest of the story, it is fascinating. Both Remington Rand, makers of the Univac, and CBS saw value in the arrangement, as Wired explained when it looked back at the day in 2010.

The eight-ton, walk-in computer was the size of a one-car garage and accessed by hinged metal doors. Univacs cost about $1 million apiece, the equivalent of more than $8 million in today’s money. The computer had thousands of vacuum tubes, which processed a then-astounding 10,000 operations per second. (Today’s top supercomputers are measured in petaflops, or quadrillions of operations per second.)

Remington Rand (now Unisys) approached CBS News in the summer of 1952 with the idea of using Univac to project the election returns. News chief Sig Mickelson and anchor Walter Cronkite were skeptical, but thought it might speed up the analysis somewhat and at least be entertaining to use an “electronic brain.”

They had no idea how quickly it would speed up analysis. Early in the evening, with only about 5 percent (3.3 million) of the total 61 million votes counted, Univac had a prediction. The Computer History Museum has the printout of the prediction the Univac team sent to CBS via teletype at 8.30pm: “It’s awfully early, but I’ll go out on a limb.”

However, just as traditional political pundits heaped scorn on stats wizard Nate Silver in 2012, CBS’s editors were not willing to join the Univac team out on that limb, and on air CBS hedged.

In another story about the election, by Ars Technica, we learn that the Univac team figured out that they had entered the New York results incorrectly, overstating Stevenson’s votes by a factor of 10. They ran the numbers again, but they got the same result. Univac predicted that Republican Dwight Eisenhower would win the Electoral College with 438 votes to Stevenson’s 93, and the computer set the odds of an Eisenhower win at 100-1.

As the night went on, Eisenhower gained momentum. The final vote was 442 to 89 in Eisenhower’s favour, and Univac’s early prediction was off by only one percent.
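That “off by only one percent” figure is easy to check: Univac called 438 electoral votes for Eisenhower against a final tally of 442. A quick back-of-the-envelope calculation:

```python
# Univac's election-night call vs the final 1952 Electoral College result.
predicted_ike = 438   # Univac's prediction for Eisenhower
actual_ike = 442      # the final tally

error = abs(predicted_ike - actual_ike) / actual_ike
print(f"Prediction error: {error:.1%}")  # 0.9%, i.e. about one percent
```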

While I’ve known about this story for a while, it was great to hear the advert of CBS promoting how it was going to use an “electric brain”. I also learned something new about Univac: it was programmed by computer pioneer Grace Hopper. Her team fed the computer statistics from previous elections, and she wrote the code that allowed Univac to make its prediction. Sadly, her contribution was not mentioned in reports at the time.

Visualisations aren’t the only end result of data journalism

My friend and former colleague Simon Rogers, editor of the Guardian’s Data blog, has posted a defence of the increasing use of data visualisation. I agree to a point, but I also think it’s really important to remember that visualisations are not the only product of data analysis. They can help readers see patterns in complex sets of data, but I also think that sometimes we’re missing other opportunities with data analysis by focusing on data visualisation. Sometimes, the result isn’t a visualisation but a key insight that underpins a story. I often worry about the problem of seeing a world as full of nails when you think all you have is a hammer. Sometimes, visualisations are just not the right end product of data journalism.

I’ve heard statisticians grumble about information being seen as simply beautiful instead of being, well, informational. Good data visualisations strike a middle way between being beautiful and simplifying complex concepts. I’ve heard designers grumble about data visualisations that aren’t beautiful, and they rail against the lack of aesthetics in some of the publicly available tools. Sometimes people use the wrong chart or visualisation for their data. When it comes to charts, I often show a simple chart during training that breaks down which types of charts or visualisations are appropriate for the kind of data you’re working with.
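The logic of such a chart guide can be sketched in a few lines. The four question types below follow the familiar chart-selection diagrams; the exact mapping is my own illustrative simplification, not the specific chart used in training:

```python
# A toy chart-chooser: map the question you are asking of the data
# to a sensible default visualisation. Illustrative only.
CHART_FOR = {
    "comparison": "bar chart",        # comparing values across categories
    "distribution": "histogram",      # how a single variable is spread
    "composition": "stacked bar",     # parts of a whole across categories
    "relationship": "scatter plot",   # how two variables move together
}

def suggest_chart(question_type: str) -> str:
    """Return a sensible default chart for a given analytic question."""
    # When no chart clearly fits, a plain table is often the honest choice.
    return CHART_FOR.get(question_type, "table")

print(suggest_chart("relationship"))  # scatter plot
```

The fallback is deliberate: when none of the standard forms fits, a table is usually better than forcing the data into the wrong chart.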

I am always in favour of the democratisation of tools, but when it comes to digital story-telling, editors need to remember all of the techniques available and have a clear way of deciding which technique is appropriate.

Digital Directions 11: Josh Hatch of Sunlight Foundation

Josh Hatch, until recently an interactive director at USAToday.com and now with the Sunlight Foundation, talked about how the organisation loves data. The transparency organisation uses data to show context and relationships. He highlighted how much money Google gave to candidates: Sunlight’s Influence Explorer showed that contributions from Google’s employees, their family members and its political action committee went overwhelmingly to Barack Obama.

Sunlight Foundation Influence Explorer Google

The Influence Explorer is also part of another Sunlight tool, Poligraft. It is an astoundingly interesting tool for surfacing information about political contributions in the US. You can enter the URL of any political story, or just its text, and Poligraft will analyse the story and show you the donors for every member of Congress mentioned in it, highlighting details about the donors and donations from organisations and US government agencies. It’s an amazingly powerful application, and I think it points the way to easily adding context to stories. It does rely on the gigabytes of data that the US government publishes, but it’s a great argument for government data publishing and a demonstration of how to use that data. Poligraft is powerful, and it scales well.

Josh showed a huge range of data visualisations, and he’ll post the presentation online. I’ll link to it once he’s done.

#ONA10: Real-time, mobile coverage

My road trip kit

Tomorrow I fly to Washington ahead of the Online News Association conference. Next Thursday I’ll be doing a pre-conference session on real-time coverage with Kathryn Corrick, digital media consultant and ONA UK chair, and Gary Symons of VeriCorder Technology. Kathryn is going to focus on desktop-based real-time coverage. There is a lot that is possible from the newsroom, and often, when you’ve got a lot of journalists in the field, you need someone back at base to help collate and curate all the content. Gary is going to focus on multimedia, especially some of the tools that VeriCorder offers. I’m going to focus on a wide range of mobile tools and techniques, highlighting examples of what news organisations and innovative journalists are doing.

Two years ago, I was traveling across the US on my way to Washington covering the 2008 elections. It was my third presidential election. I covered the 2000 and 2004 elections for the BBC. Every election, the mobile technology got a little more sophisticated and a lot more portable.

In the 2000 election, Tom Carver and I traveled across the US in six days answering questions from the BBC’s international audience. We used portable satellite technology, a mini-DV camera and webcasting kit to do live and as-live webcasts. The satellite gear was similar to what would become standard for live video feeds from Afghanistan. We used it in much less threatening locales such as a bar in Miami to talk to college students about apathy amongst youth. The gear weighed about 70 pounds, and it was a bit temperamental. I had to buy a toolkit in Texas and perform emergency surgery in a Home Depot parking lot. That definitely wasn’t in the job description when I was hired, but we got the job done.

In 2004, everything had changed. I used an early data modem to file from the field. The BBC content management system didn’t quite work in the field, but we could at least send text and images. Richard Greene and I worked to engage our audiences, again fielding their questions and bringing them along on our journey. I blogged through election day, and that blogging experiment would send my career in a radically new direction.

It would be 2008 when I finally realised my dream of being able to work almost constantly on the move, publishing via Twitter, Flickr, Facebook and the Guardian blogs with a laptop, a mobile modem and a state-of-the-art multimedia mobile phone, the Nokia N82. The picture above shows my road trip kit. It did so much more, with so much less weight, than the gear I lugged around in 2000. I could fit it all easily in a backpack: my laptop, a data modem, a power inverter, a Nikon D70, a geo-tagger and my Nokia. I geo-tagged all of my pictures and posts and most of my tweets. Before anyone knew what Foursquare or location-based networks were, I saw an opportunity to geo-tag content to map it and eventually deliver relevant content to where people are. I have a detailed explanation of how I did it.
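The fiddly part of that geo-tagging workflow is converting the coordinates cameras and geo-taggers record in EXIF style (degrees, minutes, seconds plus a hemisphere reference) into the decimal degrees that mapping services expect. A minimal sketch, with illustrative values roughly matching central Washington DC rather than data from a real photo:

```python
def dms_to_decimal(degrees: float, minutes: float, seconds: float, ref: str) -> float:
    """Convert EXIF-style degrees/minutes/seconds to decimal degrees.

    ref is the hemisphere reference ('N', 'S', 'E' or 'W'); southern and
    western coordinates become negative, as mapping APIs expect.
    """
    decimal = degrees + minutes / 60 + seconds / 3600
    return -decimal if ref in ("S", "W") else decimal

# Illustrative values, roughly central Washington DC
lat = dms_to_decimal(38, 53, 22, "N")
lon = dms_to_decimal(77, 2, 7, "W")
print(f"{lat:.5f}, {lon:.5f}")  # 38.88944, -77.03528
```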

The trip was the realisation of a journalistic dream; I could report live while staying in the middle of the story. I could use my phone to tweet and upload pictures from the celebrations on the streets of Washington. This was two years ago. The technology has moved on, and now it’s easier, and the video, images and audio are better. It’s now easy to broadcast live video with nothing more than a mobile phone.

We’ll cover the latest developments and then go out on the streets of Washington just days before Americans go back to the polls in this critical midterm election. There are still a few slots left, so if you’re coming, join us from 2-5pm on Thursday 28 October.

Obama celebrations Washington DC

Social Media Forum: My thoughts on the future of context

Next week, I’ll be giving the keynote at the Social Media FORUM in Hamburg, and I’ve been asked to speak about the future of context. Bjoern Negelmann asked me a few questions via email about the subject, and he’s kindly allowed me to cross-post the interview for the Social Web World blog.

1) Kevin, as an expert in new digital media strategies, you will be giving a talk on the “future of context” at the upcoming Social Media FORUM on Sept 28. Can you give three keywords that describe what we can expect from your talk?

Relevance, insight, value

2) Is “context” the key to turning around media companies’ misguided internet strategies? And if so, what is the explanation?

First I should say, as much as everyone in the industry wishes it, there are no silver bullets, no single solution that will solve the problems that media companies are facing. The iPad won’t save us. Paywalls won’t save us, and simply finding ways to increase context won’t on its own save us.

That being said, most current digital media strategies are fundamentally flawed. They are mostly based on the premise that the internet is just another distribution medium like radio, television and print. They rely on a media landscape of scarcity instead of abundance. These outdated assumptions are rooted in the era of mass media. In 20th-century mass media models, which relied on just a few sources of information and entertainment, success meant building the biggest audience possible and using paid content and advertising to make loads of money.

As Edward Roussel of the Telegraph said, the link between rising audience and higher returns held true until the spring of 2008. Then something happened. Yes, it was partly due to the recession, but it was also due to an oversupply of online advertising space. As Paid Content says, premium and mid-tier publishers are creating too much content, creating a surplus of inventory to run ads against. As in any market, if supply outstrips demand, you get downward price pressure.

There are exceptions. With the online advertising recovery, The Daily Mail in the UK has been able to outgrow the competition and translate that into commercial success. Big still sometimes wins. There are still lucrative verticals such as business in which returns have stood up or actually grown during the recession. The Wall Street Journal, The Economist and The Financial Times are all enjoying success, partly due to increasing interest in business and finance due to the recession. However, most other publishers find themselves under severe pressure.

To change our fortunes, we first need to question the assumptions underlying 20th Century media business models. Until the 1980s, both audiences and advertisers had fewer choices and media owners could charge monopoly rents for advertising. But when the multi-channel world, whether broadcast or online, arrived, the media’s first reaction was to create more channels and content to try to take advantage of increased distribution opportunities. We’re now seeing the limits of such an approach as the law of diminishing returns takes hold.

Context is about adding value to content in ways that benefit audiences and advertisers. It makes it easier for audiences to find and make sense of relevant content. Adding context, rather than simply creating more content, is about realising that content is no longer scarce, but audiences’ time and attention are. It helps advertisers by providing opportunities for more highly targeted advertising.

3) But this strategy means allocating resources to producing context. Isn’t this at odds with the recent strategies of media companies, which are cutting costs because of the “lousy pennies” of online advertising?

While media companies, especially newspapers, have been cutting staff to cut costs, they have also been creating more content. Digital production techniques make this possible but, again, we’re starting to reach the limits of that strategy. Basically, we have an oversupply of content driving an oversupply of digital advertising space, and traditional markets have one way of valuing a surplus: returns plummet.

The market is already flooded, and the last thing we need is more content. A study commissioned by the Associated Press (PDF) found that young audiences were shutting off because they were lost in a deluge of episodic updates. The key conclusion was: “The subjects were overloaded with facts and updates and were having trouble moving more deeply into the background and resolution of news stories.” In essence, the news industry is acting against its own economic interest by producing more content and exacerbating the problem of information overload. It’s like trying to save a drowning man by giving him a glass of water.

We need a much more focused approach. Allocating resources to producing context around existing content while making strategic choices about what not to produce will create opportunities by adding value and creating differentiated products. Yes, we live in a world of flow, with constant streaming updates, but mining that flow for context and value-added information will be where sustainable business models are.

4) So putting the weight on the “context” – what are the formats and examples of this strategy?

Thomson-Reuters has a service called Calais. It analyses thousands of mainstream media and non-traditional sources of information every day. It powers services such as Zemanta, which allows bloggers and traditional journalists to easily add images and links (which add context) to articles. As a platform, Thomson-Reuters can sell Calais to enterprises to make sense of the data and information they create, but it’s also a tool the company itself uses to algorithmically find meaning in the flow of information from traditional and non-traditional news organisations, e.g. finding new companies to watch before they show up on the traditional news radar.
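At its core, what a service like Calais offers is entity extraction: finding the companies, people and places in free text so that context (links, images, related data) can be attached to them. A toy dictionary-based sketch of the idea follows; Calais’s actual algorithms are statistical NLP, nothing like this lookup, and the entity list here is invented for illustration:

```python
# Toy entity extraction: match known entity names in free text.
# Real services use statistical NLP, not a lookup table; this only
# illustrates the kind of output such a service returns.
KNOWN_ENTITIES = {
    "Thomson-Reuters": "company",
    "Zemanta": "company",
    "London": "place",
}

def extract_entities(text: str) -> list[tuple[str, str]]:
    """Return (name, type) pairs for known entities found in the text."""
    return [(name, kind) for name, kind in KNOWN_ENTITIES.items() if name in text]

story = "Zemanta, which builds on Thomson-Reuters technology, is used by bloggers."
print(extract_entities(story))
```

Once a story is annotated this way, each recognised entity becomes a hook for links, related coverage or donor data, which is exactly the kind of layering tools like Poligraft do.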

One of my favourite examples right now is Sunlight Foundation’s Poligraft. Using public information about political contributions and a service like Calais, they reveal details about donors and major campaign contributions to members of Congress. It quickly adds a layer of context in any story involving political leaders.

The Guardian is achieving some great things with its Datablog and Datastore. Data is a key part of many stories that journalists write every day, but in the past, the only thing we did with those numbers was highlight a few. Now, the Datablog not only allows everyone to see the full set of numbers, but by hosting them on Google Docs for others to download, people with skills in data visualisation are able to present those numbers in new and creative ways. The Guardian has a group on Flickr to highlight their work.
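The workflow the Datablog pioneered, publishing the full dataset so others can compute on it, is easy to join from the reader’s side. A minimal sketch with an inline stand-in for a downloaded CSV; the regions and figures are invented purely for illustration:

```python
import csv
import io

# Stand-in for a CSV downloaded from a published dataset;
# the figures below are invented purely for illustration.
raw = """region,spending_gbp_m
North,120
South,95
East,80
West,105
"""

# Parse the rows and compute a simple aggregate, the first step of
# almost any reader-side analysis of a published dataset.
rows = list(csv.DictReader(io.StringIO(raw)))
total = sum(float(r["spending_gbp_m"]) for r in rows)
print(f"Total: £{total:.0f}m across {len(rows)} regions")
```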

The BBC also had another great example during the World Cup this year. They called it dynamic semantic publishing, and it took the official FIFA statistics to dynamically create a rich store of information about players, teams and groups. Not only was it a rich presentation of the facts around the World Cup, but it also helped their audience discover BBC coverage of their favourite teams and players.

5) If you take a look ahead in the future – what kind of media companies are able to adapt to that strategy?

The kind of companies that have been able to adapt to this strategy are ones that see beyond traditional containers of content. For news, they realise that the written story is no longer the atomic unit, the indivisible unit, of journalism. There is data and context within the story, context that can be linked and used to draw connections between seemingly unrelated events in our increasingly complex world. Context is not just about adding value to pieces of content; it also makes it easier to organise content and adds new ways for audiences to find and discover what is relevant and interesting to them.

The value of data for readers and the newsroom

When I was at the BBC, a very smart producer, Gill Parker, approached me about pulling together a massive amount of data and information she was collecting with Frank Gardner in trying to unravel the events that led to the 11 September 2001 attacks in the US. Not only had Gill worked on the BBC’s flagship current affairs programme Newsnight and on ABC’s Nightline in the US, she had also worked in the technology industry. They were interviewing law enforcement and security sources all around the world and collecting masses of information, all of it in Microsoft Word files. She knew that they needed something else to help them connect the dots, and speaking with me in Washington, where I was working as BBCNews.com’s Washington correspondent at the time, she asked if I could help her get some database support.

I thought it was a great idea. My view was that by helping her organise all of the information that they were collecting, the News website could use the resulting database to develop info-graphics and other interactives that would help our audience better understand the complex story. We could help show relationships between all of the main actors in al Qaeda as well as walk people through an interactive timeline of events. I had a vision of displaying the information on a globe. People could move through time and see various events with key actors in the story. This was a bit beyond the technology of the time. Google Earth was still a few years away, and it would have required significant development for some of the visualisations. However, on a story like this, I thought we could justify the effort, and frankly, we didn’t need to go that far. Bottom line: Organising the data would have huge benefits for BBC journalists and also for our audiences.

Unfortunately, it was the beginning of several years of cuts at the BBC, and the News website was coming under pressure. It was beyond the scope of what I had time to do or could do in my position, and we didn’t have database developers at the website who could be spared, I was told.

A few years later, as Google Earth developed, Declan Butler at Nature used data on the global spread of the H5N1 virus to achieve something like the vision I had of showing events over time and distance.

It is great to see my friend and former Guardian colleague Simon Rogers move forward with this thinking of data as a resource both internally to help journalists and also externally to help explain a complex story in his work on the Wikileaks War Logs story. Simon wrote about it on the Guardian Datablog:

we needed to make the data easier to use for our team of investigative reporters: David Leigh, Nick Davies, Declan Walsh, Simon Tisdall, Richard Norton-Taylor. We also wanted to make it simpler to access key information for you, out there in the real world – as clear and open as we could make it.

As the digital research editor at The Guardian (before I left this March to pursue my own projects), data was key to many of my ideas. I even thought that data could become a source of revenue for The Guardian. Data and analysis are something people are willing to pay for. Ben Ayers, the head of social media and community at ITV.com, said to me on Twitter (speaking for himself, not ITV):

Brilliant. I’d pay for that stuff. Surely the kind of value that could be, er, charged for. Just sayin’ … just an example of where, if people expect great interpretation of data as part of the package, the Guardian could charge subs

As I replied to Ben, I wouldn’t advocate charging for data for the War Logs, but I would suggest charging for data about media, business and sport. That could become an important source of income to help subsidise the cost of investigations like the War Logs. Data wrangling can be time-intensive; I know that from my experience developing the media job cuts series I wrote for The Guardian at the end of 2009. However, the data can be a great resource for journalists writing stories as well as for developing interactive graphics like the media job cuts map or the IED attack map for the War Logs story. Data drives traffic, as the Texas Tribune in the US has found, and I believe that certain datasets could be developed into new commercial products for news organisations.

Infoporn Friday

First up, David McCandless’ Billion Pound-O-Gram which very neatly allows us to compare how big various large sums of money are in relationship to each other:

billion pound-o-gram

Then there’s this Periodic Table of Visualization Methods. Mouseover each ‘element’ for an illustration of the method.

And finally, Pedro Cruz’s visualisation of the decline of the world’s four major maritime empires, which is just glorious. (Full-size version on Vimeo.)

Lovely, eh!