“Users will scroll” says Nielsen

Jakob Nielsen, once an opponent of scrolling, has now said that users will scroll, but only if there’s something worth scrolling to. This totally fits in the “No shit, Sherlock” category, but I suppose it’s good to have one’s experiences backed up by the evidence.

What’s disappointing about Nielsen’s column is that he doesn’t appear to have taken different types of content and behaviour into account. There’s no sign that he adjusted for the interestingness of the content, its relevance to the test subject, or whether the site already prioritised key information at the top of the page. Nor does he say whether he distinguished between content that provokes seeking behaviour and content that provokes what I shall call here ‘absorbed’ behaviour, e.g. reading an interesting blog post.

All three of Nielsen’s examples are sites where I would expect to see seeking behaviour, i.e. the user glances through the content until they find what they want. If the sites are well designed, then the user should find that information quickly, at the top of the page. It is thus not necessarily surprising that he found that participants spent 80.3% of their time above the fold (the fold being the point on the screen past which you have to scroll to see more) and 19.7% below it, and that people’s attention flicked down the page until it settled on something interesting.

If Nielsen had used websites that provoke absorbed behaviour, such as well-written blogs or news sites, I would have expected to see a more evenly distributed eye-tracking trace. The third example, a FAQ, is starting to move towards that territory, but FAQs aren’t known for being fascinating. If a blog post or news article is interesting, I will read to the bottom without even realising I am scrolling. If it’s dull, on the other hand, I’ll either give up quite quickly or I’ll skip to the end to see if there’s anything juicy down there, i.e. the low quality of the content flips me from absorbed behaviour to seeking behaviour as I look for something more interesting.

Overall, I find this research, as presented in this column, rather lacking. You can’t just separate out user behaviour from content type and quality because the content has a huge impact on the user’s behaviour.

Nevertheless, Nielsen’s recommendations are sensible, even if they are also somewhat obvious:

“The implications are clear: the material that’s the most important for the users’ goals or your business goals should be above the fold. Users do look below the fold, but not nearly as much as they look above the fold.

“People will look very far down a page if (a) the layout encourages scanning, and (b) the initially viewable information makes them believe that it will be worth their time to scroll.

“Finally, while placing the most important stuff on top, don’t forget to put a nice morsel at the very bottom.”

And for those of you who made it this far, here’s your nice morsel (of cute):

[Image: Grabbity and Mewton]

Google Buzz: A user testing fail?

Yesterday I wrote a long piece about Google Buzz over on my own blog, Strange Attractor. The long and the short of it is that Google has released a Twitter clone embedded in Gmail. Various privacy concerns rapidly came to light, which resulted in quite a few concerned blog posts from various quarters.

Once alerted to these concerns, Google immediately addressed some of them, although not everyone is convinced that it has gone far enough to correct the problems. Jessica Dolcourt has written a detailed set of instructions for how to fully disable Buzz for those unhappy with its intrusion into their inbox.

In a way, I’m not at all surprised that Google could mess this up so badly. Whilst a brilliant company from an engineering standpoint, it has a history of not really understanding people particularly well. When it does attempt social applications, it tends to do them clumsily.

Most of the stuff that was (and is) wrong with Google Buzz was obvious right from the get-go. A small user test with a handful of people would have picked it up. I wonder if Google did any user testing at all with Buzz, or whether they did it with people who work at Google and who therefore, dare I say it, probably don’t think the way the rest of us do. Had they reached out to a wider user community, I think they would rapidly have discovered that Buzz made people feel squicky and that privacy was a serious concern.

We do know that Google does testing. They famously “couldn’t decide between two blues, so they’re testing 41 shades between each blue to see which one performs better.” The question is: do they do user testing, and do they do it right? The mistakes made with Google Buzz suggest that good user testing is not applied uniformly across the business.