Annenberg-Oxford Summer Institute: Continuing the Conversation

A couple of years ago, I spoke at the Oxford Internet Institute, and after my talk, the conversation carried on via Strange Attractor and the blogs written by some of the students there. I went back to Oxford today to talk about social media, journalism and broader media trends with the very international group of “scholars and regulators” at the Annenberg-Oxford Summer Institute.

As I did after my talk at the OII a few years ago, I’ll follow up on some questions that came up after my talk and some that came in via Twitter.

Does participatory media make public service media obsolete?

I met Shawn Powers at the Al Jazeera Forum in Doha in May, and he invited me to give a talk at the institute. After my talk, he highlighted what he thought was a contradiction in my presentation, which he thought could be interpreted as supporting James Murdoch’s attack on the BBC. Not to over-simplify his point, but with all of the examples I gave of people creating their own media, Shawn wondered if I was making the point that British society no longer needed a public broadcaster like the BBC.

It never really occurred to me that my presentation could be interpreted like this because four years after I left the BBC, I value public service media even more than when I was working there. Most of the examples I talk about in my presentation (a version is here on SlideShare) are collaborations between professional journalists and members of the public, not examples of the public supplanting or replacing journalists.

When I came to London in 2005 to research how BBC News could use blogging, I actually saw the possibility of a public service broadcaster like the BBC deepening its public role by developing stronger relationships with people formerly known as the audience.

James Murdoch’s argument, delivered in Edinburgh last year:

We seem to have decided to let independence and plurality wither. To let the BBC throttle the news market, and get bigger to compensate

I see commercial media and public service media combined with emerging participatory media as creating greater plurality, not throttling it. Murdoch’s argument is a rather unsophisticated and transparent attack on the BBC because he knows that most surveys show that when consumers are asked to pay for news online, most of them (74%) would switch to free options, such as the BBC. Only about 5% in the paidcontent.co.uk and Harris survey would pay to continue to use the service. (For a good critique of the Murdochs’ hard paywall that they just erected around The Times and The Sunday Times, see Steve Outing’s look at different commercial strategies.)

Returning to the strategic white paper I wrote for the BBC, I also thought that encouraging media creation by a wider part of the population would expand civic participation in new ways and possibly reverse the decline in traditional forms of democratic participation such as voting. (Andy Carvin at NPR is demonstrating how social media and public service media can be a powerful combination.)

Maybe in the future, I should start with a statement of principles or values. I assume that my career choices say a lot about my journalistic values. I have worked for two unique journalism organisations, the publicly-funded BBC and the trust-supported Guardian. It was an honour to work at two places that value journalism as much as the BBC and The Guardian. I don’t see social media as an argument for ending subsidies to public media in favour of a “pure” market-based media eco-system. Rather, I see my interest in social media as a perfectly logical extension of my passion for the social mission of journalism, a mission to inform and engage people and to empower them as citizens in democratic societies.

Choosing the right tool for the job

Another person at the institute raised the issue of whether I was focusing on the tools rather than the editorial goals. Was I seeing social media as the hammer and every story as a nail?

In reality, I’ve long argued against using a tool for the sake of using a tool. In my original presentation at the BBC, one of my slides was a herd of cattle with a little Photoshopped brand on one of the bulls labelled MSM (mainstream media), complete with the song Rawhide playing in the background. I said that the media was engaging in a lot of herd-like behaviour, rushing off to blog without any clear reason as to why. I used to play a clip of Jon Stewart of the Daily Show sarcastically congratulating MSNBC and their blogging efforts as “giving a voice to the already voiced”. I questioned why the media needed blogs when we already had publishing platforms.

To justify blogging, we had to have clear editorial goals and not just blog because it was the new media flavour of the month. I did see benefits in blogging and using social media. We could engage our audiences directly and take our journalism to where they were instead of relying on them to come to our site. We could enhance our journalism by expanding our sources, adding new voices and highlighting expertise in our audience.

Often people saw blogging not as a conversational, engagement-focused medium but as a means to secure their own column. They didn’t want to write more than once a week. They had no interest in actually responding to comments. Although I didn’t see this as an appropriate use of blogging, they usually got a blog anyway because I wasn’t in a position to deny them one.

It’s important to understand that social media is only one tool in a journalist’s toolkit. It is powerful, but it is very important to understand when it is appropriate to use social media and when it isn’t.

As someone at Oxford also pointed out, as journalists we need to make sure that we don’t over-interpret opinion on Twitter, Facebook and other social networks as truly representative. I often use social networks and blogging to find expertise and first-person experience of an event, not necessarily to canvass for opinion. The same student was also concerned that journalists would rely solely on online social networks to source stories or generate story ideas. That’s the mark of either a lazy journalist or one who is so overburdened with work due to staffing cuts that social media becomes an all too easy shortcut. (I understand only too well the time pressures that journalists are under due to the hollowing out of newsrooms.)

Do location-based networks have staying power?

One of the students told me that she had asked a few questions via Twitter while I was talking, and here is one of her questions:

#AnOx10 Kevin Anderson @kevglobal– Social Media for Social Change: great talk today but do u really think Loc-base has staying power?

I’ve been working with location for a couple of years now, starting with my coverage of the US elections in 2008. I’ve been testing location-based networks like BrightKite and the location features in Twitter since 2008, and I’ve been trying the newer networks such as FourSquare in preparation for a keynote that I’m giving at the SpotOn conference in Helsinki in September.

As I started saying in 2005, in this age of information-overload, two things are key to success: Relationship and relevance. Social media allows news organisations to much more directly build and maintain their relationships with both members of the public who simply want to consume their content and also with people who want to collaborate or contribute to news coverage. In a world with so many information choices, relevance is extremely valuable. This weekend, I spoke to the Gates Scholars at Cambridge, and many of the questions to the panel that I was on were about finding and filtering the vast ocean of information available. To me location is one filter for relevance.

There are two ways to interpret this question: Will the current generation of location-based networks have staying power? Will location itself have staying power?

In using FourSquare, I actually find the game element rather simplistic. Without a native app on my Nokia N82 (I am considering buying Gravity, but its £8 price is higher than my impulse threshold for a mobile app), the friction is too high for me. I am too aware that FourSquare is trying to trick me into surfacing my location. With Google’s Latitude, I set it and forget it, and I see my friends on my Google Map, but that service hasn’t hit a critical mass of users among my offline social networks to be all that useful.

However, in convincing people to reveal their location, FourSquare is already beginning to partner with media and other companies to sell other location-based services. Frankly, I don’t need the psychological trickery of points and mayorships to get me to check in if I get a useful service from revealing my location.

That’s where I see location being interesting, not as an element of games like Gowalla or FourSquare, but as a fundamental enabling technology like RSS. Very few people use RSS directly in standalone readers as I do, but many more people use RSS without even knowing it. Location will be one of those underlying, enabling technologies.

The big difference between RSS and location is the issue of privacy and security connected to revealing one’s location. Lots of people follow me on Twitter who I don’t know. I have a category of contacts on Facebook “People I don’t know”. I am not going to let people I don’t know in the real world know where I am in the real world. I’m working through whether I want to be selective in my contacts on FourSquare or selective in checking in.

Location is going to be a powerful feature in new services. That has staying power. Part of me thinks that services like Gowalla and FourSquare are very first generation at this point. They have a certain Friendster feel about them. However, FourSquare is evolving very quickly, and its very clear business model means that it will have the space to experiment.

Those are the questions that I can think of off the top of my head. If people have more, leave a comment. I’ll try to answer them before Suw and I start our summer break on Thursday.

Enterprise RSS must not die

Marshall Kirkpatrick over at ReadWriteWeb has said that enterprise RSS is dead. Brad Feld, an investor in Newsgator, disagrees and thinks that RSS is alive and well. There’s a spirited discussion in both posts’ comments that’s worth a scan.

I was talking about enterprise RSS only yesterday, and my experience with it has been that it’s nigh on impossible to get RSS readers rolled out in my clients’ companies (except the really small ones, and they tend to go for Google Reader or something else that’s free, not enterprise). Only two clients over the last four years have actually piloted an RSS reader internally.

One client tried Newsgator, but didn’t like it. I wasn’t privy to that conversation, so all I know is that the feature set wasn’t adequate for the money. That was a couple of years ago, so that doesn’t tell us much about the situation now. The other client also tried Newsgator and the jury was still out at the time my engagement finished, but given that their budget was subsequently slashed to £0, I’m guessing that they too didn’t end up with an enterprise-wide installation.

Of the others, we often didn’t get as far as discussions about cost or features, because the response from IT was a flat “No”. There just was no political will within the company to even investigate the possibility, let alone start assessing possible tools. I’ve also had reports of companies saying “Yes, we’ll think about it, but a code review might take upwards of a year”, which is so close to “No” that you couldn’t get a piece of paper between them.

So what’s going on? Certainly it’s not that RSS is a difficult concept to explain. I explain it all the time, and whilst it helps to be able to draw diagrams for people, when you say “Instead of you going round to all those websites you check on a daily basis, the content just comes to you” most people understand. And I don’t believe the people who complain about RSS being a three letter acronym either – I just don’t think people are that stupid.
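If it helps to make that concrete for a sceptical IT department, the consumption side of RSS really is only a few lines of script. Here is a minimal sketch in Python, assuming the third-party feedparser library and using placeholder feed URLs rather than any particular site’s:

    # Poll a handful of feeds and print what's new: "the content comes to you".
    # Assumes the feedparser library; the URLs are placeholders, not real feeds.
    import feedparser

    FEEDS = [
        "http://example.com/news/rss.xml",
        "http://example.org/blog/feed",
    ]

    for url in FEEDS:
        feed = feedparser.parse(url)
        print(feed.feed.get("title", url))
        for entry in feed.entries[:5]:
            # Each entry carries its own headline and link, so there is no
            # need to visit the site just to see whether anything is new.
            print("  -", entry.get("title", "(no title)"), entry.get("link", ""))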

The Web 2.0 evangelists I’ve known within enterprises have all been really smart people who totally understand the usefulness of RSS, but often they don’t have the political capital to get things done properly. Often they are working with no budget, and have a hard enough time protecting basic tools such as blogs and wikis from senior managers who’d prefer everything to be in SharePoint instead. They don’t necessarily have the heft to get a new tool rolled out company-wide.

Often, RSS readers are seen as a tool that might benefit a minority of people (the evangelists themselves) and the wider uses across the business are either not discussed or not recognised. This gives IT, or other sections of management, the excuse they need to shut down any sort of RSS reader project. Of course, RSS is not just for edge cases, but a useful tool for anyone who has to deal with lots of incoming information, from marketing to competitive intelligence to research to development… the list goes on and on. Yet if it can be characterised as just for a minority, it can be side-lined and binned.

The Catch-22 attitude – if a technology isn’t used by the majority then it’s not going to be rolled out company-wide, meaning that only a minority can ever use it – is endemic in IT these days. I know that’s a comment likely to bring the IT defenders out of the woodwork, but so often I see IT departments whose only mission is to keep the network secure. Obviously that’s important, but IT is also supposed to be about enabling business, and when IT starts to get in the way of important advances in business technology, hard questions should be asked.

But we can’t lay the blame entirely at IT’s door – it’s more complex than that. It’s partly to do with the immaturity of social tools in business, and the propensity for evangelists to fight on alone instead of seeking external expert advice to bring in an ally. It’s also partly to do with the anti-technology culture that I see rife in some British businesses. It’s partly to do with management’s reluctance to see social tools as a suite, preferring to look for a “quick fix”, which of course doesn’t exist, or engaging in tech tokenism: “Oh, we have a blog, we get 2.0.”

Yet I am also rather worried by the fact that Newsgator seems to be the only kid on the block these days. There are a number of different blogging platforms, with WordPress and Movable Type being the main contenders, and several wiki platforms, including Socialtext, ThoughtFarmer and Confluence. So why aren’t there more RSS aggregators pitching for the enterprise market? Where’s the competition? Newsgator might be doing fine, but it should be only one of a number of companies providing enterprise RSS solutions, which, as far as I can tell, isn’t the case.

Of course, there are no easy answers, because this really isn’t about the tech as much as it’s about people. It’s about demonstrating the benefits, communicating use cases, reaching and persuading decision makers, and supporting evangelists. None of that can be done easily, quickly, or simply. Can we really expect Newsgator to turn around the attitudes of the tens of thousands of people needed to create a genuine sea change? Enterprise RSS readers can help companies organise and filter information, which is a critical business function in a range of industries. But with Newsgator the last company standing, will they be able to prove RSS’ worth before it’s too late?

Set-top box and game console as stealth RSS adoption tools

Recently, I’ve been devoting too much of my quality time to twiddling with my MythTV setup. It gives my old Dell Latitude CPx PIII machine something useful to do. After getting the system up and running, I went the full monty and installed the Myth plugins, which turned a neat little free TiVo-esque setup into so much more, like a media centre with RSS goodness. I just wish that I could have my TV or radio playing in a small window as I do that. And the Myth weather centre with the great satellite animation beats anything I can easily get on any UK website. (The BBC site is getting better, but the navigation is a mess.)

UPDATE: Just as I was thinking about RSS on set-top boxes, I found this story about the Associated Press creating an RSS news feed for the Nintendo Wii. Wow. Except, it’s not RSS. I assumed “news feed” meant it was powered by RSS. No, my gaming friends tell me. Still, an interesting way to syndicate news, no matter what the technology. Gizmodo has some screen shots. Nice mash-up. Wii owners, let us know how this works.

People talk about RSS being an edge case activity, but that really misses the point. RSS is a powerful tool in its own right, but now, we’re seeing how RSS really unlocks your content from your website, opening up a world of syndication opportunities. It will be the applications where RSS is invisible to the user that really drive adoption, and media companies are only now beginning to scrape the surface.
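To make that “unlocking” concrete, here is a toy sketch in Python of the publishing side: wrapping a couple of stories in a minimal RSS 2.0 envelope that an aggregator, set-top box or games console app could then consume. The titles and links are invented, and a real site would lean on its CMS’s feed support rather than hand-rolling the XML:

    # A toy illustration of syndication: wrap stories in a minimal RSS 2.0 feed.
    # The titles and links are made up for the example.
    from xml.sax.saxutils import escape

    stories = [
        {"title": "Example headline one", "link": "http://example.com/story1"},
        {"title": "Example headline two", "link": "http://example.com/story2"},
    ]

    lines = ['<?xml version="1.0"?>', '<rss version="2.0">', '<channel>',
             '<title>Example site</title>',
             '<link>http://example.com/</link>',
             '<description>Stories unlocked from the website</description>']
    for s in stories:
        lines.append('<item><title>{}</title><link>{}</link></item>'.format(
            escape(s["title"]), escape(s["link"])))
    lines += ['</channel>', '</rss>']

    print("\n".join(lines))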


Is Flock the ultimate blogging tool for journalists? Almost.

I first used Flock last year after meeting Chris Messina in Paris. He was working to get the word out about the read/write browser at the time. I really liked the idea, partially because it just makes sense as a concept. With blogs, photo-sharing sites such as Flickr and social bookmarking sites such as Del.icio.us, it makes sense to have support for these social tools at the browser level.

I have to admit, I downloaded it in December, wrote one blog post and quickly decided that it wasn’t ready for prime time. The tools didn’t work as advertised. I couldn’t even get it to work with my Flickr account, and it made life more difficult, not easier.

That was then. This is now. A few weeks ago, as I was looking for an RSS reader and other blogging tools to make life easier for my new colleagues at the Guardian, I downloaded Flock again. It’s now my default browser at work. The RSS reader alone is pretty good. RSS is the most under-utilised technology for journalism bar none. For journalists wanting to use RSS, Flock is definitely worth a download (and this article is worth a read). It’s not as full-featured as NetNewsWire, but it’s damn good.

And from a blogging standpoint, it’s better than Sage, my favourite RSS plug-in for Firefox. If you see a post in your feed reader you want to blog, just click the blog button and up pops a window for a new blog post.

I actually like the uploader tool for Flickr photos better than Flickr’s own tool, although truth be told I haven’t used the Flickr uploader in a few months. But even more than the uploader, I like the fact that with a click, I can create a new blog post from my Flickr photos. I can easily see the pictures of my Flickr friends, too, which is a nice feature for personal use.

It has all the search functionality of Firefox and more: it includes all of the Firefox search plug-ins, and you can also set it to search your local history.

OK, that was the good. Now for the bad, or at least the work in progress. I liked the spell checker because, as you well know if you’ve read Strange for a while, I really benefit from a good editor. However, I discovered just yesterday that it puts span tags around the words it questions or changes. Initially, I just saw all the span tags and wondered WTF? It was only after a quick Google that I discovered it was the spell checker that was spawning the spans. It doesn’t look like a new problem; there have been blog posts about it since the summer. I hope it gets fixed.

Suw downloaded Flock after finding Firefox 2.0 broke her can’t-live-without session saver plug-in. Here are her impressions:

I am finding that it isn’t behaving well when posting to a blog either – it just sits there and tries to post without ever completing the action (even though it does post). As you say, minor but annoying.

I also have a problem with the behaviour of their search bar – the sub-menu comes up whenever you click in the search area, instead of when you click on the G, (which is Firefox behaviour) meaning that when I am trying to select all by triple-clicking, it doesn’t work so well.

I have to admit, I am still liking Firefox better than Flock, but determined to still give it an honest trial

The HTML code is not entirely clean. I’m just looking at the source code of this post. The code definitely needs a tidy up.

But it’s getting there. Beginning bloggers could definitely do worse, and journalists who find Movable Type or WordPress’s interface daunting or difficult will find it much easier. It’s come a long way in the last year. I’m hoping that development continues and the bugs and quirks get ironed out.


Blogged with Flock

Fighting ‘feed intimidation syndrome’

Tammy Green takes my post about RSS overload and turns it into a great guide for people who want to start using RSS but really aren’t sure where to start.

I agree with Tammy that the blogosphere, and RSS, can be very intimidating for those who are just starting to feel their way, and think her suggested methodology is eminently sensible:

  • Start with a list of people or authors whose opinions you know and respect, and then check if these folks have blogs.
  • Subscribe to their feeds, if they have them…
  • Live with the feeds you’ve found for a few days and then ruthlessly delete those that don’t add value to the topic you’re pursuing.

Tammy has some other intermediate steps, but that last one is the one that I think is both the most important, and the hardest.

How metafeeds will lead the way to RSS nirvana. Maybe.

I have blogged before about RSS overload, the problem of simply having too many feeds in your aggregator to be able to read them all. Now Bill Burnham gives it a name, Feed Overload Syndrome, and discusses how “RSS threatens to sow the seeds of its own failure by creating such a wealth of data sources that it becomes increasingly difficult for users to sift through all the ‘noise’ to find the information that they actually need.”

He then describes the problem in detail and discusses possible solutions. Syndicating the results of keyword searches instead of actual blogs, he says, is not an ideal approach for three reasons: many RSS feeds are excerpts, not full posts, which prevents comprehensive indexing; keyword searches become less effective the more data you index; and keywords can have multiple meanings, which produces noise in the results.

The new Technorati tag system is also ‘fundamentally flawed’ in his view:

The problem at the core of tagging is the same problem that has bedeviled almost all efforts at collective categorization: semantics. In order to assign a tag to a post, one must make some inherently subjective determinations including: 1) what’s the subject matter of the post and 2) what topics or keywords best represent that subject matter. In the information retrieval world, this process is known as categorization. The problem with tagging is that there is no assurance that two people will assign the same tag to the same content. This is especially true in the diverse “blogsphere” where one person’s “futbol” is undoubtedly another’s “football” or another’s “soccer”.

I agree that this is a big problem with tagging, if what you are aiming to achieve is a flawless, cross-referenced database of blog posts. In an ideal world, that would be nice, but this is not an ideal world and people are used to the internet not working quite right. Users learn how to rephrase their search terms to improve results, and once Technorati allows for more complex tag searches or starts to produce clustered search results, the semantic issue becomes less important. (Although I doubt it will ever become irrelevant, regardless of what is done.)

Instead, Bill Burnham believes that the way to RSS nirvana is through the use of metafeeds – “RSS feeds comprised solely of metadata about other feeds”.

Combining meta-feeds with the original source feeds enables RSS readers to display consistently categorized posts within rich and logically consistent taxonomies. The process of creating a meta-data feed looks a lot like that needed to create a search index. First, crawlers must scour RSS feeds for new posts. Once they have located new posts, the posts are categorized and placed into a taxonomy using advanced statistical processes such as Bayesian analysis and natural language processing. This metadata is then appended to the URL of the original post and put into its own RSS meta-feed. In addition to the categorization data, the meta-feed can also contain taxonomy information, as well as information about such things as exact/near duplicates and related posts.

RSS readers can then request both the original raw feeds and the meta-feeds. They then use the meta-feed to appropriately and consistently categorize and relate each raw post.
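As a very rough sketch of the pipeline Bill describes, here is what a metafeed builder might look like in Python, with a crude keyword lookup standing in for the Bayesian analysis and natural language processing he mentions. The category names and keyword table are invented for illustration:

    # A toy metafeed builder: crawl source feeds, assign each post a category,
    # and emit the metadata keyed by the post's URL. The keyword table is a
    # stand-in for real statistical categorisation.
    import feedparser

    KEYWORD_CATEGORIES = {
        "apple": "Technology/Apple",
        "rss": "Technology/Syndication",
        "election": "Politics",
    }

    def categorise(title, summary):
        text = (title + " " + summary).lower()
        for keyword, category in KEYWORD_CATEGORIES.items():
            if keyword in text:
                return category
        return "Uncategorised"

    def build_metafeed(feed_urls):
        metafeed = []
        for url in feed_urls:
            for entry in feedparser.parse(url).entries:
                metafeed.append({
                    "post_url": entry.get("link", ""),
                    "category": categorise(entry.get("title", ""),
                                           entry.get("summary", "")),
                })
        return metafeed

    # A reader would then merge this metadata with the raw feeds, so that
    # posts from different sources land in one consistent taxonomy.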

The benefits of using metafeeds as outlined by Bill look great. You would be able to find related documents, eliminate duplicates, create custom taxonomies, combine metafeeds and have your information “consistently sorted and grouped into meaningful categories”.

I have to admit, that sounds great. It would be wonderful to be able to create complex search strings and to get a feed back from the web that would contain only relevant posts and no duplicates. It would indeed be a form of RSS bliss.

It won’t, however, solve the problem of RSS overload – it is likely that it will just make it worse. Bill’s fix is a technical solution to a non-technical problem, and as such it is only half a fix.

We have always lived in a world where there was more information available than any one person can comprehend, but before email, the internet, blogs and RSS feeds, the limiting factor was not the existence of the information but gaining access to it. The form of the information limited the speed with which it could be accessed: having to go to a library, find the right book or journal and turn the pages, reading them one by one; gaining an introduction to an expert and persuading them to sit down with you and discuss the matter at hand; or doing empirical studies in order to reveal the information sought. It all took time.

Now the data we seek is easily accessible and the problem has shifted – it’s not finding information that’s the issue, it’s finding the right amount of the right information. The limiting factor is no longer access but discrimination. There is so much information available that it’s hard to know which bits to trust.

Anyone who paid attention at university learnt that the way you do library research is to cross-reference your sources – you can’t trust one single source to be telling the truth, so you learn to triangulate. The more sources that tell you that zebras are black and white, the more you believe it. Then you learn to weight your sources by credibility and reputation. If Learnéd Academic Journal tells you that zebras are black and white, then you feel confident that all other sources are going to agree with that, and it’s easier then to discount the Tabloid Freakshow Magazine article that claims to have discovered a purple zebra.

That’s basic research methodology. Cross reference. Consider the source. Keep a bibliography. And it’s a hard, hard habit to break, even for people who didn’t know that they were doing it.

RSS overload is partly to do with trying to triangulate the ‘truth’ from too many sources. There are many blogs devoted to Macs, for example, and the urge is to read them all to see what each one is saying, to compare the information in order to draw some conclusion as to what is most likely to be true. In blogging, there really aren’t any Learnéd Academic Journal-type sources with the sort of standing that allows you to immediately trust them. There are many reliable blogs written by many well-informed people, but it is difficult to tell which they are until you have completed your triangulation, reached your own conclusion and found that it syncs with what your now trusted blog tells you.

Of course, this is not necessarily a bad thing, as many previously trusted data sources are being shown to be less than trustworthy, but we do have to recognise that this whole process of building up a list of trusted blogs takes time and effort. Although to some degree trust can be passed on to other readers through word of mouth recommendations, we are still doing more work to locate trusted sources than we used to.

Another problem not solved by Bill’s metafeeds is that of completism. If you’ve ever met a rabid collector of stuff then you have probably met a completist, someone who just can’t bear not to have every last Star Wars toy, or every last scrap of Elliott Smith memorabilia. That’s what makes collectors collectors.

Many bloggers are completists too – information completists. To go back to the Mac example, you may rapidly decide which feeds are most reliable and which are mainly talking rubbish, but that doesn’t mean you are going to delete the rubbish feeds from your aggregator because there is the possibility, however slim, that they might just break the rumour of the G5 PowerBook that you’ve been desperately waiting for all these months.

Then there are the long link trails left for us to follow when we are researching our next post. You come across an interesting post, it contains links, which you follow, and then that contains more links which seem relevant so you follow those too… and then you check Technorati and read the posts you find there, and they lead to more and more posts and before you know it you’ve spent a day researching a blog post that is only two paragraphs long.

Information completism is dangerous – it leads to chronic information overload and can turn into a form of ‘legitimate procrastination’. Because link trails are convoluted and potentially exceedingly long, it’s easy to over-research instead of actually get on with the post.

The only cure is to accept that we are human and flawed and we cannot possibly know everything about everything. We can’t even know everything about one thing, because there is too much to know, too many perspectives to take on board, too many angles to look at it from. We cannot and should not attempt to read every post and comprehend everyone’s point of view on a subject.

Instead we should refine our lists of sources down to a few trusted writers, and let the rest go. Is the Mac idiot whose blog makes you fume really going to break news about a new G5 PowerBook? No. Ditch it. Is reading every post about RSS really going to make your post about RSS overload any better? No. Read what you need then get on with the writing.

If anything, Bill’s metafeeds could well add to the problem of RSS overload by adding more sources to the mix. Instead of cutting down the number of feeds people try to read, it will add to them by providing alternative concretions of data which supplement existing sources rather than supplant them. This is because of the third flaw in his plan – blogs are social, and his fix is technological.

Most of the blog feeds I read on a daily basis I read for social reasons rather than informational reasons. I have 56 feeds in my ‘friends/dailies’ group in NetNewsWire, another ten under ‘acquaintances’. None of these feeds have anything to do with information per se. They could not be replaced by any sort of keyword search and metafeeds would be simply irrelevant in this context. I read them because I want to know what these people are up to – they are friends or people I wish were friends.

But even here, where you would think that the territory is fairly well defined, there is a problem of bloat. Social networking is great, it allows you to meet a whole bunch of interesting people you would never otherwise have met, but widening your social circle also means you have more friends and acquaintances to keep up to date with. Whilst individuals may not expect you to read their blog, (indeed, I remain in a state of permanent surprise that anyone reads any of my blogs at all), there remains a nebulous feeling that one really ought to. I’m now connected to a ludicrous number of people, and in all honesty there is no way I can read everyone’s blog.

The problem of RSS overload is not completely technological and a technological fix will not work. Instead it is partly technological, partly cultural, partly social, and partly down to our own personality quirks and habits. Metafeeds may help us find more relevant information more easily, but they won’t cure the information overload problem. Only we can do that, by cutting down on the number of feeds we read, the number of tabs we leave open in Firefox, and the number of people whose blogs we follow.


500 down, 3061 to go

I’ve been gathering feeds in my Bloglines aggregator for some time now, hoarding them like a bower bird in a tinsel shop, weaving them together into one unholy unread mess. A few months ago I had a flurry of half-hearted search activity for the perfect aggregator, and although I think I concluded then that the RSS plug-in for Firefox was nifty and that BlogMatrix Jäger was also worth a look, my nomadic, non-laptop-owning lifestyle of the time meant that a web-based aggregator was the only serious option, so I stayed with Bloglines.

At the beginning of this week I had 310 feeds showing around 25,000 unread posts. I had toyed with the idea of declaring RSS bankruptcy and just starting again, but I was getting increasingly unhappy with the chaotic state of my feeds, and deep down I knew that hitting ‘mark all posts read’ would do nothing to solve the problem in the long run.

There were two issues. Firstly, I never had enough time to sit and read all my feeds, or even to work out which ones I could safely mark as read whilst actually leaving them unread. Thus I would pick which feeds to read based on which had the lowest number of unread posts (anything in double figures was likely to get ignored; triple figures ensured I wasn’t gonna touch it for a goodly long time). Secondly, although I had made a stab at categorising them through the use of folders, they really were all over the place and utterly chaotic. This meant that every time I glanced at Bloglines I was confronted with one fugly mess.

Aggregator crisis point had been reached.

The advantage of hanging out with well informed blog-geek Mac-obsessives is that when I whine about needing a new aggregator, I am given advice, and I happily make the assumption that whatever is recommended is going to be good. So over the last couple of days I have migrated my OPML (someone, please sort out some sort of OPML standard so that I can export/import without having to manually fix crappy, import-snafuing code) from Bloglines to NetNewsWire.
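For what it’s worth, the format itself is simple enough that a short script can rescue a mangled export. Here is a rough sketch in Python, assuming the common convention of outline elements carrying an xmlUrl attribute; the filenames are just examples:

    # Clean up an exported subscription list: keep only outlines that point at
    # a feed, drop duplicates, and write a minimal OPML file back out.
    import xml.etree.ElementTree as ET

    def clean_opml(in_path, out_path):
        tree = ET.parse(in_path)
        seen = set()
        root = ET.Element("opml", version="1.0")
        head = ET.SubElement(root, "head")
        ET.SubElement(head, "title").text = "Cleaned subscriptions"
        body = ET.SubElement(root, "body")

        for outline in tree.iter("outline"):
            url = outline.get("xmlUrl")
            if not url or url in seen:
                continue  # skip folders and duplicate feeds
            seen.add(url)
            ET.SubElement(body, "outline", type="rss",
                          text=outline.get("text", url), xmlUrl=url)

        ET.ElementTree(root).write(out_path, xml_declaration=True,
                                   encoding="utf-8")

    clean_opml("bloglines-export.opml", "cleaned.opml")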

Immediately my unread headlines list diminished to less than 4500, just because NNW only pulls down the latest 30 headlines, instead of the maximum of 200 that Bloglines marks as unread before it stops counting. I managed to quickly delete 25 blogs I knew I didn’t need anymore, and easily sorted the rest into folders. Sitting now on the train back to Dorset, I’ve read through around 500 posts, because NNW caches them locally so I don’t need to be connected in order to read.

At last, I feel like I am in control of my aggregator again. Instead of feeling overwhelmed by the amount of information that I ought to be absorbing, instead of feeling scared to open my aggregator because the unread posts are gonna overtop any second and flood my poor little brain, I feel like I have a nice, tidy resource that I can dip into any time I want. Of course, much of this is an illusion, facilitated by a folder cunningly called ‘blogs/tech/stuff’ which contains pretty much everything that’s currently uncategorised, but I can cope with that act of wilful self-deception.

All this, the offline reading, the chilling out with my friends’ feeds, the feeling of regained control, has been reinvigorating. There have been blogs of friends that I’ve not read in ages because I felt like I ‘should’ be reading blogs related to work, even though frequently those are some of the least interesting blogs to read. No one can begrudge me spending a train journey reading through non-work stuff, not even me, and I’m the worst workaholic I know.

Thing is, it’s reading the unrelated stuff, the fun stuff, that is important. It’s through picking up on a random comment by someone else that somehow fits in just so with something that someone else said and something that I was thinking that pokes my brain and gives me that a-ha! moment that I constantly seek. It’s through faffing and playing around on the edges of things and allowing my brain to synthesise ideas without the imposition of expectation or structure that I stand the greatest chance of coming to some new understanding. It’s through finding a gem of a post that I regain/retain my love for blogging – and, doing what I do, maintaining a love for blogging is essential.

Shop by RSS with Woot!

Woot! is a webshop which specialises in “buying stuff cheap”, usually electronic gadgets, and then selling it online at a heavy discount. Each day Woot! makes one new product available on the site, which stays there for 24 hours or until it sells out. If you fancy the gadget and like the price, you have one day to buy it, then it’s gone forever.

Today’s gadget, for example, is an Archos Ondio 128MB MP3 Player/Ripper with FM Radio, which is selling for $60 against a usual price of ~$150.

The really cool thing about Woot! is that you can get an RSS feed and shop via your aggregator. Now, that really is a bargain.
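If you wanted to script the check rather than just subscribe, it is only a few lines. Here is a toy sketch in Python using feedparser, with a placeholder URL standing in for whatever feed address Woot! actually publishes:

    # A toy daily-deal checker: pull the feed and print the newest item.
    # The URL is a placeholder, not Woot!'s actual feed address.
    import feedparser

    feed = feedparser.parse("http://example.com/woot.rss")
    if feed.entries:
        latest = feed.entries[0]
        print("Today's item:", latest.get("title", "(no title)"))
        print("Buy it here: ", latest.get("link", ""))
    else:
        print("No items in the feed (maybe it sold out).")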

(Via A Penny For…)