Metrics, Part 4: Subjective measurements

(If you haven’t already read them, you might like to take a look at Metrics, Part 1: The webstats legacy; Metrics, Part 2: Are we measuring the right things?; and Metrics, Part 3: What are your success criteria?)

In the last instalment of this series I mentioned that sometimes there just aren’t objective metrics that we can use to help us understand the repercussions of our actions. Yet much of what we try to achieve with social media projects is exactly this sort of unmeasurable thing.

No amount of understanding of page views, for example, is going to tell us how the people who have viewed that page feel about it. Did they come because they were interested? Or because they were outraged? Is your comment community a healthy one or a pit of raging hatred? Are your staff better able to collaborate now that you have a wiki, or are they finding it difficult to keep another datastore up to date?

There are two ways round this:

  • Surveys
  • Subjective measurement scales

Surveys are sometimes the only way you can get a sense of how well a social media project is going. All the metrics in the world won’t tell you if your staff are finding their internal blogs useful or burdensome. Random anecdotes are liable to mislead, as you’ll end up relying either on the vocal evangelists, who will give you an overly rosy picture, or on the vocal naysayers, who will give you an overly pessimistic one. The truth is likely to be somewhere in the middle, and the only way you can find out where is to ask people.

Survey questions need to be constructed very carefully, however, to ensure that they are not leading people to answer a certain way. At the very least, make sure that questions are worded neutrally and that the answer options you give cover all the bases. Test and retest surveys, as it’s so easy to get something crucial wrong!

The second way to measure subjective metrics is to create a scale and regularly assess activity against it. If you were assessing the comments in your customer-facing community, for example, you might consider a scale like this:

★★★★★: Lively discussion, readers are replying to each other, tone is polite, constructive information is shared

★★★★: Moderate amount of discussion, readers are replying to each other, tone is polite, some useful information shared

★★★: Little discussion, readers reply only to the author, tone is mainly polite, not much information shared

★★: Discussion is moribund OR tone of discussion is negative and impolite, no information shared

★: Abusive discussion OR discussion is just a torrent of “me too” comments

No stars: No discussion

The idea here isn’t to create an enormous cognitive load but to try and have a consistent understanding of what we mean when we rate something 3 out of 5. This means keeping scales clear and simple, and avoiding any ambiguity such as language which could be misunderstood or which has an inherent value judgement that could sway an assessment.
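
To make that shared understanding concrete, here’s a minimal sketch (in Python; the COMMENT_SCALE name and the condensed wording are just illustrative) of the scale above kept in one place that every rater refers to:

```python
# Hypothetical encoding of the comment-community scale above, so that
# "3 out of 5" means exactly the same thing to every rater.
COMMENT_SCALE = {
    5: "Lively discussion, readers reply to each other, polite, constructive information shared",
    4: "Moderate discussion, readers reply to each other, polite, some useful information shared",
    3: "Little discussion, readers reply only to the author, mainly polite, not much information shared",
    2: "Discussion moribund, or tone negative and impolite, no information shared",
    1: "Abusive discussion, or just a torrent of 'me too' comments",
    0: "No discussion",
}

def describe(score: int) -> str:
    """Return the agreed definition for a score, so raters can sanity-check a rating."""
    return COMMENT_SCALE[score]
```

Keeping the definitions in a single shared artefact is the point here; whether that’s code, a wiki page or a printed card is incidental.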

I would also suggest that more valuable data would come from having a varied group of people rate on a regular basis and then averaging their scores. That should smooth out variations in individual interpretation of the scale and in personal opinion.
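
As a rough sketch of that averaging (again in Python, with made-up raters and scores), you could record each rater’s score per period and report the mean, along with the spread as a quick check on how consistently the scale is being interpreted:

```python
from statistics import mean, stdev

# Hypothetical weekly ratings: each rater scores the community 0-5
# against the shared scale. Names and numbers are made up.
ratings = {
    "week 1": {"Ana": 4, "Ben": 3, "Cat": 4},
    "week 2": {"Ana": 3, "Ben": 3, "Cat": 2},
}

for week, scores in ratings.items():
    values = list(scores.values())
    average = mean(values)
    # stdev needs at least two scores; a high spread suggests raters are
    # interpreting the scale differently and the wording needs tightening.
    spread = stdev(values) if len(values) > 1 else 0.0
    print(f"{week}: average {average:.1f}, spread {spread:.1f}")
```

A consistently high spread is a hint that the problem lies in the wording of the scale rather than in the community being rated.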

Again, I’m going to stress that both these methods need to be put in place and measurement started before a project begins. Thinking ahead is just so worth the effort.

In all honesty, I’ve never had a client do either surveys or subjective scales, mainly because none of them has ever really given enough thought to metrics before starting a project. It’s a shame, because with services like Survey Monkey it’s really not hard to do.