Why Ratings Alone Never Tell the Full Story – Messaging Studies
May 9th, 2017 by John Richelsen
Here’s a common issue we’ve come across in messaging studies – picking the true winner. Tell me if this seems familiar. You’re shown a series of messages about a brand and asked to rate each on a 5-, 7-, or 10-point scale for how favorable, believable, valid, etc. it is for that brand. The message with the highest average score should be the winner, right?
We’ve seen many, many studies where the “winner” is separated from the rest of the group by a fraction of a point. Is the message that rates 8.4 truly superior and game-changing compared to the one that rated 8.1? Why do you think these messages all tend to clump at the top? The answer is simple: they’re all strong messages. You’re not comparing “Brand X has top-quality products” to “Brand X has substandard customer service.” No, you’re likely rating top-quality products vs. superior customer service vs. great value for the price, and so on.
So maybe you say, “Well, what if I have the respondent rank these 10 messages in order so I know which is the top one?” Again, that doesn’t tell you the whole story. How do you know whether message 1 is 10% more important than message 2? Maybe it’s only a fraction better. Or maybe it’s 30% better. A rank order alone gets you no closer to the truth.
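The problem with rank orders can be shown in a few lines of Python. The message names and ratings below are purely illustrative, not from any study: two hypothetical panels produce the exact same rank order even though one has near ties and the other has huge preference gaps.

```python
# Illustrative data only: two panels rank the same three messages
# identically, yet the underlying preference gaps differ widely.
panel_a = {"Message 1": 9.0, "Message 2": 8.9, "Message 3": 8.8}  # near ties
panel_b = {"Message 1": 9.0, "Message 2": 6.5, "Message 3": 4.0}  # big gaps

def rank_order(ratings):
    # Sort messages from highest to lowest average rating.
    return [m for m, _ in sorted(ratings.items(), key=lambda kv: -kv[1])]

print(rank_order(panel_a))  # same order...
print(rank_order(panel_b))  # ...but a very different story underneath
```

Both calls print the identical list, which is exactly the point: the ranking exercise throws away how much better each message is than the next.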
At Level 7, we employ a methodology we call the Message Engagement Meter to overcome this pitfall. Here’s a case study. Level 7 was working with an agency partner to test messaging for a state college. We showed high school sophomores and juniors 13 different messages – each wonderfully written by the agency. So how were the results? As expected, 11 of the 13 messages scored between 8.5 and 9.1. And when respondents ranked them in order? We were only able to narrow it to six messages within that top 10%. Still no winner.
Using our Message Engagement Meter, which asks respondents to choose their most and least preferred messages from repeated paired sets, we found the most motivating message – a clear number one.
For the college’s most important target audience, “A university with high quality academic programs” was nearly three times more preferred than its closest competitor when compared against the other 12 statements. And in case you’re curious, high-quality academics rated second overall on the 10-point scale and third in the ranking exercise!
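The most/least-preferred approach described above resembles best-worst scaling, where each message earns a score from how often it is picked as most preferred minus how often it is picked as least preferred, relative to how often it was shown. Here is a minimal sketch of that counting logic; the message names, tasks, and choices are all hypothetical, and this is not Level 7's actual scoring model:

```python
# Hypothetical best-worst counting sketch. Each "task" shows a subset of
# messages; the respondent picks one most preferred and one least preferred.
messages = ["Quality academics", "Great value", "Campus life", "Career outcomes"]

tasks = [
    # (messages shown, most preferred, least preferred) -- invented data
    (["Quality academics", "Great value", "Campus life"], "Quality academics", "Campus life"),
    (["Great value", "Campus life", "Career outcomes"], "Career outcomes", "Campus life"),
    (["Quality academics", "Campus life", "Career outcomes"], "Quality academics", "Campus life"),
    (["Quality academics", "Great value", "Career outcomes"], "Quality academics", "Great value"),
]

def best_worst_scores(tasks):
    shown = {m: 0 for m in messages}
    best = {m: 0 for m in messages}
    worst = {m: 0 for m in messages}
    for subset, most, least in tasks:
        for m in subset:
            shown[m] += 1
        best[most] += 1
        worst[least] += 1
    # Count-based score: (best picks - worst picks) / times shown, range -1 to 1.
    return {m: (best[m] - worst[m]) / shown[m] for m in messages if shown[m]}

scores = best_worst_scores(tasks)
for m, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{m}: {s:+.2f}")
```

Because every choice forces a trade-off, the resulting scores spread out even when simple ratings would all clump near the top of the scale, which is what makes a clear winner visible.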
The moral of the story? Message testing is very different from testing product features or market tracking. It requires a well-thought-out process that goes well beyond the ratings approach. Because is 8.4 really better than 8.1?