
Browsing Google Product listings recently, I noticed that the seller ratings seemed much higher than you would expect based on the breakdown of reviews provided. Companies with many 1-star reviews seemed to enjoy quite high averages, which didn’t seem to add up. In the interests of science, I took a closer look.

I searched Google for [television], selected the “products” tab and then clicked on the first result. As it turns out, a rather nice LED TV. This page then displays the top 10 “most relevant” sellers:

[Image: Top 10 most relevant sellers in a Google Product search]

The individual seller pages then contain a seller rating:

[Image: Average seller rating]

...and also a breakdown of review scores:

[Image: Breakdown of star ratings]

So far so good. I collated the data from the top 10 sellers and worked out an average for each: total stars divided by number of reviews, i.e. the plain arithmetic mean. Here’s that figure against Google’s own average, with the number of reviews each is based on (a quick script re-checking the gaps follows the table):

Rank  Seller                   Arithmetic avg.  Reviews  Google's avg.  Reviews  Difference
1     Amazon.co.uk             4.5              3,712    4.8            3,726    0.3
2     Currys                   2.4              600      3.5            688      1.1
3     Play.com                 3.5              1,915    4.4            1,917    0.9
3     pcworld.co.uk            2.1              293      2.7            309      0.6
4     simplyelectricals.co.uk  3.7              35       3.8            35       0.1
4     Comet                    3.0              1,059    3.9            1,161    0.9
5     Pixmania.co.uk           3.3              1,388    4.0            1,391    0.7
6     Richer Sounds            3.4              76       4.5            76       1.1
7     Tesco                    3.5              1,587    4.1            1,608    0.6
8     Laskys.com               4.4              1,885    4.6            1,885    0.2
9     RGB Direct               4.4              1,360    4.4            1,362    0.0
10    365 Electrical           2.9              85       4.5            85       1.6
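
For the curious, here’s a throwaway Python script that re-checks the gaps, using just the averages as listed in the table:

```python
# Per-seller gap between Google's displayed average and the plain
# arithmetic mean, as (arithmetic, Google) pairs from the table above.
sellers = {
    "Amazon.co.uk": (4.5, 4.8),
    "Currys": (2.4, 3.5),
    "Play.com": (3.5, 4.4),
    "pcworld.co.uk": (2.1, 2.7),
    "simplyelectricals.co.uk": (3.7, 3.8),
    "Comet": (3.0, 3.9),
    "Pixmania.co.uk": (3.3, 4.0),
    "Richer Sounds": (3.4, 4.5),
    "Tesco": (3.5, 4.1),
    "Laskys.com": (4.4, 4.6),
    "RGB Direct": (4.4, 4.4),
    "365 Electrical": (2.9, 4.5),
}

gaps = [google - arithmetic for arithmetic, google in sellers.values()]
print(f"average gap: {sum(gaps) / len(gaps):.2f} stars")       # roughly 0.7 stars
print(f"Google higher in {sum(g > 0 for g in gaps)} of {len(gaps)} cases")
```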

So, in all but one case, Google’s average is quite a bit higher than the raw numbers would suggest (on average 0.7 stars higher, around 13% of the full 5-star scale). In no case was Google’s average lower than the arithmetic one. In one case, a seller with forty 1-star reviews, thirty-six 5-star reviews and nine in between displays a 4.5/5 average on Google. This just doesn’t seem right.
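
To make that last example concrete, here it is in Python. The page doesn’t show exactly how the nine middle reviews split between 2, 3 and 4 stars, so the split below is an assumption; wherever they fall, the mean can only sit between about 2.8 and 3.0:

```python
# Arithmetic mean from the star breakdown: forty 1-star reviews,
# thirty-six 5-star reviews and nine somewhere in between.
histogram = {1: 40, 2: 3, 3: 3, 4: 3, 5: 36}  # middle split assumed

total_stars = sum(stars * count for stars, count in histogram.items())
total_reviews = sum(histogram.values())
print(round(total_stars / total_reviews, 1))  # 2.9, versus the 4.5 Google shows
```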

Why are the ratings always higher?

This is the interesting question. Google themselves offer this explanation:

“We calculate Seller Ratings using a variety of signals beyond just the arithmetic mean in order to make sure Seller Ratings reflect not only the raw quantity of review scores, but also how representative and high-quality the reviews are. We’re constantly refining how we use those signals to give our users as helpful an overview as possible.”

http://support.google.com/merchants/bin/answer.py?hl=en-GB&answer=190657

So, they’re using a weighted system of some kind. But why would it almost always come out higher?

Some sources get more weight than others

This idea can be all but discounted, since even when a seller’s reviews come from only one source, the average is still higher than the arithmetic mean would suggest.

Positive reviews are of higher quality

Perhaps Google’s weighting has determined that positive reviews tend to be of higher quality, and so counts them as “better” than negative ones. That seems slightly counter-intuitive, although people are arguably more motivated to dash off a negative review than a positive one. It would then be a question of how the weighting occurs (see the sketch below).

Similarly, it’s possible that Google uses text analysis and has found negative reviews to be of lower quality. But then, if you’re very unhappy, perhaps your writing skills suffer too ;)
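
For illustration, here’s one way such a weighting could play out. This is purely hypothetical; Google doesn’t publish its formula, and the weights below are invented. The point is just that even a modest down-weighting of negative reviews lifts the average substantially:

```python
# Hypothetical quality-weighted mean; the weights are made up for
# illustration and are not Google's actual signals.
def weighted_rating(reviews):
    """reviews: iterable of (stars, weight) pairs; returns the weighted mean."""
    total = sum(stars * weight for stars, weight in reviews)
    weights = sum(weight for _, weight in reviews)
    return total / weights

# The 40/9/36 breakdown from earlier, with 1-star reviews judged
# "low quality" and given a fifth of the weight of the rest:
reviews = [(1, 0.2)] * 40 + [(3, 1.0)] * 9 + [(5, 1.0)] * 36
print(round(weighted_rating(reviews), 1))  # 4.1, up from a raw mean of ~2.9
```

A sub-3 seller suddenly looks like a 4-star one.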

Positive reviews are given more weight arbitrarily

The more cynical among you will point out that seller ratings are displayed in AdWords:

“If your online store is rated in Google Product Search, you have four or more stars, and you have at least thirty reviews, you’ll automatically get seller ratings with your ads.”

http://adwords.blogspot.com/2011/04/5-simple-ways-to-improve-your-adwords.html

And those ads with positive reviews get a much greater clickthrough rate:

“On average, ads with Seller Ratings get a 17% higher CTR than the same ads without ratings.”
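
Putting the two quotes together, the incentive is easy to see. Here’s the eligibility rule from the first quote as a simple predicate; note from the table above that several sellers (Play.com, Richer Sounds and 365 Electrical among them) only clear the four-star bar on Google’s figure, not on the raw arithmetic mean:

```python
# The AdWords rule as quoted: a seller's ads show star ratings once
# both thresholds are cleared.
def shows_seller_rating(average_stars: float, review_count: int) -> bool:
    return average_stars >= 4.0 and review_count >= 30

print(shows_seller_rating(2.9, 85))  # False on 365 Electrical's raw mean...
print(shows_seller_rating(4.5, 85))  # ...True on Google's displayed figure
```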

So, is it some kind of clever sentiment analysis? Some other kind of text-quality assessment? Or is it just a ploy to attract more clickthroughs? Whichever it is, caveat emptor if you’re using Google Products and factoring seller ratings into your decisions.