When evaluating online purchases, a product’s rating and number of reviews seem helpful to an unsure consumer.
But how often do we scrutinize those figures to learn their true meaning?
The study, which appeared Aug. 21 in Psychological Science, finds that most people fail to do a simple statistical task when viewing online ratings and reviews, leading them to purchase inferior products.
More reviews, inferior quality
When shopping online, consumers engage in a type of social learning by which they become informed from the decisions of others. For example, you’re probably more likely to purchase a book at the top of the New York Times’ best-sellers list or buy an app that’s been downloaded millions of times.
But observing other people’s choices is only one part of social learning. The other is observing the outcomes of those choices through mechanisms like online star ratings. And how people interpret – or fail to interpret – these data negatively affects their decision-making.
The researchers presented 138 adults with a series of cellphone cases (in pairs) to purchase. Each case was accompanied by its average star rating and number of reviews. The star ratings varied minimally, but one of the cases always had 125 more reviews than the other.
Across two experiments, the researchers found that participants preferred the case with more reviews, even though, by the experiment’s design, that case was likely to be inferior. (The researchers assessed product quality not by stars or reviews alone, but by analyzing millions of reviews on Amazon.com.)
Making the wrong call
Think about it this way. Twenty-five people review a product and award it an average rating of 2.9 (out of five stars). While that rating is below average, with so few reviews there is a real possibility that the product is not as poor as it appears, Powell said.
Now imagine 150 consumers give that same product a 2.9 rating. That’s six times as many people rating the product below average. That should be a stronger signal of the product’s poor quality.
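The intuition can be sketched with a quick back-of-the-envelope calculation. A minimal Python illustration, assuming (hypothetically, for the sake of the example) a per-review standard deviation of 1.2 stars – a figure not taken from the study:

```python
import math

def rating_interval(mean, n, sd=1.2, z=1.96):
    """Approximate 95% confidence interval for the true average rating,
    given `n` reviews and an assumed per-review standard deviation `sd`."""
    se = sd / math.sqrt(n)  # standard error shrinks as reviews accumulate
    return (mean - z * se, mean + z * se)

lo25, hi25 = rating_interval(2.9, 25)     # few reviews: wide interval
lo150, hi150 = rating_interval(2.9, 150)  # many reviews: narrow interval
print(f"25 reviews:  {lo25:.2f} to {hi25:.2f}")
print(f"150 reviews: {lo150:.2f} to {hi150:.2f}")
```

With 25 reviews the interval is wide enough that the product could plausibly be middling; with 150 reviews the same 2.9 average pins the true quality firmly below typical product averages – the stronger signal the article describes.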
Participants took the high number of reviews as a signal of quality, Powell said, rather than as an indicator of how reliably the average rating reflects the product’s true quality. Instead of performing a fairly simple statistical inference to reach that conclusion, consumers take the number of reviews at face value.
“What they’re doing is simply weighing cues,” Powell said. “People seem to have this belief that popularity is good and are willing to use that as an important cue when making decisions.”
Powell and his fellow researchers found evidence of this trend beyond their experiments. They examined 15 million reviews of more than 350,000 actual products on Amazon.com and found no relationship between a product’s number of reviews and its average rating.
“It doesn’t necessarily mean that better things don’t become more popular,” said Powell, “but as a consumer, when you’re looking at this data point (number of reviews), it’s not telling you anything.”
Following the herd
Overcoming this bias is difficult, Powell said, because consumers find comfort in popularity.
“There are lots of contexts where following the herd is the rational thing to do,” he said. “If there isn’t enough information available, that can be a smart thing to do.
“But what we’re arguing is that you have more information than just what people did; you also have what happened – did they like it, were they happy or unhappy with their purchase.”
Powell suggests consumers focus on whether a product’s score is above or below average – product averages usually range from 3.7 to 4, depending on the product’s category, he said – and then weigh that rating against the number of reviews. Considering both figures together should give consumers confidence that a product’s rating reflects its true quality.
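That advice can be sketched as a simple decision rule. The code below is a hypothetical helper, not from the study: the category average of 3.85 and the per-review standard deviation of 1.2 are assumed, illustrative numbers:

```python
import math

CATEGORY_AVERAGE = 3.85  # assumed: the article says averages run roughly 3.7 to 4

def signal(mean, n, sd=1.2, z=1.96):
    """Return 'above', 'below', or 'unclear' depending on whether the
    confidence interval around `mean` clears the category average."""
    margin = z * sd / math.sqrt(n)
    if mean - margin > CATEGORY_AVERAGE:
        return "above"
    if mean + margin < CATEGORY_AVERAGE:
        return "below"
    return "unclear"

print(signal(3.6, 10))   # few reviews: the comparison is inconclusive
print(signal(3.6, 200))  # many reviews: same rating, clearly below average
```

The point of the sketch is the one the article makes: the review count is not itself a mark of quality, but it does tell you how seriously to take a rating’s distance from the average.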
The study’s co-authors are Jingqi Yu, Indiana University; and Melissa DeWolf and Keith Holyoak, UCLA.
By Milenko Martinovich
“Consumers Misuse Online Reviews and Ratings When Buying” was originally published on the Stanford University website.