A patient having his blood pressure taken by a physician. (Photo credit: Wikipedia)

Consumers rely on crowdsourced reviews to make decisions on many products and services. It should be no surprise that many patients use the experiences of others to select their doctors. Choosing a competent and experienced physician, however, may be more important than buying a pair of shoes or a set of stereo speakers.

A recent study of the online ratings of urologists finds that online reviews are scant and overwhelmingly positive. Moreover, the anonymity afforded to reviewers may call the veracity of their reviews into question.

The study, published online in the Journal of Urology, analyzed the online ratings and reviews of 500 randomly selected urologists. The researchers created composite ratings from 10 popular physician review websites (PRWs) and categorized the comments from the most popular PRW. Comments were classified as extremely positive or extremely negative when they were overly flattering or overly rude and non-constructive.

The researchers reported that nearly 80% of the urologists had at least one rating (range, 0-64), although composite scores were based on just 2.4 ratings per physician, on average. Eighty-six percent of the ratings were positive, of which 36% were highly positive. Just more than half of the written comments (53%) were positive or extremely positive.

Given that composite ratings were based, for the most part, on a relatively small number of submitted reviews, the researchers conclude that average ratings are highly sensitive to single reviewers. In addition, some of the PRW review scores were based on other dimensions of patient satisfaction, such as accessibility and office wait time, not physician competence. Moreover, the anonymity of the crowdsourced reviews gave the researchers some pause about their veracity:
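A quick hypothetical calculation (the numbers below are invented for illustration, not taken from the study) shows why an average built on two or three ratings is so sensitive to a single reviewer, while a large sample barely moves:

```python
# Hypothetical illustration of how one review swings a small-sample average.
# Ratings are on a 1-5 scale; all values here are invented.

def average(ratings):
    return sum(ratings) / len(ratings)

before = [5, 5]        # two glowing reviews (the study found ~2.4 ratings/physician)
after = before + [1]   # one angry (or fraudulent) one-star review arrives

print(f"before: {average(before):.2f}")  # 5.00
print(f"after:  {average(after):.2f}")   # 3.67

# With a large sample, the same single one-star review barely registers.
many = [5] * 100
print(f"large sample after one 1-star: {average(many + [1]):.2f}")  # 4.96
```

In other words, at 2.4 ratings per physician, one review can drop a composite score by more than a full star, which is the volatility the researchers are describing.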

Ratings can be submitted anonymously on most PRWs, so many physicians fear that angry patients or competitors may abuse these sites.

Quoted on National Public Radio, Chandy Ellimoottil, the lead author of the study, suggests that with so few reviews, a single angry patient can ruin a doctor’s reputation. The anonymity of these reviews also makes it possible for doctors (or their office staff) to self-evaluate.

The results of this study are somewhat disturbing but not altogether surprising. Similar studies of crowdsourced post-publication ratings and reviews (e.g., article ratings, journal comments, tweets, book reviews, Facebook likes) provide similar results — crowdsourced reviews are few, mostly positive, potentially fraudulent, and provide little detail.

When trusting the wisdom of the crowds, it’s important to understand what is meant by “crowd.”

Phil Davis

Phil Davis is a publishing consultant specializing in the statistical analysis of citation, readership, publication and survey data. He has a Ph.D. in science communication from Cornell University (2010), extensive experience as a science librarian (1995-2006) and was trained as a life scientist. https://phil-davis.com/

Discussion

14 Thoughts on "Online Reviews of Doctors — Scant, Volatile, and Overwhelmingly Positive"

I’m concerned that these reviews would match how I’ve read reference librarians get evaluated in research studies. The main traits for positive evaluations are friendliness and likability rather than competence. Users prefer a friendly librarian who gives incorrect answers rather than a gruff librarian with the right ones. My ophthalmologist is a cold fish, but I stay with him because he gives evidence of expertise and concern about my health.

I did not realize that finding a competent doctor was a problem. In any case I cannot figure out what these studies are intended to show. That consumers should not use reviews? Reviews are a wonderful innovation. That consumers are stupid? I do not think so.

Per previous discussions here I would be interested in knowing why a given doctor gets a huge number of negative reviews.

I can speak on a personal level that finding a competent doctor is often a problem. It seems the best way to find one is by personal recommendation, either from a friend who sees that doctor, or from a different doctor that one already sees. If one is new in an area, neither of these channels is open and one ends up with something of a trial-and-error methodology. And even with a network of personal recommendations, what one is seeking from a doctor (paternalistic confidence, a frank discussion of data, etc.) varies quite a bit from person to person. And I’ve found that docs often recommend someone from their own hospital or network, rather than the best choice in the area.

But as noted in my other comment below, questioning the validity of commenting systems is not the same thing as implying that consumers are stupid. It’s a system that offers no tangible rewards for the work done, so why would anyone leave a comment? I buy products online and am inundated by requests from the company that has sold it to me to go back to their site and leave a review. What would my motivation be for spending my rare and precious free time to do that? I think looking at the psychology of commenting and the accuracy and reliability of the results is a reasonable activity.

I am fascinated by your implicit claim that most doctors are incompetent, such that finding a competent one is a problem. My conjecture is that this is more about matching personalities than medical competence. But if you have data to the contrary I am sure the AMA would like to see it.

Beyond that I think it is hugely ironic that someone who depends on voluntary reviews for a living would claim that voluntary reviews are worthless. Heal thyself. In both cases the people who read the reviews are not stupid. Reviews provide useful information, these goofball studies to the contrary notwithstanding.

I have nowhere claimed that most doctors are incompetent.

But having a family member in pain who went from doctor to doctor for close to a year seeking a diagnosis, and having many give up in defeat, finding one who, at least in our case, had the level of competence necessary to bring about a cure, was indeed a difficult task. And I don’t think you should belittle the value of “matching personalities” as the interaction and communication between doctor and patient is an important part of the process.

Again, you should really stop putting words in my mouth. At no point did I call people who read reviews “stupid”, nor did I claim that reviews are “worthless”. I suggested that one should try to rationally and objectively test the accuracy and reliability of the process, and that the psychology of the behavior behind it is also worthy of study. One would think that given your career studying scientists and their behavior, that this sort of research would appeal to you, rather than just assuming that the system is a good one and taking it on faith.

And your sense of irony is misplaced. I rely on a set of reviews that are carefully vetted and recruited with editorial oversight. There are clear differences between seeking expert guidance and throwing open the doors to all comers. Furthermore, the research peer review system offers better incentives for reviewers than the joy of helping Amazon sell more products.

Unlike you, a lot of people seem willing to volunteer their rare and precious time for the public good. Wikipedia alone represents between ten million and a hundred million hours of anonymous volunteer time. People like to help. They want to communicate. That is the core of the social revolution and we do not need your psychologist to understand this.

Indeed there are many people with more spare time than I these days. But just because they’re willing to volunteer their time does not mean the information they provide is informed and correct. Are you suggesting that Wikipedia is free from errors? Are you absolutely certain that the motivation behind every single comment left on every single website is altruism?

If this system is so spectacular and does such good for mankind, why is it unworthy of study? Is it wrong to exhibit curiosity about the world around oneself, and to desire to better understand it? Wouldn’t we all be better served by a better understanding of such systems, ways to make them more accurate, ways to encourage reluctant participants like me to join in? Or are things exactly right just as they are, the perfect world with no chance for improvement?

This particular study is similar to one we discussed a few years ago, noting that the average grade for things on the internet is 4.3 stars out of 5:
http://scholarlykitchen.sspnet.org/2009/11/16/how-meaningful-are-user-ratings-this-article-4-5-stars/

Gaming aside, it’s unclear how well these sorts of ratings and commenting systems actually work to evaluate anything. There seems to be a strong draw for some segment of the population that finds self-validation in declaring to the world that they made the right choice in their decision.

The incentive system for self-selected reviewers favors reviews by the highly satisfied, the highly dissatisfied, or the biased. The average rating is nearly worthless as a statistic. I’m amazed any responsible site would even publish it. Isn’t there an independent organization that can do a scientifically sound study of user satisfaction? Given the dangers to their reputations in the current ratings, I’d think the doctors would want this.

Thank you, Phil Davis, for this post. As an oculoplastic surgeon/ophthalmologist at an academic medical center, I don’t pay much (actually, any) attention to such reviews. My practice is busier than I can manage already. However, I know that my colleagues in private practice monitor their online profiles assiduously, and live in fear of that one negative review that can sink their numbers. Methodologically, I agree with the comments above that the reviews are worthless. As regards finding a “good doctor,” that’s easy: find out from whom physicians and nurses receive their care. It’s analogous to the USNWR rankings of medical centers and various specialties, which are methodologically suspect. Find out where doctors go when they get sick. The results are often surprising.
By the way, thanks for the Scholarly Kitchen. As an emeritus CEO at our institution and now a journal editor, I find many of the postings most helpful.

I’m reminded of the old saw about how to find a good barber. Go into the barber shop and pick the one with the worst haircut. Why? Because they cut each other’s hair…

The Pew Research Center released its Health Online 2013 report today. At present, respondents do not rely heavily on physician review websites:

Health-related reviews and rankings continue to be used by only a modest group of consumers. About one in five internet users have consulted online reviews of particular drugs or medical treatments, doctors or other providers, and hospitals or medical facilities. And just 3-4% of internet users have posted online reviews of health care services or providers.

http://pewinternet.org/Reports/2013/Health-online/Summary-of-Findings.aspx

But most people are not buying drugs and medical treatments so if 20% of internet users are reading these reviews it might be 100% of those consuming them. By the same token the number of people looking for a doctor is very small. Most people only choose a doctor a few times in their lives.

Note that I am not defending rankings, as the data is indeed too sparse and specific for that. The point is that reviews often contain useful information. My wife is an internet shopper and she thinks that people who buy important stuff without looking at the reviews are fools. I agree.
