When Similarity Beats Expertise - Differential Effects of Patient and Expert Ratings on Physician Choice: Field and Experimental Study

Anne Kranzbühler, Mirella Kleijnen, P.W.J. Verlegh, M. Teerling

Research output: Contribution to journal › Article › Scientific › peer-review

3 Citations (Scopus)
71 Downloads (Pure)

Abstract

Background: Increasing numbers of patients consult Web-based rating platforms before making health care decisions. These platforms often provide ratings from other patients, reflecting their subjective experiences. However, patients often lack the knowledge needed to judge the objective quality of health services. To account for this potential bias, many rating platforms complement patient ratings with more objective expert ratings, which can produce conflicting signals, as these different types of evaluations are not always aligned. Objective: This study aimed to address the gap in knowledge about how consumers combine information from 2 different sources—patients and experts—to form opinions and make purchase decisions in a health care context. More specifically, we assessed prospective patients’ decision making when they consider both types of ratings simultaneously on a Web-based rating platform. In addition, we examined how the influence of patient and expert ratings is conditional upon rating volume (ie, the number of patient opinions). Methods: In a field study, we analyzed a dataset from a Web-based physician rating platform containing clickstream data for more than 5000 US doctors. We complemented this with an experimental lab study with a sample of 112 students from a Dutch university. The average age was 23.1 years, and 60.7% (68/112) of the respondents were female. Results: The field data illustrated the moderating effect of rating volume. When patient advice was based on a small number of opinions, prospective patients tended to base their selection of a physician on expert rather than patient advice (profile clicks beta=.14, P<.001; call clicks beta=.28, P=.03). However, when the group of rating patients grew substantially in size, prospective patients started to rely on other patients rather than the expert (profile clicks beta=.23, SE=0.07, P=.004; call clicks beta=.43, SE=0.32, P=.10).
The experimental study replicated and validated these findings for conflicting patient versus expert advice in a controlled setting. When patient ratings were aggregated from a high number of opinions, prospective patients’ evaluations were affected more strongly by patient than expert advice (mean [patient positive/expert negative]=3.06, SD=0.94; mean [expert positive/patient negative]=2.55, SD=0.89; F1,108=4.93, P=.03). Conversely, when patient ratings were aggregated from a low volume, participants were affected more strongly by expert than patient advice (mean [patient positive/expert negative]=2.36, SD=0.76; mean [expert positive/patient negative]=3.01, SD=0.81; F1,108=8.42, P=.004). This effect occurred despite the fact that participants considered the patients to be less knowledgeable than experts. Conclusions: When confronted with information from both sources simultaneously, prospective patients are influenced more strongly by other patients. This effect reverses when the patient rating has been aggregated from a (very) small number of individual opinions. These findings have important implications for how health care provider ratings should be presented to prospective patients to aid their decision-making process.
Original language: English
Article number: e12454
Number of pages: 12
Journal: Journal of Medical Internet Research
Volume: 21
Issue number: 6
DOIs
Publication status: Published - 2019

Keywords

  • decision making
  • choice behavior
  • judgment

