When Similarity Beats Expertise: Differential Effects of Patient and Expert Ratings on Physician Choice. A Field and Experimental Study.
ABSTRACT: BACKGROUND: Increasing numbers of patients consult Web-based rating platforms before making health care decisions. These platforms often provide ratings from other patients, reflecting their subjective experience. However, patients often lack the knowledge needed to judge the objective quality of health services. To account for this potential bias, many rating platforms complement patient ratings with more objective expert ratings, which can lead to conflicting signals, as these different types of evaluations are not always aligned. OBJECTIVE: This study aimed to fill the gap in understanding how consumers combine information from 2 different sources, patients or experts, to form opinions and make purchase decisions in a health care context. More specifically, we assessed prospective patients' decision making when considering both types of ratings simultaneously on a Web-based rating platform. In addition, we examined how the influence of patient and expert ratings is conditional upon rating volume (ie, the number of patient opinions). METHODS: In a field study, we analyzed a dataset from a Web-based physician rating platform containing clickstream data for more than 5000 US doctors. We complemented this with an experimental lab study with a sample of 112 students from a Dutch university. The average age was 23.1 years, and 60.7% (68/112) of the respondents were female. RESULTS: The field data illustrated the moderating effect of rating volume. When the patient advice was based on small numbers, prospective patients tended to base their selection of a physician on expert rather than patient advice (profile clicks beta=.14, P<.001; call clicks beta=.28, P=.03). However, when the group of patients grew substantially in size, prospective patients started to rely on patients rather than experts (profile clicks beta=.23, SE=0.07, P=.004; call clicks beta=.43, SE=0.32, P=.10).
The experimental study replicated and validated these findings for conflicting patient versus expert advice in a controlled setting. When patient ratings were aggregated from a high number of opinions, prospective patients' evaluations were affected more strongly by patient than by expert advice (mean(patient positive/expert negative)=3.06, SD=0.94; mean(expert positive/patient negative)=2.55, SD=0.89; F(1,108)=4.93, P=.03). Conversely, when patient ratings were aggregated from a low volume, participants were affected more strongly by expert than by patient advice (mean(patient positive/expert negative)=2.36, SD=0.76; mean(expert positive/patient negative)=3.01, SD=0.81; F(1,108)=8.42, P=.004). This effect occurred even though participants considered patients to be less knowledgeable than experts. CONCLUSIONS: When confronted with information from both sources simultaneously, prospective patients are influenced more strongly by other patients. This effect reverses when the patient rating has been aggregated from a (very) small number of individual opinions. These findings have important implications for how health care provider ratings should be presented to prospective patients to aid their decision-making process.
Project description:<h4>Background</h4>Feedback from patients is an essential element of a patient-oriented health care system. Physician rating websites (PRWs) are a key way patients can provide feedback online. This study analyzes an entire decade of online ratings for all medical specialties on a German PRW.<h4>Objective</h4>The aim of this study was to examine how ratings posted on a German PRW have developed over the past decade. In particular, it aimed to explore (1) the distribution of ratings according to time-related aspects (year, month, day of the week, and hour of the day) between 2010 and 2019, (2) the number of physicians with ratings, (3) the average number of ratings per physician, (4) the average rating, (5) whether differences exist between medical specialties, and (6) the characteristics of the patients rating physicians.<h4>Methods</h4>All scaled-survey online ratings that were posted on the German PRW jameda between 2010 and 2019 were obtained.<h4>Results</h4>In total, 1,906,146 ratings were posted on jameda between 2010 and 2019 for 127,921 physicians. The number of rated physicians increased steadily from 19,305 in 2010 to 82,511 in 2018. The average number of ratings per rated physician increased from 1.65 (SD 1.56) in 2010 to 3.19 (SD 4.69) in 2019. Overall, 75.2% (1,432,624/1,906,146) of all ratings were in the best rating category of "very good," and 5.7% (107,912/1,906,146) of the ratings were in the lowest category of "insufficient." However, the mean of all ratings was 1.76 (SD 1.53) on the German school-grade 6-point rating scale (1 being the best), with a relatively constant distribution over time. General practitioners, internists, and gynecologists received the highest numbers of ratings (343,242, 266,899, and 232,914, respectively). Male patients, those of higher age, and those covered by private health insurance gave significantly (P<.001) more favorable evaluations compared to their counterparts.
Physicians with a lower number of ratings tended to receive ratings across the entire rating scale, while physicians with a higher number of ratings tended to have better ratings. Physicians with between 21 and 50 online ratings received the least favorable ratings (mean 1.95, SD 0.84), while physicians with >100 ratings received the most favorable ratings (mean 1.34, SD 0.47).<h4>Conclusions</h4>This study is one of the most comprehensive analyses of PRW ratings to date. More than half of all German physicians have been rated on jameda each year since 2016, and the overall average number of ratings per rated physician nearly doubled over the decade. Nevertheless, we also observed a decline in the number of ratings over the last 2 years. Future studies should investigate the most recent developments in the number of ratings on other German and international PRWs, as well as reasons for the heterogeneity in online ratings by medical specialty.
Project description:As patients become increasingly involved in their medical care, physician-patient communication gains importance. A previous study showed that physician self-disclosure (SD) of personal information by primary care providers decreased patient ratings of provider communication skills. The objective of this study was to explore the incidence and impact of emergency department (ED) provider self-disclosure on patients' ratings of provider communication skills. A survey was administered to 520 adult patients or parents of pediatric patients in a large tertiary care ED during the summer of 2014. The instrument asked patients whether the provider self-disclosed and subsequently asked patients to rate the provider's communication skills. We compared patients' ratings of communication measures between encounters in which self-disclosure occurred and those in which it did not. Patients reported provider SD in 18.9% of interactions. Provider SD was associated with more positive patient perceptions of provider communication skills (p<0.05), more positive ratings of provider rapport (p<0.05), and higher satisfaction with provider communication (p<0.05). More patients who noted SD scored their providers' communication skills as "excellent" (63.4%) than patients who did not note self-disclosure (47.1%). Patients reported that they would like to hear about their providers' experiences with a similar chief complaint (64.4% of patients), their providers' education (49%), family (33%), personal life (21%), or an injury/ailment unlike their own (18%). Patients responded that providers self-disclose to put patients at ease and to build rapport. Provider self-disclosure in the ED is common and is associated with higher ratings of provider communication, rapport, and patient satisfaction.
Project description:BACKGROUND: Physician-rating websites have emerged as a novel forum for consumers to comment on their health care experiences. Little is known about such ratings in Canada. OBJECTIVE: We investigated the scope and trends by specialty, geographic region, and time for online physician ratings in Canada using a national data source from the country's leading physician-rating website. METHODS: This observational retrospective study used online ratings data from Canadian physicians (January 2005-September 2013; N=640,603). For specialty, province, and year of rating, we assessed whether physicians were likely to be rated favorably by using the proportion of ratings greater than the overall median rating. RESULTS: In total, 57,412 unique physicians had 640,603 individual ratings. Overall, ratings were positive (mean 3.9, SD 1.3). On average, each physician had 11.2 (SD 10.1) ratings. By comparing specialties with Canadian Institute for Health Information physician population numbers over our study period, we inferred that certain specialties (obstetrics and gynecology, family practice, surgery, and dermatology) were more commonly rated, whereas others (pathology, radiology, genetics, and anesthesia) were less represented. Ratings varied by specialty; cardiac surgery, nephrology, genetics, and radiology were more likely to be rated in the top 50th percentile, whereas addiction medicine, dermatology, neurology, and psychiatry were more often rated in the lower 50th percentile of ratings. Regarding geographic practice location, ratings were more likely to be favorable for physicians practicing in eastern provinces than in western and central Canada. Regarding year, the absolute number of ratings peaked in 2007 before stabilizing and decreasing by 2013. Moreover, ratings were most likely to be positive in 2007 and again in 2013.
CONCLUSIONS: Physician-rating websites are a relatively novel source of provider-level patient satisfaction data and a valuable window into the patient experience. It is important to understand the breadth and scope of such ratings, particularly regarding specialty, geographic practice location, and changes over time.
Project description:Currently, the gold standard for assessing the severity of depressive and manic symptoms in patients with bipolar disorder (BD) is clinical evaluation using validated rating scales such as the 17-item Hamilton Depression Rating Scale (HDRS) and the Young Mania Rating Scale (YMRS). Frequent automatic estimation of symptom severity could potentially help support monitoring of illness activity and allow for early treatment intervention between outpatient visits. The present study aimed (1) to assess the feasibility of producing daily estimates of clinical rating scores based on smartphone-based self-assessments of symptoms collected from a group of patients with BD; and (2) to demonstrate how these estimates can be utilized to compute individual daily risk-of-relapse scores. Based on a total of 280 clinical ratings collected from 84 patients with BD, along with daily smartphone-based self-assessments, we applied a hierarchical Bayesian modelling approach capable of providing individual estimates while learning characteristics of the patient population. The proposed method was compared with common baseline methods. The model concerning depression severity achieved a mean predicted R<sup>2</sup> of 0.57 (SD = 0.10) and RMSE of 3.85 (SD = 0.47) on the HDRS, while the model concerning mania severity achieved a mean predicted R<sup>2</sup> of 0.16 (SD = 0.25) and RMSE of 3.68 (SD = 0.54) on the YMRS. In both cases, smartphone-based self-reported mood was the most important predictor variable. The present study shows that daily smartphone-based self-assessments can be utilized to automatically estimate clinical ratings of the severity of depression and mania in patients with BD and assist in identifying individuals at high risk of relapse.
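The severity-prediction models above are evaluated with predicted R<sup>2</sup> and RMSE. A minimal sketch of how these two standard metrics are computed, using hypothetical observed and predicted HDRS scores rather than the study's data:

```python
import math

def rmse(y_true, y_pred):
    """Root-mean-square error between observed and predicted scores."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Hypothetical observed HDRS ratings and model predictions (illustration only)
observed = [4.0, 10.0, 17.0, 8.0, 22.0]
predicted = [6.0, 9.0, 15.0, 10.0, 20.0]
print(round(rmse(observed, predicted), 2))       # → 1.84
print(round(r_squared(observed, predicted), 2))  # → 0.92
```

In the study, "predicted" R<sup>2</sup> and RMSE refer to out-of-sample evaluation; the arithmetic per fold is the same as above.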
Project description:<h4>Objective</h4>Patient-physician discordance in health status ratings may arise because patients use temporal comparisons (comparing their current status with their previous status), while clinicians use social comparisons (comparing this patient's status to that of other patients, or to the full range of disease severity possible) to guide their assessments. We compared discordance between patients with rheumatoid arthritis (RA) and clinicians, using either the conventional patient global assessment (PGA) or a rating scale with 5 anchors describing different health states. We hypothesized that discordance would be smaller with the rating scale because clinicians likely used similar social comparisons when making global assessments.<h4>Methods</h4>We prospectively studied 206 patients with active RA and assessed the PGA (range 0-100), rating scale (range 0-100), and evaluator global assessment (EGA; range 0-100) at each of 2 visits (total visits = 401). We compared the PGA/EGA discordance and the rating scale/EGA discordance at each visit.<h4>Results</h4>The mean ± SD PGA/EGA discordance was 8.5 ± 22.4, and the mean ± SD rating scale/EGA discordance was 2.3 ± 24.0. The intraclass correlation, measuring agreement, was higher between the rating scale and the EGA than between the PGA and the EGA (0.39 versus 0.31). Agreement was greater at low levels of RA activity on both pairs of measures.<h4>Conclusion</h4>Discordance between patients' global assessments and evaluators' global assessments was smaller when patients used a social standard of comparison than when they marked the PGA, suggesting that differences in standards of comparison contribute to patient-clinician discordance when the PGA is used.
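Agreement in the study above is quantified with the intraclass correlation. A minimal sketch of one common variant, the two-way random-effects, absolute-agreement ICC(2,1), computed from ANOVA mean squares; the study does not specify which ICC variant it used, and the paired ratings below are hypothetical, not the study's data:

```python
def icc_2_1(ratings):
    """ICC(2,1) for an n-subjects x k-raters matrix of scores
    (two-way random effects, absolute agreement, single rater)."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]            # per-subject means
    col_means = [sum(r[j] for r in ratings) / n for j in range(k)]  # per-rater means
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n)

# Hypothetical paired ratings (e.g., rating scale vs EGA), illustration only:
# perfect rank agreement with a constant offset of 1 between the two raters
pairs = [[1, 2], [2, 3], [3, 4], [4, 5]]
print(round(icc_2_1(pairs), 3))  # → 0.769
```

Because ICC(2,1) penalizes absolute disagreement, the constant offset between the two raters keeps the coefficient below 1 even though the rankings agree perfectly.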
Project description:<h4>Introduction</h4>Reliable team assessment has become a priority because of growing emphasis on interprofessional education and team-based care. Objective rating scales are needed to evaluate interprofessional student teams and individuals and provide real-time feedback.<h4>Methods</h4>In response to a need for behavioral rating scales, we modified the McMaster-Ottawa Scale from a 9-point to a 3-point scale and added descriptive behavioral anchors to define three levels of competency (i.e., below, at, and above expected). This modification is intended to provide consistent rating of individuals and teams in patient settings. We then developed a demonstration video using actors representing four professions to demonstrate the three levels of performance within the team. Our faculty rater tool, consisting of the modified scale and video, is designed to provide standardized ratings in interprofessional educational settings that involve patient care.<h4>Results</h4>We conducted training sessions with 40 faculty members from seven professions (medicine, dentistry, occupational therapy, nursing, pharmacy, physician assistant, and psychology) over a 2-year period. Immediately after each training session, two trained faculty observers rated interprofessional student teams as they conducted histories and assessments on standardized patients. Observer scores were compared with one another and with standard expert ratings of the same teams. Trained observer ratings were consistent across the pairs. Observer training with the tool can be completed in 60-90 minutes.<h4>Discussion</h4>Results of our implementation of the faculty rater tool confirm that the modified McMaster-Ottawa Scale is feasible to administer in clinical settings and that the demonstration video can be easily adopted for standardizing observer ratings.
Project description:<h4>Objectives</h4>To explore the extent to which doctor-rating websites are known and used among a sample of respondents from London, and to understand the main predictors of what makes people willing to use doctor-rating websites.<h4>Design</h4>A cross-sectional study.<h4>Setting</h4>The Borough of Hammersmith and Fulham, London, England.<h4>Participants</h4>200 individuals from the borough.<h4>Main outcome measures</h4>The likelihood of being aware of doctor-rating websites and the intention to use doctor-rating websites.<h4>Results</h4>The use and awareness of doctor-rating websites are still quite limited. White British subjects, as well as respondents with higher income, are less likely to use doctor-rating websites. Aspects of the doctor-patient relationship also play a key role in explaining intention to use the websites. The doctor has both a 'complementary' and a 'substitute' role with respect to Internet information.<h4>Conclusions</h4>Online rating websites can play a major role in supporting patients' informed decisions on which healthcare providers to seek advice from, thus potentially fostering patient choice in healthcare. Subjects who seek and provide feedback on doctor-ranking websites, though, are unlikely to be representative of the overall patient pool. In particular, they tend to over-represent opinions from non-White British, medium-to-low-income patients who are not satisfied with their choice of healthcare treatments and the level of information provided by their GP. Accounting for differences in the users' characteristics is important when interpreting results from doctor-rating sites.
Project description:Adolescence is a period of life in which peer relationships become increasingly important. Adolescents have a greater likelihood of taking risks when they are with peers rather than alone. In this study, we investigated the development of social influence on risk perception from late childhood through adulthood. Five hundred and sixty-three participants rated the riskiness of everyday situations and were then informed about the ratings of a social-influence group (teenagers or adults) before rating each situation again. All age groups showed a significant social-influence effect, changing their risk ratings in the direction of the provided ratings; this social-influence effect decreased with age. Most age groups adjusted their ratings more to conform to the ratings of the adult social-influence group than to the ratings of the teenager social-influence group. Only young adolescents were more strongly influenced by the teenager social-influence group than they were by the adult social-influence group, which suggests that to early adolescents, the opinions of other teenagers about risk matter more than the opinions of adults.
Project description:AIM: To investigate a novel observational rating protocol designed to expedite clinical diagnosis of autism spectrum disorder (ASD). METHOD: Two hundred and forty patients referred to a tertiary autism center (median age 8y 9mo, range 2y 6mo-34y 8mo; 188 males, 52 females) were rated using an adaptation of the Childhood Autism Rating Scale, Second Edition (CARS-2) based exclusively on patient observation (CARS-2obs). Scores were compared to expert diagnosis of ASD, the parent-reported Social Responsiveness Scale, Second Edition (SRS-2), and, in a selected subset of patients, the Autism Diagnostic Observation Schedule, Second Edition (ADOS-2). RESULTS: The CARS-2obs distinguished patients with a clinical diagnosis of ASD from those with non-ASD neuropsychiatric disorders (mean score=18 vs 11.7, p<0.001). Severity ratings on the CARS-2obs correlated with the ADOS-2 (r=0.68, ?=0.64) and SRS-2 (r=0.31, ?=0.32). A CARS-2obs cutoff point equal to or greater than 16 demonstrated 95.8% specificity and 62.3% sensitivity in discriminating individuals with ASD from individuals without ASD in a specialty referral setting. INTERPRETATION: The CARS-2obs allows rapid acquisition of quantitative ratings of autistic severity by direct observation. Coupled with parent/teacher-reported symptoms and developmental history, the measure may contribute to a low-cost diagnostic paradigm in clinical and public health settings, where positive results might help reduce delays in diagnosis and negative results could prompt further specialty assessment. WHAT THIS PAPER ADDS: The Childhood Autism Rating Scale, Second Edition based on patient observation distinguished individuals with versus without autism spectrum disorder (ASD). A score equal to or greater than 16 on this assessment showed high specificity for a diagnosis of ASD.
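The sensitivity and specificity figures above come from applying a "score equal to or greater than 16" screening cutoff. A minimal sketch of how these two quantities are derived from a cutoff, using hypothetical CARS-2obs scores rather than the study's data:

```python
def screen_metrics(scores_asd, scores_non_asd, cutoff=16):
    """Sensitivity and specificity of a 'score >= cutoff' screen.

    scores_asd: scores of individuals with a clinical ASD diagnosis
    scores_non_asd: scores of individuals without ASD
    """
    tp = sum(s >= cutoff for s in scores_asd)      # true positives
    tn = sum(s < cutoff for s in scores_non_asd)   # true negatives
    sensitivity = tp / len(scores_asd)             # TP / (TP + FN)
    specificity = tn / len(scores_non_asd)         # TN / (TN + FP)
    return sensitivity, specificity

# Hypothetical scores, illustration only
asd_scores = [18, 20, 16, 15, 12]
non_asd_scores = [10, 11, 17, 9]
sens, spec = screen_metrics(asd_scores, non_asd_scores)
print(sens, spec)  # → 0.6 0.75
```

Raising the cutoff trades sensitivity for specificity, which is why the study's cutoff of 16 yields high specificity (95.8%) at the cost of moderate sensitivity (62.3%).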
Project description:There is a strong interest in the Veterans Administration (VA) Health-care System in promoting patient engagement to improve patient care. We solicited expert opinion using an online expert panel system with a modified Delphi structure called ExpertLens™. Experts reviewed, rated, and discussed eight scenarios, representing four patient engagement roles in designing and improving VA outpatient care (consultant, implementation advisor, equal stakeholder, and lead stakeholder) and two VA levels (local and regional). Rating criteria included desirability, feasibility, patient ability, physician/staff acceptance, and impact on patient-centredness and care quality. Data were analysed using the RAND/UCLA Appropriateness Method for determining consensus. Experts rated consulting with patients at the local level as the most desirable and feasible patient engagement approach. Engagement at the local level was considered more desirable than engagement at the regional level. Being an equal stakeholder at the local level received the highest ratings on the patient-centredness and health-care quality criteria. Our findings illustrate expert opinion about different approaches to patient engagement and highlight the benefits and challenges posed by each. Although experts rated local consultations with patients on an as-needed basis as most desirable and feasible, they rated being an equal stakeholder at the local level as having the highest potential impact on patient-centredness and care quality. This result highlights a perceived discrepancy between what is most desirable and what is potentially most effective, but suggests that routine local engagement of patients as equal stakeholders may be a desirable first step for promoting high-quality, patient-centred care.