Research Highlights - February 2019


According to new research published in the journal Psychological Science, when decision-makers evaluate and compare a range of data points, they tend to neglect the relative strength of the evidence and treat it as simply binary. That’s true regardless of whether the data relate to health outcomes, head counts, or menu prices.

In short, people show a strong tendency to dichotomize data distributions and to ignore differences in the degree to which instances differ from an explicit or inferred midpoint. This tendency is remarkably widespread across a diverse range of information formats and content domains. This study by Yale researchers is the first to demonstrate this general tendency.

In a series of six studies, the researchers examined how people tend to reduce a continuous range of data points into just two categories. They hypothesized that people would implicitly create a so-called “imbalance score,” computing the difference between the number of data points that fall on one side of a given boundary and the number that fall on the other side. For instance, if people are evaluating data from different studies investigating the relationship between caffeine and health, they would quickly categorize each result as either showing an effect or not, regardless of the relative strength of the evidence.
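The article does not report the researchers’ exact formula, but a minimal sketch of such an imbalance score, assuming each claim is coded only by which side of the midpoint it falls on (the function and variable names below are illustrative, not drawn from the study), might look like this:

    # Minimal sketch of an "imbalance score": count the evidence on each side of
    # a midpoint and ignore how far each data point is from it.
    # The coding scheme and names are illustrative assumptions, not the study's.

    def imbalance_score(effect_sizes, midpoint=0.0):
        """Claims above the midpoint minus claims below it; magnitude is discarded."""
        positives = sum(1 for e in effect_sizes if e > midpoint)
        negatives = sum(1 for e in effect_sizes if e < midpoint)
        return positives - negatives

    # Two hypothetical sets of results (e.g., studies of caffeine and health):
    # the second set contains far stronger positive evidence, yet both produce
    # the same score because only the direction of each result is counted.
    weak_positive_evidence   = [0.1, 0.2, -0.15, 0.05]
    strong_positive_evidence = [2.0, 3.5, -0.15, 1.8]

    print(imbalance_score(weak_positive_evidence))    # 2
    print(imbalance_score(strong_positive_evidence))  # 2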

In one online study, the researchers randomly assigned a total of 605 participants to consider a specific topic related to scientific reports, eyewitness testimonies, social judgments, or consumer reviews. Participants then saw a series of seventeen claims about the relationship between two variables, for example, between taking a certain medication and experiencing feelings of hunger, such as “One group of scientists found that the new medication makes feeling hungry two times more likely” and “One group of scientists found that the new medication makes feeling hungry four times less likely.”

After viewing the claims, participants summarized the evidence, choosing the rating that best captured their overall impression.

As hypothesized, the number of negative evidence claims (both strong and weak) subtracted from the number of positive evidence claims (both strong and weak) was correlated with participants’ summary judgments. Their...

To continue reading, become a paid subscriber for full access.