Two Ways to Keep Your Data from Tricking You

Human beings are experts at motivated reasoning. When Chase Utley of the Dodgers broke Mets player Ruben Tejada’s leg with a hard slide into second base, Dodgers fans saw a player who was aggressively helping his team to win a playoff game, while Mets fans saw a dirty play that deserved punishment. Fans of each team had access to the same data, but differed in their analysis.

Unfortunately, the mechanisms of motivated reasoning kick in unconsciously from the moment we look at data. As a result, there is a tendency to see what we expect to see. That is, there is a danger that data will not lead us to think differently, but instead to solidify existing beliefs that really should have been challenged.

To counteract motivated reasoning, it is crucial to start treating data the way scientists do. The scientific method starts by making predictions about what you would expect to observe if a particular belief about the world is correct. If that prediction is consistent with the data, then you can continue to hold that belief. But if the prediction is inconsistent with the data, then you have to revise your understanding of the world and generate new hypotheses to test.
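To make this concrete, here is a minimal sketch of a prediction check, using entirely made-up numbers (the 10% belief, the visitor and signup counts are hypothetical). The point is that the prediction is written down before the data is examined, and the data then either supports it or forces a revision:

```python
# Hypothetical belief, stated in advance: our signup page converts at about 10%.
predicted_rate = 0.10

# Observed data (invented for illustration)
visitors = 2000
signups = 150

observed_rate = signups / visitors  # 0.075

# Standard error of a proportion under the predicted rate (normal approximation)
se = (predicted_rate * (1 - predicted_rate) / visitors) ** 0.5
z = (observed_rate - predicted_rate) / se

# |z| > 1.96 means the observation is inconsistent with the belief
# at the conventional 5% level, so the belief should be revised.
consistent = abs(z) <= 1.96
print(f"observed rate: {observed_rate:.3f}, z = {z:.2f}, consistent with belief: {consistent}")
```

Here the observed 7.5% rate is far enough from the predicted 10% (given the sample size) that the belief fails the test, which is exactly the outcome motivated reasoning would otherwise explain away.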

A second reason why it is important to make predictions about what you expect to see first, before looking at the data, is that patterns in data are often quite subtle. Often, we imagine that there will be some big, obvious pattern. For example, perhaps people will click on red buttons more than blue buttons. These effects of a single factor (like the color of a button) are called main effects in statistics lingo.
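A main effect can be read straight off the averages. In this sketch with invented click-through rates (the colors, devices, and numbers are all hypothetical), red beats blue by the same margin regardless of device, so color has a clean main effect:

```python
# Invented click-through rates for a 2x2 of button color x device
clicks = {
    ("red", "desktop"): 0.12,
    ("red", "mobile"): 0.10,
    ("blue", "desktop"): 0.08,
    ("blue", "mobile"): 0.06,
}

def mean(values):
    values = list(values)
    return sum(values) / len(values)

# Average each color's rate over everything else (here, device)
red = mean(rate for (color, _), rate in clicks.items() if color == "red")
blue = mean(rate for (color, _), rate in clicks.items() if color == "blue")

main_effect = red - blue  # red outperforms blue by 4 points on average
print(f"main effect of color: {main_effect:.2f}")
```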

It turns out that there just aren’t that many main effects in the world, and most of them are ones we know about already. For example, people have already spent lots of time looking at the ways men and women differ. That is a main effect.

Instead, most of the key insights in data involve what are called interactions. The best way to tell that you have an interaction is that when someone asks you whether a particular factor matters, you have to say, “It depends.” What it depends on is the interaction.

For example, studies of advertising effectiveness ask whether it’s better to give people a message focused on positive benefits they will get from using a product or negative problems the product will avoid. It depends. If the product is one that is generally associated with desirable things (like cars or lipstick), then it is better to focus on the benefits the product will provide. If the product is one (like medications or diapers) that is associated with undesirable things, then focusing on the problems the product will avoid is better.
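The advertising example can be sketched numerically. The effectiveness scores below are invented for illustration, but they show the signature of an interaction: the effect of message framing flips sign depending on product type, so "does framing matter?" can only be answered with "it depends":

```python
# Invented ad-effectiveness scores for framing x product type
effectiveness = {
    ("benefits", "desirable"): 0.30,    # e.g., lipstick ad touting benefits
    ("problems", "desirable"): 0.18,
    ("benefits", "undesirable"): 0.15,  # e.g., diaper ad touting benefits
    ("problems", "undesirable"): 0.27,
}

# Simple effect of framing (benefits minus problems) within each product type
simple_desirable = (effectiveness[("benefits", "desirable")]
                    - effectiveness[("problems", "desirable")])
simple_undesirable = (effectiveness[("benefits", "undesirable")]
                      - effectiveness[("problems", "undesirable")])

# The interaction is the difference between the simple effects;
# here they are equal and opposite, so framing has no overall main
# effect even though it matters a great deal within each product type.
interaction = simple_desirable - simple_undesirable
print(f"framing effect, desirable products:   {simple_desirable:+.2f}")
print(f"framing effect, undesirable products: {simple_undesirable:+.2f}")
print(f"interaction: {interaction:+.2f}")
```

Note that if you averaged over product type, the two framings would look identical, which is exactly why untargeted scanning for main effects can miss the real story.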

The problem is that as the number of different things you have measured grows, the number of combinations you have to test to find these interactions grows as well. As a result, the best way to find these interactions is to approach the problem scientifically and to develop hypotheses that tell you which particular interactions to look for in your data.
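The growth is combinatorial: with n measured factors there are n-choose-2 possible two-way interactions (and far more three-way ones), which is why a targeted hypothesis beats an exhaustive scan:

```python
# Number of possible two-way interactions among n measured factors
from math import comb

for n in (5, 10, 20, 50):
    print(f"{n} factors -> {comb(n, 2)} pairwise interactions to check")
```

Even 50 measured variables already yield 1,225 candidate pairs, and testing all of them blindly virtually guarantees spurious "discoveries."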

You might say that you can’t approach your business scientifically, because good scientists can run controlled studies to test their beliefs, while you cannot run that many experiments with your business beyond the occasional test on a website. Remember, though, that astronomy and geology are perfectly good sciences, even though they have to focus mostly on gathering information from the world as it is rather than running experiments. Just because your data did not result from a true experiment doesn’t mean you can’t approach it like a scientist.

In the end, data can be a powerful source of new insights for your company, but only if you allow it to change your existing beliefs rather than reinforce them.

 

This blog first appeared on Harvard Business Review on 10/20/2015.

View our complete listing of Human Capital Analytics blogs.

Posted 22 Oct. 2015


About the Author: Art Markman, Ph.D.

Art Markman, Ph.D., is Annabel Irion Worsham Centennial Professor of Psychology and Marketing at the University of Texas at Austin. He got his Sc.B. in Cognitive Science from Brown and his Ph.D. in Ps…

     
