We need to build on previous knowledge to innovate. Our biases make it harder.

March 9, 2021
Innovations that don't fit with our preconceptions might be rejected. (Unsplash/CDC)


A large new behavioral experiment suggests that human cognitive biases could limit innovation across multiple generations, shedding light on how people build on prior knowledge and how those biases may slow technological progress.

"Decades of research into cultural evolution shows that our ability to find solutions to complex problems is closely linked to our ability to build on innovations by earlier generations," said Bill Thompson, a researcher at Princeton University and lead author of the study, published March 9 in Proceedings of the Royal Society B: Biological Sciences. "As cognitive scientists, we are very interested in this process of transmission of ideas between people, because every step of the process is filtered through the mechanisms of human learning."

In the study, more than 1,200 participants played a simple problem-solving game, which involved designing arrowheads in one of four abstract virtual environments. The players were paid for the amount of food that their arrowheads could obtain, earning 10 cents for every 200 calories of food.

To maximize their earnings, participants had to tweak an arrowhead design that they inherited from an earlier generation of participants or, in the first generation, from one of several seeds. Each environment was designed so it had a single optimal arrowhead design; in mathematical terms, this made the multi-generational quest for a better arrowhead an optimization problem. The authors hypothesized that environments closer to the player's pre-existing biases would ultimately yield arrowheads that were closer to the optimum.

"Participants interacted directly with an arrowhead-design interface," Thompson told The Academic Times. "They designed an arrowhead, then clicked a button to test it out. The arrowhead flies off screen, and shortly thereafter they are informed how well it scored."

The results speak to the complex relationship between bias and cognition. The authors assumed that all participants shared largely the same cognitive biases — for example, a tendency to think that larger arrowheads were better. Some environments were more compatible with those biases, while others were less so. Participants did best when the environment was not at odds with their biases, generating arrowheads that were closer to the optimal design for their setting.
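The interaction between bias and environment can be illustrated with a toy simulation. This is not the study's actual model — the paper's arrowheads had multiple design parameters — but a hypothetical one-dimensional sketch: each generation inherits a design and hill-climbs toward the environment's optimum, while every proposed tweak is pulled toward a value the learner implicitly favors (e.g., "bigger is better"). All numbers here are illustrative assumptions.

```python
import random

def simulate_lineage(optimum, bias, generations=10, steps=20, bias_weight=0.3):
    """Hypothetical sketch of biased cumulative innovation on a 0-1 design scale.

    Each generation inherits the previous design and makes small tweaks,
    but every proposal is nudged toward a preferred value (the bias).
    Only tweaks that score better (land closer to the optimum) are kept.
    """
    design = 0.5  # seed design inherited by the first generation
    for _ in range(generations):
        for _ in range(steps):
            proposal = design + random.gauss(0, 0.05)        # small random tweak
            proposal = (1 - bias_weight) * proposal + bias_weight * bias  # pull toward bias
            proposal = min(max(proposal, 0.0), 1.0)          # clamp to valid designs
            if abs(proposal - optimum) < abs(design - optimum):
                design = proposal                            # keep improvements only
        # the best design found is transmitted to the next generation
    return design

random.seed(0)
aligned = simulate_lineage(optimum=0.9, bias=0.9)     # environment matches the bias
misaligned = simulate_lineage(optimum=0.1, bias=0.9)  # environment opposes the bias
```

Under these assumptions, the aligned lineage converges near its optimum, while the misaligned lineage stalls far from its own: the same bias that speeds up one search blocks the other.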

Researchers have uncovered the damaging impact of many kinds of bias, including how partisan bias makes us worse at evaluating health care policy, and how gender bias in hiring affects productivity and workplace culture. But under some circumstances, cognitive biases can be useful, according to Thompson.

Thompson illustrated this with the example of a coin flip. If someone flips a coin twice and it comes up heads both times, they might decide that the coin is rigged to land on one side. Yet most of us would not jump to this conclusion. Based on our experiences in the world, we know that fair coins are much more common than rigged coins, so we would probably infer that the coin is a fair one that just happened to land on heads twice. This reflects an "inductive bias" — that is, an assumption about how to interpret new information based on lots of old information we learned in the past.
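The coin-flip intuition can be made precise with Bayes' rule. The numbers below are illustrative assumptions, not figures from the study: suppose we believe only 1% of coins are rigged, and that a rigged coin always lands heads. Two heads in a row then barely moves us, because the prior (the inductive bias) dominates the scant evidence.

```python
def posterior_rigged(prior_rigged=0.01, flips_heads=2):
    """Posterior probability the coin is rigged after seeing only heads.

    Assumes (for illustration) that a rigged coin always lands heads
    and that 1% of coins are rigged a priori.
    """
    p_data_fair = 0.5 ** flips_heads   # fair coin: each head has prob 0.5
    p_data_rigged = 1.0                # assumed rigged coin: heads every time
    prior_fair = 1.0 - prior_rigged
    evidence = prior_rigged * p_data_rigged + prior_fair * p_data_fair
    return prior_rigged * p_data_rigged / evidence

print(round(posterior_rigged(), 3))               # ≈ 0.039: still very likely fair
print(round(posterior_rigged(flips_heads=10), 3))  # ten heads: now rigged looks likely
```

After two heads the posterior probability of a rigged coin is only about 4%, matching the everyday intuition Thompson describes; it takes a much longer run of heads before the data overrides the prior.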

"If an inductive bias is well aligned with the problem you face, then it can aid learning," Thompson explained. "If it's at odds with the problem, it can do the opposite."

Thompson acknowledged that this research has some limitations. For one thing, it concentrated on slow and steady improvements rather than massive, paradigm-shifting discoveries. "Our model focuses on limited or incremental innovations, but we recognize that this type of innovation is just one part of a broader set of possibilities," said Thompson.

The researchers did not set out to explain all the complex aspects of real-world problem-solving. But their model allowed them to zero in on specific predispositions among actual human participants. If technologies are extensions of ourselves, then they also extend our biases.

"The study examines a very simplified form of innovation," explained Thompson. "Real-world innovation is much more open-ended than this, so more work is required to understand how these findings might generalize to more complex problems in social contexts. But a benefit of studying this simplified setting is that it allows us to precisely assess the consequences of biases in learning and memory, and to show that these factors can influence what groups discover even in a highly constrained problem."

In his future research, Thompson hopes to account for individual variation in cognitive biases. He sees this paper as a useful addition to the study of social learning and innovation. "The experimental design — measuring people's biases and then constructing tasks that are more or less aligned with those biases — could be applied to other more general biases in the future," he said. "An environment could reinforce people's biases by rewarding the designs that people implicitly favored."

"These are simple mathematical models and behavioral experiments, so more work is required to translate these insights into applied settings," said Thompson. "But the study reinforces the importance of mechanisms that alleviate bias and facilitate learning in creative contexts."

The paper, "Human biases limit cumulative innovation," published in Proceedings of the Royal Society B: Biological Sciences, was authored by Bill Thompson and Thomas Griffiths of Princeton University.
