Crowd wisdom can be challenging to understand. Some refer to the “madness” of the crowd, while others see the benefit of collective insight. In many cases, though, collaboration yields better answers than individuals working alone.
Because of this, researchers have tried to tackle the question of crowd wisdom. Thanks to the law of averages, much of the evidence suggests the crowd is more often right than we give it credit for. There are also hurdles to consider and overcome, such as creeping inaccuracies.
For a guide that answers your questions about crowd wisdom, read on.
Explaining the Concept of Crowd Wisdom
The wisdom of the crowd is an exercise in collective insight, and it works on the law of averages. If you get enough people and ask them all the same question, their answers will often average out.
If you charted these answers on a bell curve, you’d expect those in the middle to represent that average. Take a random sample of 100 people, and usually around 40 to 60 of them will hover near that middle. That doesn’t mean the average answer is always correct.
That said, an answer arrived at from the aggregate of the crowd has a good chance of being close. The reason this works is quite fascinating, and it has to do with understanding wisdom.
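The law of averages at work here is easy to simulate. Below is a minimal Python sketch under one simplifying assumption: every guess is the true value plus independent random noise (real crowds can be biased, as later sections discuss). All the numbers are made up for illustration.

```python
import random

random.seed(42)  # fixed seed so the demo is reproducible

TRUE_VALUE = 500   # hypothetical quantity the crowd is guessing
CROWD_SIZE = 1000  # number of independent guessers

# Assumption: each guess is the true value plus independent noise.
guesses = [TRUE_VALUE + random.gauss(0, 150) for _ in range(CROWD_SIZE)]

crowd_average = sum(guesses) / len(guesses)
print(f"Crowd average: {crowd_average:.0f} (true value: {TRUE_VALUE})")
```

Individual guesses here miss by about 150 on average, yet the mean of 1,000 of them typically lands within a few units of the true value, because the independent errors cancel out.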
Understanding Collective Insight
Different individuals will have different levels of wisdom when it comes to a topic. When you isolate them in a room and ask them questions, they can only rely on themselves. They aren’t able to troubleshoot or brainstorm with people with different experiences.
When you put multiple people in a room together, they can work on the problem together. This is where collective insight comes from. We’ve all heard the saying that two heads are better than one.
The concept of crowd wisdom takes that to its logical destination. The larger the group, the more they can put their experiences together to come up with an answer or solution to a problem. This is the basic foundation upon which modern democracy rests, for example.
If collective intelligence didn’t work, we wouldn’t trust voting to work. The very social reality of collaboration comes from our belief in the wisdom of the crowd. There’s also a great example that illustrates this.
The Jellybean Experiment: A Case Study in Crowd Wisdom
Many of us have experienced some variant of the jellybean experiment. For some of us, this was in school, and for others, a statistics and probability class in college. You take a large jar full of jellybeans and ask everyone in the room to guess how many there are.
You get no context clues like the exact measurements of the jar, its volume, or the density of the jellybeans. You can view the jar from every angle, but you can’t open it. Most importantly, there is a limited amount of time to register your guess.
This discourages people from trying to count each bean and arrive at a more exact answer. The initial results of all the guesses are as you’d expect: people’s answers are all over the place.
Some people will wildly overestimate how many jellybeans can fit in the jar. Others will be too cautious and underestimate by a significant margin. Some of these answers will come from logical reasoning and others from gut feeling.
You’ll begin to notice that as more people start to guess, the answers become more and more reasonable. Like our bell curve explanation above, it will begin to show an average. What’s especially interesting about this average is that it tends to be pretty darn close.
It Applies to More than Only Jellybeans
Variants of this experiment have been run for over 100 years. You can substitute the jellybeans for any other question that requires estimation. A popular one cited in scientific literature is Francis Galton’s experiment about guessing the weight of an ox.
Galton asked around 800 fairgoers, many with some exposure to livestock, to guess the animal’s weight. Not a single person guessed the exact number. Yet when Galton averaged the answers, the final result was only about 1 pound off.
This is the wisdom of the crowd and the power of collective intelligence. One person can be wildly wrong, but a reasonable and often accurate answer emerges over a large enough population.
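Galton’s result is easy to reproduce in spirit. This is a hedged sketch with hypothetical numbers (not Galton’s actual data), assuming 800 unbiased guesses that are each individually off by about 100 pounds:

```python
import random

random.seed(7)  # fixed seed for a reproducible demo

TRUE_WEIGHT = 1198  # pounds, the figure usually cited for Galton's ox
N_GUESSERS = 800

# Hypothetical guesses: unbiased, but individually noisy.
guesses = [random.gauss(TRUE_WEIGHT, 100) for _ in range(N_GUESSERS)]

crowd_average = sum(guesses) / N_GUESSERS
worst_individual_error = max(abs(g - TRUE_WEIGHT) for g in guesses)
average_error = abs(crowd_average - TRUE_WEIGHT)

print(f"Worst individual error:     {worst_individual_error:.0f} lb")
print(f"Error of the crowd average: {average_error:.0f} lb")
```

Any one guesser can be hundreds of pounds off, but the error of the average shrinks roughly with the square root of the crowd size, which is the law of averages doing its work.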
What Are the Downsides of Relying on Crowd Wisdom?
There are some significant downsides to relying on crowd wisdom. Some have gone as far as to label the concept as the “madness” of the crowd. There are two major ways that crowd wisdom can fall short.
The first is when a question requires unique or specialized knowledge. The other is when the setting in which the questions are asked is prone to social engineering. We’ll go through each with examples.
The Monkey Wrench of Specialization
There’s a problem researchers have noticed when collecting data on crowd wisdom. Questions requiring specific knowledge don’t average out anywhere near close enough to the right answer to be useful. This poses serious hurdles to the utility of crowd wisdom.
In a sense, this is expected. Ask 100 people a highly specific question only a brain surgeon could answer, and you could count the even somewhat useful responses on one hand.
The problem is that this issue extends to almost any question, from trivia and geography to education and government policy. The crowd’s wisdom is great at averaging out what the collective wants or believes in the big picture.
However, it falls short when it comes to understanding more specialized or complex issues. Economic policy and how our government works are great examples. Many people don’t know how the tax or legislative systems work.
They understand the big picture. There are tax brackets and ways to declare or write off things, so you pay less. The legislative system proposes and passes bills at various levels, and these can live or die based on partisan interests.
If you ask a large crowd of people for specifics on how these systems work, the average answer won’t be reliable. In many ways, these are issues we’ve taken for granted, a byproduct of categorizing and simplifying knowledge into specialized fields.
The Impact of Social Engineering and Interference
This is another major issue facing crowd wisdom. When you allow social factors to influence the answers, or the environment they’re given in, it can skew things. It’s a double-edged sword.
When people work together, they’re more likely to arrive at an accurate answer. By the same token, when they talk to each other and share their ideas, this act influences the final answer. People change their initial answer if they feel the crowd is going in a different direction.
This phenomenon is usually known as peer pressure. It has a serious chance of messing with the accuracy of the averages we can get from crowd wisdom. This is also the major reason secret ballots are the norm in most modern democracies.
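Here is a deterministic toy example of that herding effect. One loud, wildly high early guess anchors everyone else, because each later guesser blends their own estimate with the visible running average. The 70% herding weight and all the guesses are invented for illustration:

```python
TRUE_COUNT = 500  # hypothetical true jellybean count
HERDING = 0.7     # assumed weight each guesser gives the visible crowd average

# One wildly high early guess, then nine reasonable private estimates.
private = [900, 480, 515, 495, 510, 500, 490, 520, 505, 485]

independent_avg = sum(private) / len(private)

public = []
for estimate in private:
    if public:
        running = sum(public) / len(public)
        # Peer pressure: blend the private estimate with the crowd so far.
        public.append((1 - HERDING) * estimate + HERDING * running)
    else:
        public.append(estimate)  # the first guesser has no crowd to follow

herded_avg = sum(public) / len(public)
print(f"Independent average: {independent_avg:.0f}")  # 540
print(f"Herded average:      {herded_avg:.0f}")
```

The independent average lands at 540, reasonably close to 500 despite the outlier. The herded average drifts far higher, because the first bad guess keeps echoing through everyone else’s answers. This is exactly why secret ballots poll people independently.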
Social Environments and Selection Decisions Matter
Crowd wisdom is a powerful type of collective intelligence, but we have to be careful. There’s another way the wisdom of the crowd can become adulterated: when the social environment and selection decisions get tampered with.
Here’s a common example we’ve all seen. A TV reporter goes around town asking people basic questions about history or geography, something along the lines of “Where is North Korea on a map?”
What happens is usually a montage of people answering the question incorrectly. This is often done for comedic effect but creates an inaccurate impression. The point of the questions and presentation is to make us laugh at how little people know.
However, they leave out most of the people who answered correctly, because that isn’t funny. They also never give you context, such as how many people they asked or what the ratio of right to wrong answers was. This creates an inaccurate idea of what the average level of knowledge looks like.
This skews how we understand and interpret collective knowledge. It’s called “selection bias”: picking and choosing the people and answers you want.
The social environment where you ask these questions can also impact this issue. For example, you could ask socially sensitive questions in specific areas. Asking for collective insight on housing issues in a poor neighborhood could work.
Asking questions about crime or education that could impact an entire city using only input from wealthy areas would not. Who and where you ask can have a huge impact. This might even be larger than the impact of what kinds of questions you ask.
Galton asked his ox-weight question of 800 people at a county fair who were familiar with livestock. He didn’t drop into the middle of Manhattan with an ox and start asking city people. The true value of crowd wisdom comes from understanding the crowd and the context of what and where you’re asking.
An Important Note about Confidence
Confidence is also key to understanding crowd wisdom. A significant amount of research has found a correlation between confidence and inaccuracy: those who are more confident in their answers tend to drag the average accuracy down.
This is something called the Dunning-Kruger effect. In essence, the less someone knows, the more they overestimate their ability. This is because they don’t understand exactly how far off the mark they are.
What’s more, they lack the ability even to know where the starting line is. A paradoxical effect is that the more experienced someone is, the less confident they tend to be. These knowledgeable people know exactly how much they still don’t know and how far away the limits of their field are.
When you ask a crowd a question, you get a mix of confidence levels based on this effect. For super complex or important questions, those who know the most are most hesitant to answer. They know enough to recognize that they might not have the best answer.
The flip side is that those who know less but have full confidence are the first to propose solutions. These are often simple and don’t account for the nuances that gave the first group pause. Depending on the question, this can skew the average.
Those most willing to provide direct, black-and-white answers are less likely to be correct. This is another major hurdle that crowd wisdom has to clear.
The Accuracy of Popular and Unexpected Answers
This is something research has also started to show: answers researchers didn’t expect, but which turn out to be surprisingly popular, tend to be accurate. This is another example of how an aggregate can move towards relative accuracy.
Some of these answers end up sitting outside of the real average. To come back to confidence, something interesting happens with predictive questions. Those with specialized knowledge can predict how likely it is that the majority will get a question wrong.
Those with only confidence, by contrast, often pick the most popular answer, even when it’s wrong. A common example is naming state capitals in states where the largest city isn’t the capital.
The vast majority, and thus the average, will gravitate towards big population centers. A smaller group of knowledgeable people will answer right. This creates a surprising dichotomy.
Almost everyone will pick the large city, with the rest picking the correct one. Few people will gamble on a third choice, and even those with doubts opt for the popular option. Many people can sense it’s a trick question, but they don’t know enough to name the correct answer.
When seen through crowd wisdom, the crowd is smart enough to know when others will get a question wrong. Going with the surprisingly popular answer, rather than the simple average, can improve accuracy and help combat situations where the data becomes skewed.
It can also help counteract the Dunning-Kruger effect.
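One way researchers have formalized this idea (Prelec and colleagues call it the “surprisingly popular” method) is to ask each person two things: their own answer, and their prediction of how popular each answer will be. The answer whose actual vote share most exceeds its predicted share wins. Here is a toy sketch with made-up numbers for the state-capital example:

```python
# Made-up poll results for a trick state-capital question.
votes = {"Big City": 70, "Actual Capital": 30}

# Respondents' average prediction of each answer's vote share.
# Even many who pick "Big City" suspect it will be over-chosen.
predicted_share = {"Big City": 0.80, "Actual Capital": 0.15}

total_votes = sum(votes.values())
actual_share = {answer: n / total_votes for answer, n in votes.items()}

# The surprisingly popular answer: actual share most exceeds predicted share.
surprise = {a: actual_share[a] - predicted_share[a] for a in votes}
winner = max(surprise, key=surprise.get)

print(winner)  # the minority answer wins: +0.15 surprise vs -0.10
```

Even though “Big City” gets most of the votes, it is *less* popular than the crowd itself predicted, while the capital is *more* popular than predicted, so the method picks the capital.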
The Key to Crowd Wisdom
There’s a lot to consider when it comes to crowd wisdom. In many situations, the average tends to point towards accuracy. That said, there are still factors to consider, such as context.
Understanding the utility of crowd wisdom means understanding the specific crowd. It also means that the true value is understanding all the results, not only the majority. For more information and articles on subjects like these, check out our other blog posts.