The Science of Protecting People’s Feelings
Post by Jack Loomes on Mar 11, 2015 2:07:06 GMT
The Science Of Protecting People’s Feelings: Why We Pretend All Opinions Are Equal
By Chris Mooney
Editor's Note: Perhaps in the context of the sword, we can call this one the S8G-MyArm0ury effect.
It’s both the coolest — and also in some ways the most depressing — psychology study ever.
Indeed, it’s so cool (and so depressing) that the name of its chief finding — the Dunning-Kruger effect — has at least halfway filtered into public consciousness. In the classic 1999 paper, Cornell researchers David Dunning and Justin Kruger found that the less competent people were in three domains — humor, logic, and grammar — the less likely they were to be able to recognize that. Or as the researchers put it:
We propose that those with limited knowledge in a domain suffer from a dual burden: Not only do they reach mistaken conclusions and make regrettable errors, but their incompetence robs them of the ability to realize it.
Dunning and Kruger didn’t directly apply this insight to our debates about science. But I would argue that the effect named after them certainly helps to explain phenomena like vaccine denial, in which medical authorities have voiced a very strong opinion, but some parents just keep on thinking that, somehow, they’re in a position to challenge or ignore this view.
So why do I bring this classic study up now?
The reason is that an important successor to the Dunning-Kruger paper has just come out — and it, too, is pretty depressing (at least for those of us who believe that domain expertise is a thing to be respected and, indeed, treasured). This time around, psychologists have not uncovered an endless spiral of incompetence and the inability to perceive it. Rather, they’ve shown that people have an “equality bias” when it comes to competence or expertise, such that even when it’s very clear that one person in a group is more skilled, expert, or competent (and the other less), they are nonetheless inclined to seek out a middle ground in determining how correct different viewpoints are.
Yes, that’s right — we’re all right, nobody’s wrong, and nobody gets hurt feelings.
The new study, just published in the Proceedings of the National Academy of Sciences, is by Ali Mahmoodi of the University of Tehran and a long list of colleagues from universities in the UK, Germany, China, Denmark, and the United States. And no wonder: The research was transnational, and the same experiment — with the same basic results — was carried out across cultures in China, Denmark, and Iran.
In the experiment (described in further detail in this previous paper), two separate people view two successive images, which are almost exactly the same, but not quite. In one of the images, there is an “oddball target” that looks slightly different. The images flash by very fast, and the two individuals have to decide which one, the first or the second, contained the target.
Sounds simple enough — but the two individuals didn’t merely have to identify the target. They also had to agree. Each member of the pair — the scientists wonkily call it a “dyad” — separately indicated which of the images contained the target, and how confident they were about that. Then, if there was a disagreement, one individual was chosen at random to decide what the right answer was — and thus, who was right and who was wrong. And then, both individuals learned the truth about whether their group decision had been the correct one or not.
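For concreteness, here is a minimal sketch in Python of the structure of a single round (my own illustration, not the authors' code). The accuracies and the deferral probability are made-up parameters, and confidence reports are omitted for brevity:

# A minimal sketch (not the authors' code) of the dyad procedure: two
# simulated observers with unequal accuracy privately judge which of two
# intervals held the oddball target; on disagreement, a randomly chosen
# member announces the joint answer, deferring to the partner with some
# probability. All numbers are illustrative assumptions.
import random

def private_choice(true_interval, accuracy):
    """One observer's private judgment: correct with probability `accuracy`."""
    return true_interval if random.random() < accuracy else 1 - true_interval

def run_trial(acc_a=0.85, acc_b=0.65, w_partner=0.5):
    true_interval = random.randint(0, 1)      # 0 = first image, 1 = second
    a = private_choice(true_interval, acc_a)  # (confidence reports omitted)
    b = private_choice(true_interval, acc_b)
    if a == b:
        joint = a
    else:
        # One member is chosen at random to decide; w_partner is how often
        # the arbiter defers to the partner (0.5 mimics "equality bias").
        own, partner = random.choice([(a, b), (b, a)])
        joint = partner if random.random() < w_partner else own
    return joint == true_interval             # feedback shown to both members

correct = sum(run_trial() for _ in range(256))
print(f"joint accuracy over 256 trials: {correct / 256:.2f}")

Setting w_partner near 0.5 for both members is, in effect, the equality bias; what the study measured is how far each member's actual weighting strayed from what an optimal model recommends.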
This went on for 256 trials, so the two individuals got to know each other quite well — and to know one another’s accuracy and skill quite well. Thus, if one member of the pair was better than the other, both would pretty clearly notice. And a rational decision, you might think, would be for the less accurate member to begin to favor the views of the more accurate one — and for the more accurate one to favor his or her own assessments.
But that’s not what happened. Instead, report the study authors, “the worse members of each dyad underweighted their partner’s opinion (i.e., assigned less weight to their partner’s opinion than recommended by the optimal model), whereas the better members of each dyad overweighted their partner’s opinion.” Or to put it more bluntly, individuals tended to act “as if they were as good or as bad as their partner” — even when they quite obviously weren’t.
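To put numbers on what that optimal model recommends, it helps to idealize: if each member's vote is independently correct with some probability, then one common idealization (not necessarily the paper's exact model) weights each opinion by its log odds. A quick sketch, reusing the illustrative accuracies from above:

# A hedged sketch of the logic behind "optimal" opinion weighting. If each
# member is independently correct with probability p, their vote carries
# evidence equal to its log odds, log(p / (1 - p)); an ideal member weights
# the two opinions in proportion to these log odds. Equality bias instead
# pulls both weights toward 0.5. All numbers are illustrative.
import math

def log_odds(p):
    return math.log(p / (1 - p))

def optimal_weight_on_partner(p_self, p_partner):
    """Fraction of the joint decision an ideal member gives the partner."""
    return log_odds(p_partner) / (log_odds(p_self) + log_odds(p_partner))

p_better, p_worse = 0.85, 0.65
print(f"better member's ideal weight on partner: "
      f"{optimal_weight_on_partner(p_better, p_worse):.2f}")  # ~0.26
print(f"worse member's ideal weight on partner:  "
      f"{optimal_weight_on_partner(p_worse, p_better):.2f}")  # ~0.74

On these toy numbers, the better member should give the partner's opinion only about a quarter of the weight, and the worse member about three quarters; the behavior the study reports sat much closer to an even split.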
The researchers tried several variations on the experiment, and this “equality bias” didn’t go away. In one case, a “running score” reminded both members of the pair who was faring better (and who worse) at identifying the target — just in case it wasn’t obvious enough already. In another case, the task became much more difficult for one group member than the other, leading to a bigger gap in scores — accentuating differences in performance. And finally, in a third variant, actual money was offered for getting it right.
None of this did away with the “equality bias.”
So why do we do this? The authors, not surprisingly, point to the incredible power of human groups, and our dependence upon remaining members in good standing within them:
By confirming themselves more often than they should have, the inferior member of each dyad may have tried to stay relevant and socially included. Conversely, the better performing member may have been trying to avoid ignoring their partner.
Great instincts in general — except, of course, when facts and reality are at stake.
Nobody’s saying we ought to be mean to people, or put them down when they’re wrong — or even that experts always get it right. They don’t.
Still, I think it’s pretty obvious that human groups (especially in the United States) err much more in the direction of giving everybody a say than in the direction of deferring too much to experts. And that’s quite obviously harmful on any number of issues, especially in science, where what experts know really matters and where lives, or the world itself, may depend on it — as with vaccinations or climate change.
The new research underscores this conclusion — that we need to recognize experts more, respect them, and listen to them. But it also shows how our evolution in social groups binds us powerfully together and enforces collective norms, yet can go haywire when it comes to recognizing and accepting inconvenient truths.
Source: www.washingtonpost.com/news/energy-environment/wp/2015/03/10/the-science-of-protecting-peoples-feelings-why-we-pretend-all-opinions-are-equal/?postshare=8241425986674186