False-belief task

Description of the original task by its developers, Wimmer & Perner (1983):

Understanding of another person’s wrong belief requires explicit representation of the wrongness of this person’s belief in relation to one’s own knowledge. Three- to nine-year-old children’s understanding of two sketches was tested. In each sketch subjects observed how a protagonist put an object into a location x and then witnessed that in the absence of the protagonist the object was transferred from x to location y. Since this transfer came as a surprise they had to assume that the protagonist still believed that the object was in x. Subjects had to indicate where the protagonist will look for the object at his return.

Even as adults, we still make similar errors (whether in belief or behavior). Lesson: Just because you know something doesn’t mean others do (as in flawed perspective-taking). Or: Once you know something, it’s hard to imagine (yourself or others) not knowing it (as in expert blindspot).

Original paper:

Interesting critique of the false-belief task:

Difficulties of accommodating discrepant information

On “The Wrong Stuff – Reasonable Doubt: Innocence Project Co-Founder Peter Neufeld on Being Wrong”:

I think generally speaking it’s difficult for people to admit they’re wrong, and the higher the stakes, the more difficult it becomes. So what you really want to do is educate people that it’s OK to be wrong. It doesn’t mean you’re a fool. It’s not going to be the end of your life.

There are high social costs to being wrong, and creating a culture that values thoughtfulness and humility rather than tenacity may alleviate this phenomenon. (Ironically, one might expect this to be worse in a collectivist culture, where there could be more shame, surprise, or negative attention attached to retracting publicly stated beliefs. In contrast, individualistic cultures that celebrate different ideas might be more tolerant of changing one’s mind.)

But I think there are high cognitive and metacognitive costs to being wrong as well. Part of it could be a consequence of generating a hypothesis or belief, akin to the dangers of convincing oneself of the correctness of a guess (e.g., when taking a pretest). The more a person articulates or mentally rehearses an idea, the more s/he becomes committed to it (i.e., strengthens the memory trace, elaborates on potential explanations, draws connections to prior knowledge).

Further, someone whose self-concept is strongly linked to having the right answers might feel more threatened by realizing s/he made an error. And someone who thinks that intelligence is knowing facts rather than exercising good reasoning would probably be more disturbed by having to acknowledge getting the facts wrong.

So what does this suggest? Perhaps we should encourage more tentativeness and skepticism, an appreciation of the probabilistic nature of knowledge, and comfort with staking out cautious claims. Maybe we should ask people to propose multiple conditional hypotheses instead of single predictions. And most of all, we should put wrongness back in its place: linked to the idea, not the person.

How facts backfire

On “How facts backfire”:

Researchers discover a surprising threat to democracy: our brains

This has profound implications for educating the general populace. I’ve actually just been pondering the ethics of educating people up to (down to?) the trough of the U-shaped curve of learning and development.

Lately I’ve found myself coming back to Strike & Posner’s “intelligible, plausible, and fruitful” criteria for conceptual change. If our target audience doesn’t perceive these new ideas to be fruitful, they’ll have no motivation to change.

I’ve also been thinking of all these ways in which a little (or a lot) of knowledge can make learning harder: backfire, U-shaped development, expert blindspot, information overload. I’ll probably think of more to add to the list later. Given the considerable risks of this happening through so many different mechanisms, how can we equip learners against them? It seems that some of the answers may lie in influencing the learner’s affective, motivational, and metacognitive states: making errors and belief change nonthreatening, incentivizing accurate information and valid reasoning, and developing an understanding of these cognitive errors. But I’m still concerned about learners for whom this doesn’t succeed and who then get left worse off than they began.

Concerns about the LA Times teacher ratings

On “L.A. Times analysis rates teachers’ effectiveness”:

A Times analysis, using data largely ignored by LAUSD, looks at which educators help students learn, and which hold them back.

I’m a huge fan of organizing, analyzing, and sharing data, but I have real concerns about figuring out the best means for conveying and acting upon those results. My worries go beyond data quality (what gets assessed, how scores are calculated and weighted) to contextualizing results (triangulation with qualitative data) and professional development (social comparison, ongoing support).

The influence of stereotype threat on learning

On “Negative stereotypes shown to affect learning, not just performance”:

While the effect of negative performance stereotypes on test-taking and in other domains is well documented, a new study shows that the effects might also be seen further upstream than once thought, when the skills are learned, not just performed.

The power of expectation influences learning, not just performance. Next steps should examine tasks where more effortful processing is beneficial. (It’s also a fascinating demonstration of the distinction between learning and performance, often difficult to disentangle.)

Rydell, R. J., Shiffrin, R. M., Boucher, K. L., Van Loo, K., & Rydell, M. T. (2010). Stereotype threat prevents perceptual learning. Proceedings of the National Academy of Sciences, 107, 14042–14047. DOI: 10.1073/pnas.1002815107