New Yorker | Jonah Lehrer | Why Smart People Are Stupid

First: understanding cognitive bias is important. Okay, now that that's out of the way, here's my problem with these Kahneman-type studies:
West also gave a puzzle that measured subjects’ vulnerability to something called “anchoring bias,” which Kahneman and Tversky had demonstrated in the nineteen-seventies. Subjects were first asked if the tallest redwood tree in the world was more than X feet, with X ranging from eighty-five to a thousand feet. Then the students were asked to estimate the height of the tallest redwood tree in the world. Students exposed to a small “anchor”—like eighty-five feet—guessed, on average, that the tallest tree in the world was only a hundred and eighteen feet. Given an anchor of a thousand feet, their estimates increased seven-fold.
The types of questions people use in the lab to expose biases and cognitive failures are alien to the way people actually think. Many things that are clearly errors from a strictly objective, calculating perspective are very useful in the real world. Useful enough that I think they may not be built-in biases, but learned adaptations.
Let's take an analog of the redwood situation. If a friend asks you whether you think his suitcase is over or under 50 pounds, that's probably not idle speculation on his part. He probably knows something about the weight of the suitcase that you don't. Maybe he's used it before and remembers roughly what it weighed. In that case it makes a lot of sense to anchor your estimate somewhere around 50 pounds. If the real weight of the suitcase were 10 pounds or 125 pounds, your (rational, observant) friend wouldn't have asked whether you thought it was over or under 50. In the real world that framing question is often relevant; in the lab it isn't. How valid a conclusion is it to go "Aha!! People are distracted by the often-relevant question specifically chosen to be irrelevant in our study!"
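The suitcase intuition can be made concrete with a toy Bayesian model. This is my own illustrative sketch, not anything from the article or the studies, and every number in it (the prior range, the 15 lb "spread" of plausible questions) is a made-up assumption. The point it demonstrates: if people who ask over/under questions tend to pick thresholds near the true value, then treating the question itself as evidence and shifting your estimate toward the anchor is rational inference, not a reasoning flaw.

```python
import math

def posterior_mean(anchor, prior_lo=5, prior_hi=150, spread=15.0):
    """Posterior mean of a suitcase's weight after hearing the friend
    ask "over or under `anchor` pounds?".

    Prior: uniform over plausible suitcase weights (prior_lo..prior_hi lbs).
    Likelihood: the friend is assumed more likely to ask about a threshold
    near the true weight (Gaussian in the threshold, with the given spread).
    Both assumptions are invented for illustration.
    """
    weights = range(prior_lo, prior_hi + 1)  # discrete 1 lb grid
    # Unnormalized posterior = uniform prior x Gaussian "question" likelihood.
    post = [math.exp(-((w - anchor) ** 2) / (2 * spread ** 2)) for w in weights]
    total = sum(post)
    return sum(w * p for w, p in zip(weights, post)) / total

if __name__ == "__main__":
    # The prior mean alone is (5 + 150) / 2 = 77.5 lbs; the question shifts it.
    print(posterior_mean(50))
    print(posterior_mean(100))
```

Under this model, hearing the 50 lb question pulls the estimate from the prior mean of 77.5 lbs down to roughly 50, and a 100 lb question pulls it up toward 100. The "anchor" moves the estimate precisely because, in a world of informative questioners, it carries real information.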
And here’s the upsetting punch line: intelligence seems to make things worse. The scientists gave the students four measures of “cognitive sophistication.” As they report in the paper, all four of the measures showed positive correlations, “indicating that more cognitively sophisticated participants showed larger bias blind spots.” This trend held for many of the specific biases, indicating that smarter people (at least as measured by S.A.T. scores) and those more likely to engage in deliberation were slightly more vulnerable to common mental mistakes.

A lot of the questions these researchers use are structured like the "word problems" we all had for 13 years of primary and secondary education. We all got good at thinking the way the people who write word problems want us to think, which often means accepting a very simplified world view, ignoring outside knowledge, etc.*
I suspect one of the things "smart" people learn with these tests is not to overthink things: you don't want to give the right answer, you want to give the answer the test-maker wants you to give. That's often a stupid answer. For a fictional representation, refer to Lawrence Waterhouse's US Navy induction test in Cryptonomicon.

The questions in these research studies are presented like K12 word problems, but don't actually function like that. The researchers shouldn't be surprised that people answer them as if they were K12 word problems.
West et al. are surprised that people who did better on standardized tests showed larger biases in their studies. But these are people who have trained themselves (or are naturally good at) accepting the universe of the test question. Just as a friend doesn't ask you whether his 15 lb bag is over 50 lbs, a test writer doesn't ask whether the tallest tree is over 85 ft if it's really a few hundred. That initial question is not there randomly; it's an additional piece of information that should give you a clue about the rest of the test.
When you expose people who are good at uncovering and using that information to a situation that is superficially the same as the tests they are accustomed to, but written by someone with completely different objectives to test a completely different skill set, you should not be surprised that they use their test-taking skills anyway. This is interesting, but I don't think it tells us as much about intelligence and cognitive bias as the Kahnemans, Wests, and especially Lehrers of the world would like us to think it does.