I want to talk about an interesting overconfidence experiment that one of my professors subjected us to back in the day…
So the deal is that you are given 10 numbers you are supposed to come up with guesses for, say, the current Wal-Mart stock price (you’re not allowed to check), the number of jelly beans in a jar, whatever. More specifically, you’re asked to come up with a “90% confidence interval” as your guess. In other words, give a range that you think will contain the real number 90 percent of the time. (Yes, this is a slight abuse of the typical definition of “confidence interval,” and no, I do not want an email about it. =P)
What usually happens is that when the actual numbers are revealed, most people see that their success rate was far less than 90 percent. (In case you’re curious, I was as bad at this task as everyone else.) This is viewed as a form of overconfidence, since people expected to be more correct than they actually were. (I guess the same thing happens when people are surprised by poor exam grades and such.)
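If you want to score the exercise yourself, here's a minimal sketch of how the tally works, using made-up question names, intervals, and "true" values purely for illustration:

```python
# Hypothetical scoring sketch for the calibration exercise described above.
# Each guess is a (low, high) interval; we count how many intervals contain
# the revealed true value and compare the hit rate to the 90% target.

guesses = {
    "walmart_stock_price": (40, 120),   # illustrative intervals, not real answers
    "jelly_beans_in_jar": (200, 1500),
}

true_values = {
    "walmart_stock_price": 88.0,        # placeholder "revealed" values
    "jelly_beans_in_jar": 640,
}

hits = sum(
    low <= true_values[q] <= high
    for q, (low, high) in guesses.items()
)

hit_rate = hits / len(guesses)
print(f"Hit rate: {hit_rate:.0%} (target for 90% intervals: 90%)")
```

With well-calibrated intervals the hit rate should land around 90 percent; in practice, most people's intervals are too narrow and the rate comes in well below that.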
There’s also a version of this exercise where, instead of intervals, all students have to make single point guesses and then report how many they expect to get right. Dear instructors: this is a fun thing to do in the classroom if you teach a course or course module that has to do with behavioral biases. That said, I do NOT recommend this as one of the questions:
Seriously??? pic.twitter.com/W8wLoI70ab
— Dustin Rumbaugh (@econwithdustin) October 6, 2017
I’m now incredibly curious as to whether the numbers are biased upwards or downwards on average.