In some fields, experts are unimaginably better than novices. Master chess players, for instance, are demonstrably better than beginners: a chess master can look at a board position and immediately spot useful moves.
In other fields, experienced practitioners believe they can make expert judgments about their subject and are confident in those judgments, yet they are correct about as often as random selection.
Why do some types of expertise have solid evidence, while others seem fundamentally broken?
Much of this follows the structure of Kahneman and Klein (2009). Lots more of it follows from the details in Ericsson et al. (2006).
A Model of Reliable Intuition
According to current models, the skilled intuition of reliable experts is precisely recognition of learned patterns. This intuition may seem mysterious to someone observing it, and possibly even to someone experiencing it. This is because the relevant pattern recognition is frequently subverbal – the expert often cannot explain the cue for that intuition.
So: expertise is domain specific.
Note that reliable, expert intuition about a situation can only arise if the situation yields valid cues, and it is possible for a would-be expert to learn those cues. Thus, skilled intuition relies on sufficiently regular environments.
Predicting the long-term behavior of humans or human systems is not “sufficiently regular”!
Steps of attaining expertise:
- cognitive stage (system 2 action and learning)
- associative stage (system 1 learning)
- autonomous stage (primarily system 1 action)
This is how experts come to think more quickly, and in parallel, via associative thought.
Ways Intuition Can Be Wrong
Several biases we’ve learned: insufficient reflection, anchoring, substitution. These only happen in the “absence of skill”; a learned response to a particular task can override any of these biases. (citation?) But there’s no easy way to tell when you’re applying a learned skill, versus using a general and inappropriate heuristic.
“Subjective confidence is often determined by the internal consistency of the information on which a judgment is based, rather than by the quality of that information.” That is, the conjunction fallacy has real intuitive power.
In many cases, models of people’s judgments outperform those people. Statistically, this can be explained entirely by inconsistent judgment in those people. Across many tested tasks, assuming linear outcomes but being consistent (like a “simple” linear model) is more advantageous than letting a human be inconsistent but nonlinear.[^lens model studies] Moreover, humans are actually bad at learning nonlinear situations.
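The consistency advantage can be seen in a toy simulation (a sketch with invented numbers, not data from the cited lens-model studies): fit a linear "model of the judge" to a noisy judge's own ratings, and the model predicts outcomes better than the judge does, simply because the fit averages out the judge's inconsistency.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical environment: the outcome is a weighted sum of three cues plus noise.
cues = rng.normal(size=(n, 3))
true_w = np.array([0.6, 0.3, 0.1])
outcome = cues @ true_w + rng.normal(scale=0.5, size=n)

# Simulated judge: weighs the cues correctly, but inconsistently (extra noise).
judge = cues @ true_w + rng.normal(scale=0.8, size=n)

# "Model of the judge": a linear fit to the judge's own ratings.
w_fit, *_ = np.linalg.lstsq(cues, judge, rcond=None)
model = cues @ w_fit

def validity(pred):
    # Correlation between a prediction and the true outcomes.
    return np.corrcoef(pred, outcome)[0, 1]

print("judge validity:", validity(judge))
print("model-of-judge validity:", validity(model))
```

With these made-up parameters the model's validity is reliably higher than the judge's: the model applies the judge's own (correct) weights without the judge's trial-to-trial noise.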
Confidence is a poor indicator of reliability!
How to Tell the Difference
Short of an in-depth study analyzing the outside view of expert opinion, the best way to tell the difference between reliable and unreliable expertise is to assess how tightly the expert’s intuition has been bound to results:
- The situation must yield reliable cues
- The expert must have prolonged, deliberate practice with the system
- That practice must have involved rapid, unequivocal feedback
- The cues and situation must not shift.
“If an environment provides valid cues and good feedback, skill and expert intuition will eventually develop in individuals of sufficient talent.” (Kahneman and Klein, 2009) [^but wait]
“We describe task environments as ‘high-validity’ if there are stable relationships between objectively identifiable cues and subsequent events or between cues and the outcomes of possible actions.”
In low-validity environments, simple statistical approaches frequently outperform experts, by identifying and steadily using weakly-valid cues, instead of yielding the noisy outputs characteristic of human experts in such fields. [^lens model studies]
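A minimal illustration of this, again with invented numbers rather than data from the cited studies: in a low-validity environment, even an equal-weight rule that simply adds up the weak cues, and so applies them with perfect consistency, beats a simulated expert who knows the same cues but judges noisily.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Hypothetical low-validity environment: four cues, each only weakly
# related to an outcome that is mostly noise.
cues = rng.normal(size=(n, 4))
weak_w = np.array([0.20, 0.15, 0.10, 0.05])
outcome = cues @ weak_w + rng.normal(scale=1.0, size=n)

# Simulated expert: uses the same weak cues, but adds judgment noise.
expert = cues @ weak_w + rng.normal(scale=0.6, size=n)

# Simple statistical rule: sum the cues with equal weights,
# using the weakly valid cues steadily, with no noise of its own.
rule = cues.sum(axis=1)

def validity(pred):
    # Correlation between a prediction and the true outcomes.
    return np.corrcoef(pred, outcome)[0, 1]

print("expert validity:", validity(expert))
print("equal-weight rule validity:", validity(rule))
```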
Highly predictable environments yield higher “linear consistency” of human judgment.[^lens model studies]
How to Become an Expert
(This section is entirely about expert performance, rather than just expert judgment. That distinction may account for some of the differences between these fields.)
Expertise requires “deliberate, well-structured practice” (Ericsson et al., 2006).
Deliberate practice requires:
- Lots of time. (on the order of 10,000 hours, in many studies)
- Setting specific goals beyond one’s current level of performance, but which can be mastered within hours.
- Appropriate, immediate, objective feedback.
- Becoming able to identify errors; continuously identifying and eliminating errors.
Deliberate practice requires steady concentration! It does not become automatic – rather, it is a way to prevent practiced actions from becoming automatic. The need for concentration limits deliberate practice to four to five hours a day, and to at most an hour or so without rest.
Maintaining expertise requires deliberate practice; without well-designed practice, lots of practice does not yield expertise.
For Further Thought
- In what domains should we expect expert judgment to be correct?
- In what domains should we expect expert judgment to fail? Could these judgments be systematically improved?
- What might work when some of the conditions for deliberate practice can’t be met?
- For whatever we’re trying to improve, how can we better structure our practice?
- In particular, what does practice towards expertise in rationality look like?
- This is relevant!
Daniel Kahneman and Gary Klein. 2009. Conditions for Intuitive Expertise: A Failure to Disagree.
[^lens model studies]: Karelaia and Hogarth. 2008. Determinants of Linear Judgment: A Meta-Analysis of Lens Model Studies.
Ericsson, Charness, Hoffman, and Feltovich. 2006. The Cambridge Handbook of Expertise and Expert Performance.