In "Experience and validity of clinical judgment: The illusory correlation," Robyn Dawes explores the relationship between experience and accuracy.
There is research on the relationship between experience and diagnostic and predictive accuracy, and on the validity of interviewing people to find out what they are like. Garb has summarized the research on experience and accuracy: there is no relationship between years of clinical experience and accuracy of judgment. A report of an American Psychological Association task force convened in the early 1980s likewise noted that there was no evidence that professional competence is related to years of professional experience.
And yet we seek experienced people to be our teachers, executives, and political leaders.
Ben Franklin is often quoted as saying "experience is the best teacher," but the quotation continues: "and fools will learn from no other." And Franklin didn't say "the best teacher"; he said "dear teacher," where "dear" was clearly intended to mean expensive.
The 10,000-hour rule, popularized by Malcolm Gladwell and based on Anders Ericsson's study The Role of Deliberate Practice in the Acquisition of Expert Performance, states that to become an expert, one must have 10,000 hours of deliberate practice under one's belt. The rule has been widely disputed, including by Ericsson himself. There's no question that practice is necessary for improvement, but 10,000 hours isn't a magic number that wields the power of universal application.
Something else to ponder: why do we so often forget to account for how long an expert has been out of practice in their field?
So does experience really make you an expert? What does it actually mean to be one? It turns out that, in many contexts, we don't learn from experience at all.
The analysis of what we learn and why we learn it, however, quickly yields sobriety about embracing generalizations about the effect of experience on learning across all contexts. For example, learning to sit in a chair, become a chess grandmaster, make a correct medical diagnosis, or avoid a war are quite different processes. The word “learning” is, of course, common to all, but close examination reveals that it means little more than that someone with no experience whatsoever could not accomplish any of these tasks.
Dawes illuminates this highly contrarian idea through quite unremarkable human behaviours like sitting in a chair and driving.
What then are the differences? First, consider sitting in a chair. It is a motor skill. It is done automatically. It does not involve any conscious hypotheses. It is clearly learned through early experience that provides immediate feedback about failure. Finally, it is not taught in the sense that one person conveys a verbal or mathematical description to another about how to do it. (In fact an amusing exercise is to attempt to write such a description, convince somebody else to follow your instructions explicitly—and then watch the person fail.) Driving a car has many similar characteristics. For example, steering it in a straight line is accomplished by very tiny discrete adjustments of the steering wheel that are not accomplished consciously (Ehrlich, 1966). (The “weaving” behavior of drunk drivers is often due to the impairment of these movements, rather than to any visual problem.) The skills needed to perform these slight movements are attained only through experience driving; in fact, most complete novices on the first driving lesson alternate between going toward the ditch and almost crossing the center line—much to the surprise and consternation of their novice teachers, who themselves may be unaware of their own “tremorous” movements of the steering wheel. As with sitting in a chair, explicit verbal instructions to someone else about exactly how to drive a car could result in disaster for the person who follows them rigidly.
Consider the curious thing that happened during the Paris Wine Tasting of 1976, alternatively known as the Judgement of Paris (its name was inspired by a story in Greek mythology). During this blind tasting, French wine experts judged ten different reds and ten different whites. Contrary to the strongly held belief that France produced the finest wines, it was the California wines that received the highest scores. Not only did the shocking results of the competition call into question the supposed superiority of French wine; they also led people to wonder what authority an expert has over a casual wine drinker.
Abstract of "Experience and validity of clinical judgment: The illusory correlation"
Mental health experts often justify diagnostic and predictive judgments on the basis of “years of experience” with a particular type of person. Justification by experience is common in legal settings, and can have profound consequences for the person about whom such judgments are made. However, research shows that the validity of clinical judgment and amount of clinical experience are unrelated. The role of experience in learning varies as a function of what is to be learned. Experiments show that learning conceptual categories depends upon: (1) the learner’s having clear hypotheses about the possible rule for category membership prior to receiving feedback about which instances belong to the category, and, (2) the systematic nature of such feedback, especially about erroneous categorizations. Since neither of these conditions is satisfied in clinical contexts in psychology, the subsequent failure of experience per se to enhance diagnostic or predictive validity is unsurprising. Claims that “I can tell on the basis of my experience with people of a particular type (e.g., child abusers) that this person is of that type (e.g., a child abuser)” are simply invalid.
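The abstract's second condition — systematic feedback, especially about erroneous categorizations — is the one most often missing in clinical work. A small simulation (my own sketch, not from Dawes's paper; the base rate and the worthless "diagnostic sign" are hypothetical) shows how a sign with zero validity can still feel confirmed by experience when the clinician reviews only the cases the sign flagged and never the comparison cases:

```python
import random

# Sketch of illusory correlation arising from missing error feedback.
# Assumptions (not from the paper): a "diagnostic sign" that is
# statistically independent of the condition, and a clinician who
# only reviews the cases the sign flagged.
random.seed(42)

N = 100_000
BASE_RATE = 0.7  # hypothetical prevalence of the condition among cases seen

# Each case: (sign present?, condition present?) — drawn independently,
# so the sign carries no information about the condition at all.
cases = [(random.random() < 0.5, random.random() < BASE_RATE)
         for _ in range(N)]

flagged = [cond for sign, cond in cases if sign]        # cases the sign "caught"
unflagged = [cond for sign, cond in cases if not sign]  # the ignored comparison cell

hit_rate_flagged = sum(flagged) / len(flagged)
hit_rate_unflagged = sum(unflagged) / len(unflagged)

# Among flagged cases, ~70% "confirm" the sign — experience feels validating.
# But unflagged cases confirm at the same rate: zero incremental validity.
print(f"flagged:   {hit_rate_flagged:.2f}")
print(f"unflagged: {hit_rate_unflagged:.2f}")
```

Only by comparing both cells — which requires feedback on the cases one did not flag — does the sign's worthlessness become visible; experience alone just accumulates confirmations.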
Still curious? Try reading The Ambiguities of Experience. If you want to learn more about Dawes, check out his book Everyday Irrationality: How Pseudo-Scientists, Lunatics, and the Rest of Us Systematically Fail to Think Rationally.