
The Uses of Being Wrong

Confessions of wrongness are the exception, not the rule.

Daniel Drezner, a professor of international politics at the Fletcher School of Law and Diplomacy at Tufts University, pointing to the difference between being wrong in a prediction and making an error, writes:

Error, even if committed unknowingly, suggests sloppiness. That carries a more serious stigma than making a prediction that fails to come true.

The social sciences, unlike the physical and natural sciences, suffer from a shortage of high-quality data on which to base predictions.

How Does Science Advance?

A theory may be scientific even if there is not a shred of evidence in its favour, and it may be pseudoscientific even if all the available evidence is in its favour. That is, the scientific or non-scientific character of a theory can be determined independently of the facts. A theory is ‘scientific’ if one is prepared to specify in advance a crucial experiment (or observation) which can falsify it, and it is pseudoscientific if one refuses to specify such a ‘potential falsifier’. But if so, we do not demarcate scientific theories from pseudoscientific ones, but rather scientific method from non-scientific method.

Karl Popper viewed the progression of science as a process of falsification: science advances by eliminating what does not hold up. But Popper’s falsifiability criterion ignores the tenacity of scientific theories, even in the face of disconfirming evidence. Scientists, like the rest of us, do not abandon a theory merely because the evidence contradicts it.

The wake of science is littered with discussions of anomalies, not refutations.

Another theory of scientific advancement, proposed by Thomas Kuhn, the distinguished American philosopher of science, holds that science proceeds through a series of revolutions, each accompanied by something like a religious conversion.

Imre Lakatos, a Hungarian philosopher of mathematics and science, wrote:

(The) history of science, of course, is full of accounts of how crucial experiments allegedly killed theories. But all such accounts are fabricated long after the theory has been abandoned.

Lakatos bridged the gap between Popper and Kuhn by addressing what both failed to resolve.

The hallmark of empirical progress is not trivial verifications: Popper is right that there are millions of them. It is no success for Newtonian theory that stones, when dropped, fall towards the earth, no matter how often this is repeated. But, so-called ‘refutations’ are not the hallmark of empirical failure, as Popper has preached, since all programmes grow in a permanent ocean of anomalies. What really counts are dramatic, unexpected, stunning predictions: a few of them are enough to tilt the balance; where theory lags behind the facts, we are dealing with miserable degenerating research programmes.

Now, how do scientific revolutions come about? If we have two rival research programmes, and one is progressing while the other is degenerating, scientists tend to join the progressive programme. This is the rationale of scientific revolutions. But while it is a matter of intellectual honesty to keep the record public, it is not dishonest to stick to a degenerating programme and try to turn it into a progressive one.

As opposed to Popper the methodology of scientific research programmes does not offer instant rationality. One must treat budding programmes leniently: programmes may take decades before they get off the ground and become empirically progressive. Criticism is not a Popperian quick kill, by refutation. Important criticism is always constructive: there is no refutation without a better theory. Kuhn is wrong in thinking that scientific revolutions are sudden, irrational changes in vision. [The history of science refutes both Popper and Kuhn: ] On close inspection both Popperian crucial experiments and Kuhnian revolutions turn out to be myths: what normally happens is that progressive research programmes replace degenerating ones.

***

Much of the falsification effort is devoted to proving others wrong, not ourselves. “It’s rare,” Drezner writes, “for academics to publicly disavow their own theories and hypotheses.”

Indeed, a common lament in the social sciences is that negative findings—i.e., empirical tests that fail to support an author’s initial hypothesis—are never published.

Why is it so hard for us to see when we are wrong?

It is not necessarily concern for one’s reputation. Even predictions that turn out to be wrong can be intellectually profitable—all social scientists love a good straw-man argument to pummel in a literature review. Bold theories get cited a lot, regardless of whether they are right.

Part of the reason is simple psychology; we all like being right much more than being wrong.

As Kathryn Schulz observes in Being Wrong, “the thrill of being right is undeniable, universal, and (perhaps most oddly) almost entirely undiscriminating…. It’s more important to bet on the right foreign policy than the right racehorse, but we are perfectly capable of gloating over either one.”

As we create arguments and gather supporting evidence (while discarding evidence that does not fit) we increasingly persuade ourselves that we are right. We gain confidence and try to sway the opinions of others.

There are benefits to being wrong.

Schulz argues in Being Wrong that “the capacity to err is crucial to human cognition. Far from being a moral flaw, it is inextricable from some of our most humane and honorable qualities: empathy, optimism, imagination, conviction, and courage. And far from being a mark of indifference or intolerance, wrongness is a vital part of how we learn and change.”

Drezner argues that some of the tools of the information age give us hope that we might become increasingly likely to admit being wrong.

Blogging and tweeting encourages the airing of contingent and tentative arguments as events play out in real time. As a result, far less stigma attaches to admitting that one got it wrong in a blog post than in peer-reviewed research. Indeed, there appears to be almost no professional penalty for being wrong in the realm of political punditry. Regardless of how often pundits make mistakes in their predictions, they are invited back again to pontificate more.

As someone who has blogged for more than a decade, I’ve been wrong an awful lot, and I’ve grown somewhat more comfortable with the feeling. I don’t want to make mistakes, of course. But if I tweet or blog my half-formed supposition, and it then turns out to be wrong, I get more intrigued about why I was wrong. That kind of empirical and theoretical investigation seems more interesting than doubling down on my initial opinion. Younger scholars, weaned on the Internet, more comfortable with the push and pull of debate on social media, may well feel similarly.

Still curious? Daniel W. Drezner is the author of The System Worked: How the World Stopped Another Great Depression.

Thomas Kuhn: The Structure of Scientific Revolutions

“The decision to reject one paradigm is always simultaneously the decision to accept another, and the judgment leading to that decision involves the comparison of both paradigms with nature and with each other.”


The progress of science is commonly perceived as a continuous, incremental advance, where new discoveries add to the existing body of scientific knowledge. This view of scientific progress, however, is challenged by the physicist and philosopher of science Thomas Kuhn in his book The Structure of Scientific Revolutions. Kuhn argues that the history of science tells a different story, one in which science proceeds through a series of revolutions that interrupt normal, incremental progress.

“A prevailing theory or paradigm is not overthrown by the accumulation of contrary evidence,” Richard Zeckhauser wrote, “but rather by a new paradigm that, for whatever reasons, begins to be accepted by scientists.”

Between scientific revolutions, old ideas and beliefs persist. These form the barriers of resistance to alternative explanations.

Zeckhauser continues: “In this view, scientific scholars are subject to status quo persistence. Far from being objective decoders of the empirical evidence, scientists have decided preferences about the scientific beliefs they hold. From a psychological perspective, this preference for beliefs can be seen as a reaction to the tensions caused by cognitive dissonance.”

***

Gary Taubes published an excellent blog post discussing how paradigm shifts come about in science. He wrote:

…as Kuhn explained in The Structure of Scientific Revolutions, his seminal thesis on paradigm shifts, the people who invariably do manage to shift scientific paradigms are “either very young or very new to the field whose paradigm they change… for obviously these are the men [or women, of course] who, being little committed by prior practice to the traditional rules of normal science, are particularly likely to see that those rules no longer define a playable game and to conceive another set that can replace them.”

So when a shift does happen, it’s almost invariably the case that an outsider or a newcomer, at least, is going to be the one who pulls it off. This is one thing that makes this endeavor of figuring out who’s right or what’s right such a tricky one. Insiders are highly unlikely to shift a paradigm and history tells us they won’t do it. And if outsiders or newcomers take on the task, they not only suffer from the charge that they lack credentials and so credibility, but their work de facto implies that they know something that the insiders don’t – hence, the idiocy implication.

…This leads to a second major problem with making these assessments – who’s right or what’s right. As Kuhn explained, shifting a paradigm includes not just providing a solution to the outstanding problems in the field, but a rethinking of the questions that are asked, the observations that are considered and how those observations are interpreted, and even the technologies that are used to answer the questions. In fact, often the problems that the new paradigm solves, the questions it answers, are not the problems and the questions that practitioners living in the old paradigm would have recognized as useful.

“Paradigms provide scientists not only with a map but also with some of the direction essential for map-making,” wrote Kuhn. “In learning a paradigm the scientist acquires theory, methods, and standards together, usually in an inextricable mixture. Therefore, when paradigms change, there are usually significant shifts in the criteria determining the legitimacy both of problems and of proposed solutions.”

As a result, Kuhn said, researchers on different sides of conflicting paradigms can barely discuss their differences in any meaningful way: “They will inevitably talk through each other when debating the relative merits of their respective paradigms. In the partially circular arguments that regularly result, each paradigm will be shown to satisfy more or less the criteria that it dictates for itself and to fall short of a few of those dictated by its opponent.”

But Taubes’ explanation wasn’t enough to satisfy my curiosity.

***

The Structure of Scientific Revolutions

To learn more on how paradigm shifts happen, I purchased Kuhn’s book, The Structure of Scientific Revolutions, and started to investigate.

Kuhn writes:

“The decision to reject one paradigm is always simultaneously the decision to accept another, and the judgment leading to that decision involves the comparison of both paradigms with nature and with each other.”

Anomalies are not all bad.

Yet a scientist who paused to examine and refute every anomaly would seldom get any work done.

…during the sixty years after Newton’s original computation, the predicted motion of the moon’s perigee remained only half of that observed. As Europe’s best mathematical physicists continued to wrestle unsuccessfully with the well-known discrepancy, there were occasional proposals for a modification of Newton’s inverse square law. But no one took these proposals very seriously, and in practice this patience with a major anomaly proved justified. Clairaut in 1750 was able to show that only the mathematics of the application had been wrong and that Newtonian theory could stand as before. … persistent and recognized anomaly does not always induce crisis. … It follows that if an anomaly is to evoke crisis, it must usually be more than just an anomaly.

So what makes an anomaly worth the effort of investigation?

To that question Kuhn responds, “there is probably no fully general answer.” Einstein knew how to sift the essential from the non-essential better than most.

When the anomaly comes to be recognized as more than just another puzzle of science, the transition, or revolution, has begun.

The anomaly itself now comes to be more generally recognized as such by the profession. More and more attention is devoted to it by more and more of the field’s most eminent men. If it still continues to resist, as it usually does not, many of them may come to view its resolution as the subject matter of their discipline. …

Early attacks on the anomaly will have followed the paradigm rules closely. As time passes and scrutiny increases, more of the attacks begin to diverge from the existing paradigm. It is “through this proliferation of divergent articulations,” Kuhn argues, that “the rules of normal science become increasingly blurred.

Though there still is a paradigm, few practitioners prove to be entirely agreed about what it is. Even formally standard solutions of solved problems are called into question.”

Einstein explained this transition, which is the structure of scientific revolutions, best. He said: “It was as if the ground had been pulled out from under one, with no firm foundation to be seen anywhere, upon which one could have built.”

All scientific crises begin with the blurring of a paradigm.

In this respect research during crisis very much resembles research during the pre-paradigm period, except that in the former the locus of difference is both smaller and more clearly defined. And all crises close in one of three ways. Sometimes normal science ultimately proves able to handle the crisis-provoking problem despite the despair of those who have seen it as the end of an existing paradigm. On other occasions the problem resists even apparently radical new approaches. Then scientists may conclude that no solution will be forthcoming in the present state of their field. The problem is labelled and set aside for a future generation with more developed tools. Or, finally, the case that will most concern us here, a crisis may end with the emergence of a new candidate for paradigm and with the ensuing battle over its acceptance.

But this isn’t easy.

The transition from a paradigm in crisis to a new one from which a new tradition of normal science can emerge is far from a cumulative process, one achieved by an articulation or extension of the old paradigm. Rather it is a reconstruction of the field from new fundamentals, a reconstruction that changes some of the field’s most elementary theoretical generalizations as well as many of its paradigm methods and applications.

Who solves these problems? Do the men and women who have invested a large portion of their lives in a field or theory suddenly confront the evidence and change their minds? Sadly, no.

Almost always the men who achieve these fundamental inventions of a new paradigm have been either very young or very new to the field whose paradigm they change. And perhaps that point need not have been made explicit, for obviously these are men who, being little committed by prior practice to the traditional rules of normal science, are particularly likely to see that those rules no longer define a playable game and to conceive another set that can replace them.

And

Therefore, when paradigms change, there are usually significant shifts in the criteria determining the legitimacy both of problems and of proposed solutions.

That observation returns us to the point from which this section began, for it provides our first explicit indication of why the choice between competing paradigms regularly raises questions that cannot be resolved by the criteria of normal science. To the extent, as significant as it is incomplete, that two scientific schools disagree about what is a problem and what is a solution, they will inevitably talk through each other when debating the relative merits of their respective paradigms. In the partially circular arguments that regularly result, each paradigm will be shown to satisfy more or less the criteria that it dictates for itself and to fall short of a few of those dictated by its opponent. There are other reasons, too, for the incompleteness of logical contact that consistently characterizes paradigm debates. For example, since no paradigm ever solves all the problems it defines and since no two paradigms leave all the same problems unsolved, paradigm debates always involve the question: Which problems is it more significant to have solved? Like the issue of competing standards, that question of values can be answered only in terms of criteria that lie outside of normal science altogether.

Many years ago Max Planck offered this insight: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”

If you’re interested in learning more about how paradigm shifts happen, read The Structure of Scientific Revolutions.