Category: People

Herbert Simon on the Distinction Between What is Legal and What We Will Tolerate

You’d break the law. In fact most of us would. How can I say this with near certainty? Because if you were put in a position where the ends justified the means, the means would become acceptable.

The person who steals bread so his starving child can eat is easy to sympathize with. Though the act is illegal, most of us understand and even tolerate it.

However, would we tolerate bigger transgressions? For the answer to this question we turn to Nobel Prize-winning social scientist Herbert Simon. Simon’s contributions to our growing repository of wisdom include why the principles of good management are not often followed, the role of intuition in experts, and why organizational planning is doomed to failure.

In his autobiography Models of My Life, Simon touches on the difference between what a society will tolerate and its laws.

A revolution aims at bringing about fundamental changes in institutions by employing illegal tactics. What is legal and what a society will tolerate are distinct. When there is sympathy for ends, illegal means may become acceptable and the laws against them unenforceable.

Fundamentally we seem to believe that if the ends warrant the means, the means will be accepted. Killing to overthrow a dictator, which is obviously against the law, becomes acceptable at a certain point if the dictator is too horrible. The question of when it becomes acceptable, however, while easy to answer at the edge cases, becomes grey in the middle.

Simon continues:

If a revolution aims at overthrowing an entire legal system, the role of the illegal action is to arouse an already sympathetic population; to goad the defenders of the legal system to severity that will arouse additional sympathy; to demonstrate strength, hence to reduce fear of the authorities and to increase fear of the revolutionaries; and finally to seize weapons and strong points. When people no longer believe that the existing laws can be enforced, the first half of the revolution has been won. There remains the task of securing for it the “right” party. This has been the common point of failure for the moderates.

There are also situations where the laws are better than what the government will enforce. Something to think about.

Thomas Kuhn: The Structure of Scientific Revolutions

“The decision to reject one paradigm is always simultaneously the decision to accept another, and the judgment leading to that decision involves the comparison of both paradigms with nature and with each other.”

The progress of science is commonly perceived as a continuous, incremental advance, where new discoveries add to the existing body of scientific knowledge. This view of scientific progress, however, is challenged by the physicist and philosopher of science Thomas Kuhn in his book The Structure of Scientific Revolutions. Kuhn argues that the history of science tells a different story, one in which normal incremental progress is interrupted by a series of revolutions.

“A prevailing theory or paradigm is not overthrown by the accumulation of contrary evidence,” Richard Zeckhauser wrote, “but rather by a new paradigm that, for whatever reasons, begins to be accepted by scientists.”

Between scientific revolutions, old ideas and beliefs persist. These form the barriers of resistance to alternative explanations.

Zeckhauser continues: “In this view, scientific scholars are subject to status quo persistence. Far from being objective decoders of the empirical evidence, scientists have decided preferences about the scientific beliefs they hold. From a psychological perspective, this preference for beliefs can be seen as a reaction to the tensions caused by cognitive dissonance.”

* * *

Gary Taubes posted an excellent blog post discussing how paradigm shifts come about in science. He wrote:

…as Kuhn explained in The Structure of Scientific Revolutions, his seminal thesis on paradigm shifts, the people who invariably do manage to shift scientific paradigms are “either very young or very new to the field whose paradigm they change… for obviously these are the men [or women, of course] who, being little committed by prior practice to the traditional rules of normal science, are particularly likely to see that those rules no longer define a playable game and to conceive another set that can replace them.”

So when a shift does happen, it’s almost invariably the case that an outsider or a newcomer, at least, is going to be the one who pulls it off. This is one thing that makes this endeavor of figuring out who’s right or what’s right such a tricky one. Insiders are highly unlikely to shift a paradigm and history tells us they won’t do it. And if outsiders or newcomers take on the task, they not only suffer from the charge that they lack credentials and so credibility, but their work de facto implies that they know something that the insiders don’t – hence, the idiocy implication.

…This leads to a second major problem with making these assessments – who’s right or what’s right. As Kuhn explained, shifting a paradigm includes not just providing a solution to the outstanding problems in the field, but a rethinking of the questions that are asked, the observations that are considered and how those observations are interpreted, and even the technologies that are used to answer the questions. In fact, often the problems that the new paradigm solves, the questions it answers, are not the problems and the questions that practitioners living in the old paradigm would have recognized as useful.

“Paradigms provide scientists not only with a map but also with some of the direction essential for map-making,” wrote Kuhn. “In learning a paradigm the scientist acquires theory, methods, and standards together, usually in an inextricable mixture. Therefore, when paradigms change, there are usually significant shifts in the criteria determining the legitimacy both of problems and of proposed solutions.”

As a result, Kuhn said, researchers on different sides of conflicting paradigms can barely discuss their differences in any meaningful way: “They will inevitably talk through each other when debating the relative merits of their respective paradigms. In the partially circular arguments that regularly result, each paradigm will be shown to satisfy more or less the criteria that it dictates for itself and to fall short of a few of those dictated by its opponent.”

But Taubes’ explanation wasn’t enough to satisfy my curiosity.

* * *

The Structure of Scientific Revolutions

To learn more on how paradigm shifts happen, I purchased Kuhn’s book, The Structure of Scientific Revolutions, and started to investigate.

Kuhn writes:

“The decision to reject one paradigm is always simultaneously the decision to accept another, and the judgment leading to that decision involves the comparison of both paradigms with nature and with each other.”

Anomalies are not all bad.

Yet any scientist who pauses to examine and refute every anomaly will seldom get any work done.

…during the sixty years after Newton’s original computation, the predicted motion of the moon’s perigee remained only half of that observed. As Europe’s best mathematical physicists continued to wrestle unsuccessfully with the well-known discrepancy, there were occasional proposals for a modification of Newton’s inverse square law. But no one took these proposals very seriously, and in practice this patience with a major anomaly proved justified. Clairaut in 1750 was able to show that only the mathematics of the application had been wrong and that Newtonian theory could stand as before. … persistent and recognized anomaly does not always induce crisis. … It follows that if an anomaly is to evoke crisis, it must usually be more than just an anomaly.

So what makes an anomaly worth the effort of investigation?

To that question Kuhn responds, “there is probably no fully general answer.” Einstein knew how to sift the essential from the non-essential better than most.

When the anomaly comes to be recognized as more than another puzzle of science, the transition, or revolution, has begun.

The anomaly itself now comes to be more generally recognized as such by the profession. More and more attention is devoted to it by more and more of the field’s most eminent men. If it still continues to resist, as it usually does not, many of them may come to view its resolution as the subject matter of their discipline. …

Early attacks on the anomaly will have followed the paradigm rules closely. As time passes and scrutiny increases, more of the attacks will start to diverge from the existing paradigm. It is “through this proliferation of divergent articulations,” Kuhn argues, that “the rules of normal science become increasingly blurred.

“Though there still is a paradigm, few practitioners prove to be entirely agreed about what it is. Even formally standard solutions of solved problems are called into question.”

Einstein explained this transition, which is the structure of scientific revolutions, best. He said: “It was as if the ground had been pulled out from under one, with no firm foundation to be seen anywhere, upon which one could have built.”

All scientific crises begin with the blurring of a paradigm.

In this respect research during crisis very much resembles research during the pre-paradigm period, except that in the former the locus of difference is both smaller and more clearly defined. And all crises close in one of three ways. Sometimes normal science ultimately proves able to handle the crisis-provoking problem despite the despair of those who have seen it as the end of an existing paradigm. On other occasions the problem resists even apparently radical new approaches. Then scientists may conclude that no solution will be forthcoming in the present state of their field. The problem is labelled and set aside for a future generation with more developed tools. Or, finally, the case that will most concern us here, a crisis may end up with the emergence of a new candidate for paradigm and with the ensuing battle over its acceptance.

But this isn’t easy.

The transition from a paradigm in crisis to a new one from which a new tradition of normal science can emerge is far from a cumulative process, one achieved by an articulation or extension of the old paradigm. Rather it is a reconstruction of the field from new fundamentals, a reconstruction that changes some of the field’s most elementary theoretical generalizations as well as many of its paradigm methods and applications.

Who solves these problems? Do the men and women who have invested a large portion of their lives in a field or theory suddenly confront the evidence and change their minds? Sadly, no.

Almost always the men who achieve these fundamental inventions of a new paradigm have been either very young, or very new to the field whose paradigm they change. And perhaps that point need not have been made explicit, for obviously these are men who, being little committed by prior practice to the traditional rules of normal science, are particularly likely to see that those rules no longer define a playable game and to conceive another set that can replace them.

And

Therefore, when paradigms change, there are usually significant shifts in the criteria determining the legitimacy both of problems and of proposed solutions.

That observation returns us to the point from which this section began, for it provides our first explicit indication of why the choice between competing paradigms regularly raises questions that cannot be resolved by the criteria of normal science. To the extent, as significant as it is incomplete, that two scientific schools disagree about what is a problem and what is a solution, they will inevitably talk through each other when debating the relative merits of their respective paradigms. In the partially circular arguments that regularly result, each paradigm will be shown to satisfy more or less the criteria that it dictates for itself and to fall short of a few of those dictated by its opponent. There are other reasons, too, for the incompleteness of logical contact that consistently characterizes paradigm debates. For example, since no paradigm ever solves all the problems it defines and since no two paradigms leave all the same problems unsolved, paradigm debates always involve the question: Which problems is it more significant to have solved? Like the issue of competing standards, that question of values can be answered only in terms of criteria that lie outside of normal science altogether.

Many years ago Max Planck offered this insight: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”

If you’re interested in learning more about how paradigm shifts happen, read The Structure of Scientific Revolutions.

This is a World of Incentives

I thought Warren Buffett said a lot of interesting things in his recent interview with Charlie Rose.

Here are some of the bits that stood out for me.

Fairness:

BUFFETT: …I also think fairness is important and I think getting rid of promises that you can’t keep is important. I don’t think we should cut spending dramatically now. I don’t think that what I’m talking about on taxes solves the — the deficit gap at all. But I think fairness is important. I think having a sensible long-term plan is important to explain and I think having it be believable is terribly important because people don’t believe these out year things generally with Congress. They see too much of what’s happened.

The deficit as stimulus

BUFFETT: The deficit is our stimulus. You can — you can say a bridge someplace is part of that act, you can say cutting taxes is part of it as was the case in our stimulus act. But the stimulus is the government pouring more money out than it’s taking in. And we have a — a stimulus going on that’s 10 percent of GDP which we haven’t seen since World War II. So we have a huge stimulus going on. Nobody wants to call it a stimulus because that’s gotten to be a dirty word. But we have a big stimulus. So we do — in my view, whether we have a 10 percent of GDP deficit —

ROSE: Right.

BUFFETT: — which is a huge stimulus or a 12 percent or eight percent it doesn’t make much difference. I — I think that we pushed monetary policy to a level, we’ve pushed fiscal policy to the limit but fortunately the most important thing in terms of this country ever coming out of recessions has been the natural workings of capitalism and I think you’ve seen that for the last couple of years.

Following through

BUFFETT: What our leaders were saying to us then, the key players are saying we’ll do whatever it takes. And I believed it. I knew they had the power to do whatever it took and I believed they would do it.

Now, the problem about government now is that if they come out and get on the Sunday talks shows and say “I’ll do whatever it takes”, you know, people don’t believe them. And I mean, they — they — they’ve got to see action and — and here they see something like the raising the deficit limit used as a hostage for something of vital importance to the United States. And if you — you can use it as a hostage in terms of spending, you can use it as a hostage on funding on education or anything else. I mean, it isn’t limited about it; if you’ve got something that comes up like it.

Incentives

BUFFETT: But I just use it to illustrate that this is a world of incentives and we work on incentives in every way. If we work on education, in business, every other place. And what I try to think of the incentives to get somebody who comes up for re-election in a year to do something where the policy cycle goes out five years or ten years, how do you do it when the policy cycle exceeds the electoral cycle? You’ve got to make sure the electoral cycle is in the equation.

Vaclav Smil: Why America is not a New Rome

On television, modern histories of Rome lead one to think that Romans were rather well off, enjoyed a lot of free time, and commanded the largest and most powerful empire in the history of the world. That is, until the Americans came along.

America’s post-WWII strategic and military dominance, combined with affluence, inspired comparisons to ancient Rome at its most powerful. With trouble in Iraq, mounting debt, and a teetering economy, America no longer seems invulnerable. Comparisons have shifted towards the ineffectual, bloated empire of Rome’s later years, as it collapsed. Commenting on the fall in The History of the Decline and Fall of the Roman Empire, Edward Gibbon called it “the greatest perhaps, and most awful scene in the history of mankind.”

The vision of America as a new Rome resonates with us. There are obvious parallels and amusing similarities. In Why America is not a New Rome, Vaclav Smil, a scientist and lifelong student of Roman history, takes a closer look at the America-Rome analogy.

Smil excels as he methodically unearths the meaning of empire and the relevant time periods necessary for meaningful comparison. He ends up agreeing with Paul Schroeder’s view that an empire “means political control exercised by one organized political unit over another unit separate from it and alien to it.”

Depending on how you measure an empire, the Roman Empire wasn’t even the greatest. Consider the amount of land controlled as a proxy. Rome was the most extensive empire for only about 150 years, between 220 and 370 C.E. At its peak, Rome controlled about 3% of the world’s ice-free land. To put this in perspective, at the peak of its empire Britain controlled about 23% of the ice-free land. If you were to rank empires by land size throughout history, Rome would not even make the top 20.

After a close examination of modern US history, Smil concludes that if we are to “pay any attention to the root meaning of our words, hegemony makes a much better descriptor (of the United States) than empire.” Smil quotes Schroeder: “Those who speak of an American empire bringing freedom and democracy to the world are talking of dry rain and snowy blackness. In principle and by definition, empire is the negation of political freedom, liberation, and self-determination.”

Finally Smil offers a recent example of how the US is not cut-throat enough to be considered an empire. “Would an imperial power allow a prime minister of a country that it had recently conquered (and whose reconstruction and defence cost it thousands of lives and hundreds of billions of dollars) to repeatedly visit a neighboring nation (which had been the great power’s avowed enemy for more than a generation and which actually helped to kill some of its soldiers stationed in the neighboring land by providing lethal explosives) and to have a cordial tête-à-tête with its president, who openly calls for the destruction of the great power whose very existence he deems satanic?”

The answer, of course, is no. But that is precisely what the Americans have done by allowing Iraq to engage in diplomatic relations with Mahmoud Ahmadinejad, whom Smil refers to as “president of harshly anti-American fundamentalist Islamic theocracy in Tehran.” So depending on how you look at it, America is either too nice or too weak to be considered an empire. That America is not an empire, however, does not mean comparisons about various aspects of society and culture cannot be made.

On innovation

Taking a closer look at the two cultures proves exceedingly difficult as well. Much of what we know about Rome consists of unverifiable best guesses. Of the Romans, Smil says they “were neither impressive inventors nor the leading technical innovators of their time and that their legacy of systematic, incisive intellectual inquiry compares poorly with that of their Greek predecessors and their Chinese contemporaries.” American creativity, on the other hand, is a bright spot.

Intellectual inquiry

While we know the Romans undoubtedly possessed excellent organizational capability, they were, Smil says, “oddly incurious in most fields of intellectual inquiry, content to live largely off the Greek legacy.”

On energy

America’s high energy consumption makes “the two societies fundamentally incomparable. America is an unequaled example of a large modern economy that derives most of its power from the combustion of fossil fuels, supplemented by generation of primary electricity that converts it to useful tasks with high efficiency and that consumes energy at a very high per capita rate. In contrast, Rome’s energetic foundations, much like those of every ancient society, rested on the low-efficiency combustion of wood and the muscular exertions of people and animals, with overall per capita energy use being a small fraction of modern rates.” While the easy availability of energy enables opportunity in America, its scarcity constrained every aspect of Roman society.

Life expectancy

Romans were handcuffed by an extremely short life expectancy. While most Americans can expect to live to around 80, the average Roman would have been happy to reach 35. There were many causes, including poor medical care, war, poor sanitation, famine, and disease, but the greatest cost may have been the glacial pace of societal progress.

On city life

“For an ordinary Roman citizen,” Smil says, “the city was not a stunning cosmopolis of marble temples and showy fora; it was a squalid, fetid, unsanitary, noisy, and dangerous amalgam of people, animals, wastes, germs, diseases, and suffering.”

Economies

One of the differences between the American economy and the Roman one was structural. “In 2005 only about 1.5% of the US labor force was engaged in agriculture, about 11% in manufacturing, 9% in extraction, construction, and transportation, and the rest (78.5%) in a multitude of largely urban-based services. The Roman economy, like those of its ancient contemporaries, was fundamentally different. Classical writers left a record in which urban and military affairs take center place. … Given the low productivity of traditional agriculture, it was inevitable that most Romans spent their lives in fields and yards and along seashores, cultivating crops, threshing grains, pressing olives, producing wine, taking care of domestic animals, and harvesting marine foods.”

Conclusion

Smil does agree with the popular notion that US power peaked long ago and has since been in relative decline. “A closer critical look at US power reveals the country to be a weak, ineffective hegemony that has little in common with all those tiresome labels about ‘only remaining superpower’ and ‘global domination’ … while many of its indicators have increased in absolute terms, it has been in gradual relative retreat for more than two generations.”

Smil concludes that the most notable commonality between ancient Rome and modern America is the “vastly exaggerated perception of their respective powers, be they judged in terms of territorial extent, effective political influence, or convincing military superiority. If words are to retain their meaning then I am far from alone in concluding that America is not an empire, but I believe that even America’s undoubted global hegemony is of a very peculiar kind, much less effective and much more fragile than commonly thought.”

Polybius, the Greek historian of ancient Rome, should get the last word. At the outset of his great opus, The Histories, he remarked that those studying isolated histories are like a man who “after having looked at the dissevered limbs of an animal once alive and beautiful, fancies he has been as good as an eyewitness of the (living) creature itself in all its action and grace.”

If you’re interested in learning more, purchase Why America is not a New Rome.

Marshall McLuhan — The Man, The Mystery, The Life

Marshall McLuhan rocketed from unknown academic to rock star with the publication of Understanding Media: The Extensions of Man in 1964.

Understanding Media contained the simple prophecy that the electronic media of the twentieth century—at the time the telephone, radio, movies, and television, though the argument extends to newer technologies like the Internet, the Kindle, and the iPad—were breaking the long-standing hold of printed text over our thoughts and senses. Thanks to McLuhan’s ability to turn a phrase, Understanding Media is a work more talked about than read. At the core of the book is a phrase most have heard: “The medium is the message.”

“What’s been forgotten,” argues author Nicholas Carr in The Shallows, “is that McLuhan was not just acknowledging, and celebrating, the transformative power of new communication technology. He was also sounding a warning about the threat that power poses—and the risk of being oblivious to that threat.”

“The electric technology is within the gates,” McLuhan wrote, “and we are numb, deaf, blind and mute about its encounter with the Gutenberg technology, on and through which the American way of life was formed.”

“McLuhan understood,” Carr continues, “that whenever a new medium comes along, people naturally get caught up in the information—the content—it carries.” With each new medium of communication come the standard arguments. Enthusiasts praise the new content the technology allows, seeing the new methods as a way to reduce the friction of information sharing. This, they argue, is good for democracy. Skeptics, on the other hand, condemn the content and see any assault on existing mediums as an effort to dumb down our culture.

Both, however, miss McLuhan’s point that in the long run the content of a medium—be it television, radio, the internet, or even a Kindle—matters less than the medium itself in influencing us, whether we realize it or not.

Carr argues, “As our window onto the world, and onto ourselves, a popular medium molds what we see and how we see it—eventually, if we use it enough, it changes who we are, as individuals” and collectively as a society.

“The effects of technology do not occur at the level of opinions or concepts,” wrote McLuhan. Rather they “alter patterns of perception steadily and without any resistance.”

In Understanding Media, McLuhan proffered: “A new medium is never an addition to an old one, nor does it leave the old one in peace. It never ceases to oppress the older media until it finds new shapes and positions for them.” We see this today as newspapers transition to a digital world and the medium—the internet—remakes them to fit its own standards. Not only have newspapers moved from physical to virtual, but they are now hyperlinked, chunked, and embedded within noise. If he were alive (and healthy), McLuhan would argue these changes affect the way we understand the content.

McLuhan foresaw how all mass media would eventually be used for commercialization and consumerism:  “Once we have surrendered our senses and nervous systems to the private manipulation of those who would try to benefit by taking a lease on our eyes and ears and nerves, we don’t really have any rights left.”

* * *

Here is a clip from a 1968 CBC show featuring a popular debate between Norman Mailer and McLuhan.

Before you decide if McLuhan was a genius or crackpot you should know that McLuhan suffered from a few cerebral traumas—including multiple strokes. One stroke was so bad he was given his last rites. Only a few months before the debate with Mailer, a tumor the size of an apple was removed from his brain. McLuhan’s stardom—and some argue his mind—started fading shortly after this video.

* * *

Perhaps nothing demonstrates McLuhan’s brilliance, as well as his craziness, like this 1969 interview with Playboy. In the excerpt below, McLuhan describes his vision of cloud computing; he was well ahead of his time, considering we’re talking about 1969. Carr sums up the interview brilliantly: “As is typical of McLuhan, there’s brilliance here, but there’s also a whole lot of bad craziness. At least I hope it’s bad craziness.”

MCLUHAN: Automation and cybernation can play an essential role in smoothing the transition to the new society.

PLAYBOY: How?

MCLUHAN: The computer can be used to direct a network of global thermostats to pattern life in ways that will optimize human awareness. Already, it’s technologically feasible to employ the computer to program societies in beneficial ways.

PLAYBOY: How do you program an entire society – beneficially or otherwise?

MCLUHAN: There’s nothing at all difficult about putting computers in the position where they will be able to conduct carefully orchestrated programing of the sensory life of whole populations. I know it sounds rather science-fictional, but if you understood cybernetics you’d realize we could do it today. The computer could program the media to determine the given messages a people should hear in terms of their over-all needs, creating a total media experience absorbed and patterned by all the senses. We could program five hours less of TV in Italy to promote the reading of newspapers during an election, or lay on an additional 25 hours of TV in Venezuela to cool down the tribal temperature raised by radio the preceding month. By such orchestrated interplay of all media, whole cultures could now be programed in order to improve and stabilize their emotional climate, just as we are beginning to learn how to maintain equilibrium among the world’s competing economies.

PLAYBOY: How does such environmental programing, however enlightened in intent, differ from Pavlovian brainwashing?

MCLUHAN: Your question reflects the usual panic of people confronted with unexplored technologies. I’m not saying such panic isn’t justified, or that such environmental programing couldn’t be brainwashing, or far worse – merely that such reactions are useless and distracting. Though I think the programing of societies could actually be conducted quite constructively and humanistically, I don’t want to be in the position of a Hiroshima physicist extolling the potential of nuclear energy in the first days of August 1945. But an understanding of media’s effects constitutes a civil defense against media fallout.

The alarm of so many people, however, at the prospect of corporate programing’s creation of a complete service environment on this planet is rather like fearing that a municipal lighting system will deprive the individual of the right to adjust each light to his own favorite level of intensity. Computer technology can – and doubtless will – program entire environments to fulfill the social needs and sensory preferences of communities and nations. The content of that programing, however, depends on the nature of future societies – but that is in our own hands.

PLAYBOY: Is it really in our hands – or, by seeming to advocate the use of computers to manipulate the future of entire cultures, aren’t you actually encouraging man to abdicate control over his destiny?

MCLUHAN: First of all – and I’m sorry to have to repeat this disclaimer – I’m not advocating anything; I’m merely probing and predicting trends. Even if I opposed them or thought them disastrous, I couldn’t stop them, so why waste my time lamenting? As Carlyle said of author Margaret Fuller after she remarked, “I accept the Universe”: “She’d better.” I see no possibility of a worldwide Luddite rebellion that will smash all machinery to bits, so we might as well sit back and see what is happening and what will happen to us in a cybernetic world. Resenting a new technology will not halt its progress.

The point to remember here is that whenever we use or perceive any technological extension of ourselves, we necessarily embrace it. Whenever we watch a TV screen or read a book, we are absorbing these extensions of ourselves into our individual system and experiencing an automatic “closure” or displacement of perception; we can’t escape this perpetual embrace of our daily technology unless we escape the technology itself and flee to a hermit’s cave. By consistently embracing all these technologies, we inevitably relate ourselves to them as servomechanisms. Thus, in order to make use of them at all, we must serve them as we do gods. The Eskimo is a servomechanism of his kayak, the cowboy of his horse, the businessman of his clock, the cyberneticist – and soon the entire world – of his computer. In other words, to the spoils belongs the victor …

The machine world reciprocates man’s devotion by rewarding him with goods and services and bounty. Man’s relationship with his machinery is thus inherently symbiotic. This has always been the case; it’s only in the electric age that man has an opportunity to recognize this marriage to his own technology. Electric technology is a qualitative extension of this age-old man-machine relationship; 20th Century man’s relationship to the computer is not by nature very different from prehistoric man’s relationship to his boat or to his wheel – with the important difference that all previous technologies or extensions of man were partial and fragmentary, whereas the electric is total and inclusive. Now man is beginning to wear his brain outside his skull and his nerves outside his skin; new technology breeds new man. A recent cartoon portrayed a little boy telling his nonplused mother: “I’m going to be a computer when I grow up.” Humor is often prophecy.

Understanding Media: The Extensions of Man is worth the read.

If you’re interested in learning more about McLuhan, check out the McLuhan project (http://www.abc.net.au/rn/mcluhan/). Read The Shallows: What the Internet is Doing to Our Brains to find out how the Internet is changing us.

Atul Gawande: Error in Medicine: What Have We Learned?

Over the past decade, it has become increasingly apparent that error in medicine is neither rare nor intractable. Traditionally, medicine has downplayed error as a negligible factor in complications from medical intervention. But, as data on the magnitude of error accumulate—and as the public learns more about them—medical leaders are taking the issue seriously. In particular, the recent publication of the Institute of Medicine report has resulted in an enormous increase in attention from the public, the government, and medical leadership.

Several books have been defining markers in this journey and highlight the issues that have emerged. Of particular note is Human Error in Medicine, edited by Marilyn Sue Bogner (2), published in 1994 (unfortunately, currently out of print) and written for those interested in error in medicine. Many of the thought leaders in the medical error field contributed chapters, and the contributions regarding human factors are especially strong. The book is a concise and clear introduction to the new paradigm of systems thinking in medical error.

Source

Dr. Atul Gawande is the New York Times bestselling author of Better: A Surgeon’s Notes on Performance, Complications: A Surgeon’s Notes on an Imperfect Science, and The Checklist Manifesto: How to Get Things Right.