[Adaptive Thinking: Rationality in the Real World, Gerd Gigerenzer, Oxford University Press, 2000]
In the introduction to Part IV, Professor Gigerenzer tells us:
The “discovery” of cognitive illusions was not the first assault on human rationality. Sigmund Freud’s attack is probably the best known: According to him, the unconscious wishes and desires of the human id are a steady source of intrapsychical conflict that manifests itself in all kinds of irrational fears, beliefs, and behavior. But the cognitive-illusion assault is stronger than the psychoanalytic one. It does not need to invoke a conflict between rational judgment and unconscious wishes and desires to explain humans’ apparent irrationality: Judgment is itself fundamentally deficient. Homo Sapiens appears to be a misnomer. During the last few decades, cognitive illusions have become fodder for classroom demonstrations and textbooks. Isn’t it fun to show how dumb everyone else is, and after all, aren’t they? [page 237]
Alas, there are political implications even to the results of these dumb tests which show how dumb people are:
Given the message that ordinary citizens are unable to estimate uncertainties and risks, one might conclude that a government would be well advised to keep these nitwits out of important decisions regarding new technologies and environmental risks. [page 237]
Given the information that always comes out of archives after the ‘criminals’ are dead, it’s doubtful that even American leaders have been so concerned with the opinions of American citizens as formal American political processes might imply. And there are some prominent scientists, in stem cell research and elsewhere, who paint their opponents as being uniformly dumb people who believe men and dinosaurs lived at the same time. I’m no historian, but I suspect this to be a time-honored way to neutralize opponents and get one’s own way. This is a very important issue, and one to be dealt with in other places.
Chapter 12, How to Make Cognitive Illusions Disappear, is an update of a talk given by Gigerenzer at Stanford University. It’s a presentation, sometimes from a slightly different angle, of material covered in earlier chapters of this book. For example, he argues, based on good evidence and reasoning, that many of the ‘errors’ human beings make in tests of decision-making under conditions of uncertainty are actually errors by the test-makers. There is good reason to believe that human beings are pretty good at dealing with uncertainty when the information, experiential or given in a staged test, is presented on a basis more consistent with the information we actually find in the real world. One example would be the presentation of statistics on a frequency basis and not an abstract probabilistic basis (e.g., 1 out of 10 instead of 10%). There is also reason to believe that human beings don’t apply probabilistic reasoning in judging single events, and he points out that most of the highly regarded theorists of statistics and probability also don’t believe that probabilistic reasoning is valid in judging single events. Only a small, but influential, group of theorists in cognitive psychology, economics, sociology, and maybe some other fields of social science assume we should reason in that abstract way.
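The frequency-format point can be made concrete with a small sketch. The disease-screening numbers below are my own illustrative values, not figures from the book; the point is that stating the same Bayesian problem in natural frequencies (8 sick people among 103 who test positive) makes the answer nearly self-evident, while the abstract probability format hides it.

```python
# Bayesian updating stated two ways: abstract probabilities vs. natural
# frequencies. The disease/test numbers are illustrative, not from the book.

# Probability format: P(disease) = 1%, sensitivity = 80%, false-positive
# rate = 9.6%. What is P(disease | positive test)?
p_d = 0.01
p_pos_given_d = 0.80
p_pos_given_not_d = 0.096

posterior = (p_pos_given_d * p_d) / (
    p_pos_given_d * p_d + p_pos_given_not_d * (1 - p_d)
)

# Frequency format: out of 1000 people, 10 have the disease; 8 of them
# test positive, as do about 95 of the 990 healthy people.
sick_and_positive = 8
healthy_and_positive = 95
freq_answer = sick_and_positive / (sick_and_positive + healthy_and_positive)

print(round(posterior, 3))    # 0.078
print(round(freq_answer, 3))  # 0.078, the same answer, far easier to see
```

Both formats give a roughly 8% chance of disease given a positive test; in the frequency version the calculation is a single visible ratio, which is Gigerenzer’s point about how information should be presented.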
Gigerenzer labels much of the statistical work which appears in the papers resulting from these cognitive-illusion tests, and in many other papers in psychology and medical research and other fields, as statistical rituals, the result of training students in behavior much like obsessive hand-washing. I learned that firsthand once. I helped a friend on a small consulting assignment where she had to analyze surveys from training sessions put on around the U.S. by staff from a major medical school. The participants in those sessions were a non-homogeneous lot, ranging from young technicians to experienced doctors and nurses. Guess what? Some were bored and some felt taxed by the effort to keep up. The trainers wanted to discover what was going on by having their consultant run some specific tests, maybe t-tests — it was years ago. She asked me to help her run those tests, and I returned the advice that they instead sit down and analyze the survey results along with the bios of the participants, after maybe doing a few simpler statistical calculations (such as percentiles summarizing the scale of boredom — the students had given answers on some scale, maybe 1 to 5). The medical school staff members weren’t impressed by my advice. Higher-level thinking involves some sort of test of a null hypothesis, and they weren’t about to let any mere math major suggest that such tests are useful, at best, only when some very special circumstances are met. One of those circumstances is a homogeneous population, or one stratified with a significant number of entities in each cell. Since then, some of my readings, including this book, Adaptive Thinking, have convinced me that those sorts of tests, involving null hypotheses and the like, are rarely useful, and never in any situation I’ll ever be in.
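A minimal sketch of the simpler analysis I had in mind, with made-up subgroup names and Likert-scale ratings: summarize each self-identified subgroup with its median and quartiles rather than forcing a null-hypothesis test onto a non-homogeneous population.

```python
# Summarize boredom ratings (1-5 scale) within each subgroup instead of
# running a t-test across a mixed population. All data here is invented.
from statistics import median, quantiles

responses = {
    "technicians": [2, 3, 4, 4, 5, 5, 4, 3],
    "nurses":      [2, 2, 3, 3, 4, 2, 3],
    "doctors":     [1, 2, 1, 2, 3, 2],
}

for group, scores in responses.items():
    q1, q2, q3 = quantiles(scores, n=4)  # quartile cut points
    print(f"{group:12s} n={len(scores):2d} "
          f"median={q2:.1f} IQR=({q1:.1f}, {q3:.1f})")
```

With a table like this in front of them, the trainers could have read off directly that (in this invented data) the technicians were bored and the doctors felt taxed, which is the question they actually wanted answered.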
One question raised in this chapter is very interesting: Is the world stable enough for statistics to be generally useful in most of our (presumably single-event) decisions?
The answer is probably, “Yes,” but the underwriter working on catastrophe coverage for earthquake damage in Los Angeles probably reads a lot of reports and articles by geologists working in that area. Those working on auto collision rates probably keep up on the work of designers in Detroit and Tokyo as well as the data collected from the cars already designed and driven. This is to say that decision-makers should first become experts in finding and understanding useful information. It’s hard work to understand and quantify sources of risk or uncertainty.
Chapter 13, The Superego, the Ego, and the Id in Statistical Reasoning, deals with the conquest of psychology by statistics, quoting Kendall, himself a statistician:
[S]tatisticians “have already overrun every branch of science with a rapidity of conquest rivalled only by Attila, Mohammed, and the Colorado beetle”. (page 267)
Gigerenzer points out that there is one peculiarity about the appearance of statistics in psychology, by which he clearly means to include all work on human thought even if done by sociologists or economists or artificial intelligence researchers.
In psychology and in other social sciences, probability and statistics were typically not used to revise the understanding of our subject matter from a deterministic to some probabilistic view (as in physics, genetics, or evolutionary biology) but rather to mechanize the experimenters’ inferences — in particular, their inferences from data to hypothesis.
At this point, I think it worthwhile to note that the philosopher Stephen Toulmin once said that biologists speaking of randomness are talking about the complex events which occur when two independent systems interact. In fact, there is an easy way for a physicist to create a series of movements which are random in the sense of being unpredictable. Simply connect the bobs of two pendulums with different periods. You end up with motions which are fully determined but unpredictable in terms of current understandings of mathematics.
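The pendulum rig needs hardware, but the same point, fully determined yet practically unpredictable, can be shown in a few lines with the logistic map, a standard chaotic system I am substituting here for the pendulums. Two starting points differing by one part in a billion soon yield unrelated trajectories:

```python
# Deterministic yet unpredictable: iterate the logistic map x -> 4x(1-x),
# a standard chaotic system (substituted for the coupled pendulums),
# from two starting points that differ by one part in a billion.
def logistic_orbit(x, steps):
    orbit = []
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)  # the update rule is completely deterministic
        orbit.append(x)
    return orbit

a = logistic_orbit(0.200000000, 60)
b = logistic_orbit(0.200000001, 60)

diff_early = abs(a[5] - b[5])                                # still tiny
diff_late = max(abs(x - y) for x, y in zip(a[40:], b[40:]))  # of order 1

print(diff_early)  # well under 1e-5: the orbits still agree
print(diff_late)   # greater than 0.1: predictive power is gone
```

Nothing here is random in the naive sense; every digit is fixed by the rule and the starting point, yet no feasible measurement of the initial condition lets you forecast the later values.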
Understanding the reason for a system having ‘random’ aspects may well be even more important than quantifying that randomness. In evolutionary terms, you might have the genes of a living creature as one system and the environment as the other system. The results are unpredictable and form a stream of facts, a natural history. In psychological terms, you have the human mind and its environment. Again the results are unpredictable and form a stream of facts as a human being moves through his life, perceiving and thinking and responding — living. In both cases, you have systems which operate independently in many important ways but which overlap because of their evolutionary history. In both cases, you will probably get better results by untangling the various strands rather than coming up with some arbitrary measures of that mess.
Recently, there’s been a revolution in the understanding of random numbers that began with some speculations by the great Russian mathematician Kolmogorov and the American high-school student Chaitin (circa 1965 in both cases). As Mark Kac, the highly regarded measure theorist from Cornell, said in the early 1970s (I quote from memory):
We now know what a random number is — a fact.
So far as the number line goes, Chaitin completed the proof around 1990 and published it as an undergraduate textbook in computer science. What did he prove?
Almost every number is random — in a measure-theoretic sense. [My wording.]
Now I’ll quote from a 30-year-old memory. The professor in my introductory probability theory course told us that all of probability theory can be enfolded in a fully deterministic measure theory with no loss of content. I’ll not explain that further, leaving the interested reader the task of looking up Dr. Chaitin on the web. He and some of his early publishers have made some very good writings available for free download.
If you wish to come to some understanding of what’s going on, just ask yourself what the source of random numbers could be under the naive understanding of probabilities and statistics. Oddly enough, that naive understanding is nurtured by gambling examples, though it’s now plausible to build a computer that could accurately predict a roulette wheel’s results with a bit of data from the past results of that wheel. Add a visual system and you have a machine that could track the movement of every card with perfect accuracy no matter how well the deck is shuffled. Sure, you could come up with tricks to bother the computer and the mechanical visual system, but the principle is clear.
If we assume — plausibly — that most scientists and educated citizens believe the world is a place of some randomness in that naive sense, or even much randomness, then those scientists and educated citizens are non-scientific, superstitious in the sense that modern thinkers define the word. Believers in conventional models of randomness might wave their hands or point to some sort of quantum magic if they were to try to explain where this randomness comes from. These explanations are less interesting than the Greek tales of the Fates weaving our futures.
And yet that unpredictability does exist, sometimes in a very nasty form when it’s human life or human health on the line. Good statistical and probabilistic methods are very useful and not superstitious in themselves. You could say, only partly tongue-in-cheek, that probability theory is measure theory motivated by gambling examples. We deal with unpredictability by using good statistical techniques after we use our reasoning skills to properly understand the problem.
Gigerenzer tells us that it has often been hard in recent decades to publish in some of the major journals of psychology unless the paper presents the proper rituals of null-hypothesis testing. He then tells us that Sir Ronald Fisher, developer of the null-hypothesis methods, felt that “significance testing was the most primitive type of argument in a hierarchy of possible statistical analyses,” and he goes on to state the following points of this chapter:
- What has become institutionalized as inferential statistics in psychology is not Fisherian statistics. It is an incoherent mishmash of some of Fisher’s ideas on the one hand and some of the ideas of Neyman and E.S. Pearson on the other…
- The institutionalized hybrid carries the message that statistics is statistics is statistics, that is, that statistics is a single integrated structure that speaks with a single authoritative voice… [page 270]
I’ll leave it to the reader to follow up, if so interested, on the details of this problem. It’s important to realize that this idea that there is one statistical procedure to judge hypotheses is equivalent to the assumption that human knowledge will always fit nicely into this one shape. It turns out the gurus didn’t delude themselves on this issue. In fact, Gigerenzer goes on to acknowledge that Fisher in one camp, and Neyman and Pearson in the other, went on to develop large toolboxes, an approach which at least allows a multiplicity of shapes for human knowledge. It was the disciples who developed and mandated a single hybrid method for doing statistical analyses of experiments.
The last part of the chapter creates a tongue-in-cheek Freudian analogy of the “emotional tensions associated with the hybrid logic”. It’s amusing and might well make it easier to understand the relationships and tensions of these statistical conflicts which apparently came frighteningly close to defining what it means to do science within several fields, including psychology and branches of economics and also certain laboratory branches of biology.
Chapter 14, “Surrogates for Theories”, deals with the ways in which academics and others can pretend to think when they’re just playing various linguistic games — perhaps even believing they are thinking.
There is one obvious reason why surrogates for theories come to mind more quickly than real theories: demonstrating how a one-word explanation, a re-description, a dichotomy, or an exercise in data fitting “explains” a phenomenon demands less mental strain than developing a bold and precise theory. It takes imagination to conceive the idea that heat is caused by motion, but only little mental effort to propose that heat is caused by specific particles that have the propensity to be hot. [page 294]
If we are rational and moral creatures moving through a factual world, then it becomes incumbent upon us to perceive clearly and to think hard to understand this world and to find any useful or interesting or beautiful patterns in these facts. This is different from applying robotic statistics to data which was generated by some sort of magical randomness. In this context: ‘random’ is the ultimate one-word explanation.
Gigerenzer also provides a profound insight into a problem caused by the fragmentation of human knowledge.
Intellectual inbreeding can block the flow of positive metaphors from one discipline to another. Neither disciplines nor subdisciplines are natural categories. Interdisciplinary exchange has fueled the development of some of the most influential new metaphors and theories in the sciences… Territorial science, in contrast, blocks the flow of metaphors and the development of new theories. Distrust and disinterest in anything outside one’s subdiscipline supports surrogates for theory. [page 295]
I would suggest that this interdisciplinary borrowing should reach across all human intellectual disciplines. That recommendation is actually unnecessary for the more serious and more creative scientists, who have borrowed freely from literature and philosophy and theology as well as from other sciences.