
i. Introduction/Synopsis

The adage ‘you are what you eat’ is no doubt literally true, but when it comes to getting at the heart of what we are, it is more accurate to say ‘you are what you think’; for our identity emerges out of the life of the mind, and our decisions and actions (including what we eat) are determined by our thoughts. An exploration of how we think therefore cuts to the core of what we are, and offers a clear path to a better understanding of ourselves and why we behave as we do. In addition, while many of us are fairly happy with how our minds work, few of us would claim there is no room for improvement; and therefore, an exploration of how we think also promises to point the way towards fruitful self-improvement (which stands to help us in both our personal and professional lives). While thinking about thinking was traditionally a speculative practice (embarked upon by philosophers and economists), it has recently received a more empirical treatment through the disciplines of psychology and neuroscience. It is from this latter angle that the Nobel Prize-winning psychologist Daniel Kahneman approaches the subject in his new book Thinking, Fast and Slow.

As the title suggests, Kahneman breaks thinking down into two modes or systems. Slow thinking is the system that we normally think of as thought in the strictest sense. It is deliberate and conscious, and we naturally feel as though we are in control of it (Kahneman refers to it as System 2). System 2 is in play when we actively consider what we want to have for dinner tonight, or when we choose what stocks to buy, or when we perform a mathematical calculation. System 1, by contrast, is automatic and unconscious, and hums along continuously in the background. It constantly surveys the environment, and processes the incoming stimuli with remarkable speed.

System 1 is informed by natural drives and instincts but is also capable of learning, which it does by way of association (that is, connecting up novel stimuli with known stimuli according to shared characteristics, contiguity in time and place, or causality). The system is designed to give us an impression of our environment as quickly as possible, thus allowing us to respond to it immediately, which is especially important in times of danger. In order to do so, System 1 relies on general rules and guidelines (called heuristics). These heuristics are primarily geared to help us in the moment and are tilted towards protecting us from danger, and in this respect they are mostly very useful. Still, mistakes can be made; and since the system is adapted to the environment in which we evolved, which is quite different from our current one, this mismatch adds to its errors.

Over and above this, the impressions that System 1 forms are also fed up to System 2. Indeed, whenever System 1 senses something out of the ordinary or dangerous, System 2 is automatically mobilized to help out with the situation. And even when System 2 is not mobilized by danger specifically, it is constantly being fed suggestions by System 1. Now, while the impressions of System 1 are fairly effective in protecting us from moment to moment, they are much less effective in long-term planning; and therefore, they are much more problematic here. Of course, System 2 is capable of overriding the impressions of System 1, and of avoiding these errors. However, as Kahneman points out, System 2 is often completely unaware that it is being influenced (and misled) by System 1, and is therefore not naturally well equipped to catch the errors. Much of the book is spent exploring the activities and biases of System 1, in order to make us more aware of how this system works and how it influences (and often misleads) System 2.

This is only half the battle, though, for while System 2 may be naturally poorly equipped to catch the errors of System 1, it is also often poorly equipped to correct these errors. Indeed, Kahneman argues that System 2 is simply not a paragon of rationality (as is often assumed in economics), and could stand to use a good deal of help in this regard. The most glaring deficiency of System 2, according to Kahneman, is that it is naturally very poor with probabilities and statistics. Fortunately, System 2 can be trained to improve here, and this is another major concern of the book.


What follows is an executive summary of Daniel Kahneman’s Thinking, Fast and Slow.

PART I: AN INTRODUCTION TO THINKING, WITH A FOCUS ON SYSTEM 1

Section 1: An Introduction to Thought, Fast and Slow

1. Thought, Fast and Slow

We think of ourselves as the executive in control of our minds and bodies: the decision-maker with distinct beliefs who weighs alternative options, deliberates, and comes to choices based on our better judgment, choices which ultimately govern our behavior. This kind of thinking is what Kahneman refers to as System 2: “when we think of ourselves, we identify with System 2, the conscious, reasoning self that has beliefs, makes choices and decides what to think about and what to do” (loc. 402). According to Kahneman, though, System 2 is actually much more influenced than we tend to think by a second mode of thought that the author refers to as System 1 (loc. 402) (the terms System 1 and System 2 were originated by the psychologists Keith Stanovich and Richard West [loc. 396]).

Unlike System 2, System 1 is automatic and unconscious, and therefore often goes unnoticed. As Kahneman explains, “System 1 operates automatically and quickly, with little or no effort and no sense of voluntary control” (loc. 399). System 1 is constantly monitoring the outside environment (as well as the inner mind) and forming quick and dirty impressions from this information (loc. 1640). Most crucially, System 1 is probing for information that is particularly important for the biological imperatives of survival and reproduction—meaning it is looking out for opportunities, and also (and especially) dangers. As Kahneman explains, “System 1 has been shaped by evolution to provide a continuous assessment of the main problems that an organism must solve to survive: How are things going? Is there a threat or a major opportunity? Should I approach or avoid?… situations are constantly evaluated as good or bad, requiring escape or permitting approach” (loc. 1651).

If nothing of note is detected, System 1 remains calm, and at relative ease, and goes on with business as usual. However, should something of importance come up, System 1 becomes strained and mobilizes System 2 to help out with the situation (loc. 474): “when System 1 runs into difficulty, it calls on System 2 to support more detailed and specific processing that may solve the problem of the moment… You can… feel a surge of conscious attention whenever you are surprised. System 2 is activated when an event is detected that violates the model of the world that System 1 maintains” (loc. 474).

For the average person living in the modern world, true emergencies do not come up very often, and most situations do not call for an immediate reaction. However, in the environment in which we evolved (and in which System 1 evolved) this was far from the case, and we have retained this system from that time. As Kahneman explains, when it comes to the survival questions mentioned above, “the questions are perhaps less urgent for a human in a city environment than for a gazelle on the savannah, but we have inherited the neural mechanisms that evolved to provide ongoing assessments of threat level, and they have not been turned off” (loc. 1650). Therefore, while we may not need the quick and dirty impressions that System 1 provides as much as we did in the environment in which we evolved, our brains continue to churn out these impressions just as frequently as ever.

Now, only so much information is available at any given moment, and yet System 1 is expected to continually come up with as accurate an impression as possible as quickly as possible. And so, in order to do this, System 1 must necessarily take short cuts and make educated guesses. These short cuts and educated guesses may be mistaken occasionally, and therefore, it is best for System 1 to err on the side of caution whenever possible. We will now take a closer look at the inner workings of System 1.

Section 2: System 1 Under the Microscope

2. Learning by Association and the Priming Effect

a. Association

It was mentioned above that System 1 calls upon System 2 when the model of the world that it maintains is violated in some way. The model of the world that System 1 maintains is formed out of innate faculties, and its content is also partially innate. For instance, “we are born prepared to perceive the world around us, recognize objects, orient attention, avoid losses, and fear spiders” (loc. 417).

However, the model of the world that we maintain is mostly created and continually upgraded by way of registering associations between phenomena and events that are connected in one of a few different ways—either by “resemblance, contiguity in time and place, [or] causality” (loc. 932). For instance, a lime may become associated with the color green, or the general idea of fruit; a rat may become associated with sewers, or otherwise dank places; or a cold may become associated with a virus (loc. 937).

“As these links are formed and strengthened,” Kahneman explains, “the pattern of associated ideas comes to represent the structure of events in your life, and it determines your interpretation of the present as well as your expectations of the future” (loc. 1312). In other words, the ideas in your mind become so many nodes in an interconnected network, the whole of which makes up the model that System 1 uses to represent and understand the world.

b. The Priming Effect

In action, whenever an idea is triggered in the brain (either by direct or indirect experience), the ideas that are associated with it are also ‘primed’ within the brain (a phenomenon known as the ‘priming effect’ [loc. 950]). The priming effect helps System 1 form a quick impression of any given situation, and to determine whether it calls for added attention or not.

Now, the priming effect is an unconscious process; and, as such, mostly goes undetected, but its effects can be seen in experiments. For example, “if you have recently seen or heard the word EAT, you are temporarily more likely to complete the word fragment SO_P as SOUP than as SOAP. The opposite would happen, of course, if you had just seen WASH… EAT primes the idea of SOUP, and… WASH primes SOAP” (loc. 950).

Interestingly, the priming effect can also influence behavior. For instance, in one experiment student subjects were asked to form four-word sentences from a group of five words, wherein the five words were either neutral (such as ‘finds he it yellow instantly’ [loc. 960]) or loaded with terms associated with old age (such as ‘Florida, forgetful, bald, gray, wrinkle’ [loc. 960]). The subjects were then asked to walk down a hall to another room (a trip that was clandestinely timed by the researchers) (loc. 964). Lo and behold, the subjects who formed sentences using words related to old age walked more slowly than those who did not! (loc. 964). As Kahneman explains, “the ‘Florida effect’ involves two stages of priming. First, the set of words primes thoughts of old age, though the word old is never mentioned; second, these thoughts prime a behavior, walking slowly, which is associated with old age. All this happens without any awareness. When they were questioned afterward, none of the students reported noticing that the words had had a common theme, and they all insisted that nothing they did after the first experiment could have been influenced by the words they had encountered. The idea of old age had not come into their conscious awareness, but their actions changed nevertheless” (loc. 970).

In a somewhat spookier example, researchers staged an experiment in a workplace kitchen wherein employees would routinely make themselves tea or coffee in exchange for a fee that they would drop in an ‘honesty box’ (loc. 1042). In the first stage of the experiment, the researchers planted a picture of a flowerpot in the room, while in the second stage of the experiment they replaced it with a picture of a pair of eyes (loc. 1042). The two pictures were then alternated back and forth each week, for a period of ten weeks (loc. 1044). Finally, the researchers compared how much money was left in the honesty box across the two conditions of the experiment. Here’s Kahneman to explain the results: “no one commented on the new decorations, but the contributions to the honesty box changed significantly… On average, the users of the kitchen contributed almost three times as much in ‘eye weeks’ as they did in ‘flower weeks.’ Evidently, a purely symbolic reminder of being watched prodded people into improved behavior. As we expect at this point, the effect occurs without any awareness” (loc. 1053). This is a very eye-opening example of how System 1 can influence System 2, and also hints at the frightening ways that System 1 might be exploited.


3. Context and Causality

a. Context

We have now seen how System 1 receives information and holds it up against its model of the world, both to prepare us for what to expect and to mobilize System 2 should it come across anything peculiar or out of the ordinary (especially if this something represents an opportunity, and all the more so if it represents a danger). But System 1 also makes an active effort to assign meaning to events and phenomena, and to make judgments about the world (including the people in it). Again, though, because System 1 is tasked with doing this very quickly, it must rely on short-cuts and educated guesses.

To begin with, System 1 is constantly striving to come up with a coherent story out of the limited and sometimes fragmentary information that it receives. To help it in this task, System 1 refers to the context in which information is presented. However, sometimes this context is missing. In these cases System 1 simply assumes the most likely context and takes a guess. For instance, consider the following sentence, and permit yourself a mental image: ‘Ann approached the bank.’ If you are like most city dwellers, you conjured up an image of a lady walking towards a building. But as Kahneman points out, “the sentence is ambiguous. If an earlier sentence had been ‘They were floating gently down the river,’ you would have imagined an altogether different scene. When you have just been thinking of a river, the word bank is not associated with money. In the absence of an explicit context, System 1 generated a likely context on its own. We know that it is System 1 because you were not aware of the possibility of another interpretation” (loc. 1462).

b. Causality

Another strategy that System 1 uses in order to make sense of the information that it receives is to look for a cause (and especially an intentional cause) that explains the events that are unfolding before it.

System 1’s proclivity to come up with a causal story to explain events comes up time and again in the book (and we will return to it often below). This proclivity of ours is not something that is learned, but is rather innate. This has been shown in experiments involving infants as young as 6 months old. In these experiments, infants are exposed to common cause-effect scenarios (such as a square running into a circle). When these cause-effect scenarios are manipulated to upset the normal causal chain (such as the circle being unaffected by the square), the infants display added attention, indicating that they are expecting something else and are surprised by the result (loc. 1406). As Kahneman explains, “we are evidently ready from birth to have impressions of causality, which do not depend on reasoning about patterns of causation. They are products of System 1” (loc. 1409).

System 1 is also primed to look for intentional causes behind phenomena, and naturally distinguishes between mechanical causes and agent-driven ones (loc. 1414). For example, when shown scenarios of abstract objects behaving in ways reminiscent of human actors, people will naturally interpret the scenario in terms of human intentions and emotions (loc. 1410). As Kahneman explains, “the perception of intention and emotion is irresistible; only people afflicted by autism do not experience it” (loc. 1414). And again, this predisposition is present from a very young age. For example, “infants under one year old identify bullies and victims, and expect a pursuer to follow the most direct path in attempting to catch whatever it is chasing” (loc. 1416).

While System 1 is primed to search out and identify both mechanical and intentional causes, it is especially sensitive and alert when it comes to the intentional variety. For example, consider the following scenario: “After spending a day exploring beautiful sights in the crowded streets of New York, Jane discovered that her wallet was missing” (loc. 1390). You should not be surprised if the first thing that jumped into your mind after reading this story was something like ‘pickpocket’. In one experiment, “when people who had read this brief story (along with many others) were given a surprise recall test, the word pickpocket was more strongly associated with the story than the word sights, even though the latter was actually in the sentence while the former was not. The rules of associative coherence tell us what happened. The event of a lost wallet could evoke many different causes: the wallet slipped out of a pocket, was left in the restaurant, etc. However, when the ideas of lost wallet, New York, and crowds are juxtaposed, they jointly evoke the explanation that a pickpocket caused the loss” (loc. 1395).

The reason why this causal radar of ours has evolved is fairly easy to see. To begin with, cause and effect inheres in nature; as such, it is a good general strategy to assume that a specific cause underlies any given event, and also to seek out and identify it—in order that we may better prepare for and react to it (loc. 2079) (as we shall soon see, though, many phenomena are better explained in terms of randomness, or blind luck; and therefore, this assumption can sometimes lead us into errors).

Presuming intentional causation is also a good bet because we are constantly dealing with other agents. What’s more, some of our deepest threats come from these intentional agents (nowadays mostly other humans, but in our evolutionary past animal predators as well) (loc. 2079). So, for example, assuming that a rustling in the bushes (or in an alley) is an intentional agent and not just the wind is more likely to preserve life and limb than the alternative. All of this goes to show that assuming mechanical and/or intentional causation will most often be correct, and when it is not the error will be made on the side of caution. This is very much in keeping with the function and M.O. of System 1.

4. Judging and Evaluating on Limited Evidence: WYSIATI and Substitution

a. WYSIATI

In addition to making sense out of the events unfolding before it, System 1 is also involved in judging and evaluating the phenomena it experiences. Here again, System 1 must resort to shortcuts and educated guesses to render its impressions as quickly as possible. Essentially, System 1 jumps to conclusions based on the limited information it has access to, which (given that it lives very much in the moment) is confined to what is directly in front of it and/or what most readily comes to mind (loc. 1466).

System 1’s tendency to consider only the information that is directly at hand is so pervasive—and Kahneman refers to it so often—that the author uses an (admittedly cumbersome) abbreviation to represent it: WYSIATI, for ‘what you see is all there is’ (loc. 1588). For an example of WYSIATI, consider the following: “Will Mindik be a good leader? She is intelligent and strong…” (loc. 1575). As Kahneman points out, your first impression here is probably to answer the question in the affirmative (loc. 1575). After all, intelligence and strength both seem to be important qualities to have in a good leader, and “this is the best story that can be constructed from two adjectives” (loc. 1580). In other words, given the information we have, this is the most accurate conclusion we can come up with.

But what we have not done here is to analyze the question in any great depth. That is, we have not bothered to ask “‘what would I need to know before I formed an opinion about the quality of someone’s leadership?’” (loc. 1577). This is simply not System 1’s department.

Now, it may very well be valuable to have access to a quick and dirty first impression based on the information that is available in any given situation. But we can also easily see how these quick and dirty first impressions can mislead us if we rely on them in making our judgments. (We may be getting ahead of ourselves a bit here, but it is important to point this out.) For example, if we had bothered to slow down and ask the aforementioned question in this case, we might have noticed that intelligence and strength are not the most important characteristics that go into making a good leader, and that they may in fact backfire if they are not accompanied by other traits. This is made painfully clear as Kahneman directs our attention back to the original sentence and asks “what if the next two adjectives were corrupt and cruel?” (loc. 1577). In this context, intelligence and strength become downright dangerous! (There will be much more on WYSIATI below.)

b. Substitution

In addition to WYSIATI, System 1 also jumps to judgments and evaluations in several other ways. For example, when we are presented with a question that we do not know the answer to, System 1 will simply get to work and answer a related but much easier question, and then offer up this answer to System 2 as the solution to the more difficult question (often without System 2 even recognizing what has happened) (loc. 1789). Kahneman refers to this System 1 sleight of hand as substitution (loc. 1789).

For example, if you are asked for your opinion on whether Ford shares are worth buying, and you do not have much knowledge about said shares, your System 1 may answer the related question of how you feel about Ford automobiles, and then offer up the answer to this question to System 2 (loc. 274). Alternatively, if you are shown the picture of a political candidate and are asked to estimate how far you think she will go in politics, and you do not know anything about the candidate, your System 1 may answer the related question of how competent and self-assured she looks (loc. 1816).

WYSIATI and substitution can be combined in interesting (and somewhat comical) ways. For example, in one experiment student subjects were presented with two questions: 1) How happy are you these days?; 2) How many dates did you have last month? (loc. 1862). When researchers asked the subjects these questions in this order, they found that there was no correlation between how many dates the students had had and how happy they rated themselves (loc. 1862). As Kahneman notes, “evidently, dating was not what came first to the students’ minds when they were asked to assess their happiness” (loc. 1864).

However, when the researchers flipped the order of these questions the correlation changed. As the author explains, “in this sequence, the correlation between the number of dates and reported happiness was about as high as correlations between psychological measures can get” (loc. 1869). Essentially, what had happened is that the date question put the students in mind of some information (and an emotional response) that had some bearing on their level of happiness, and this information (and emotional state) then dominated their thinking when they were asked the more general question about their overall level of happiness (whereas otherwise it represented only a small, and indeed negligible, factor) (loc. 1873).

In a related experiment, researchers asked student subjects to photocopy a sheet of paper at a nearby photocopier, and then confronted them with a questionnaire about life satisfaction. For half of the subjects the researchers planted a dime on the photocopy machine (loc. 7341). Incredibly, “the minor lucky incident caused a marked improvement in subjects’ reported satisfaction with their life as a whole!” (loc. 7343).

5. When System 1 Judges People: Stereotypes, First Impressions, and the Halo Effect

a. Stereotypes

When it comes to evaluating people, it should come as no surprise that System 1 is inclined to judge people according to stereotypes (loc. 2652-2724, 2746-53, 3053-68). It is well understood that stereotypes are often wrong, but when no other information is at hand they offer a better-than-chance way of making a quick first impression about someone (loc. 2753), and this edge is enough for the practice to be of value to System 1 (loc. 2753).

b. First Impressions

And speaking of first impressions, it should also come as no surprise that System 1 is particularly susceptible to falling for them. The importance of first impressions has been repeated to the point of cliché, but it is truly difficult to overstate the power of this phenomenon. To illustrate this point, consider the following experiment. The psychologist Solomon Asch asked subjects to say what they thought of two hypothetical characters: Alan and Ben. Here are the descriptions of the two: “Alan: intelligent—industrious—impulsive—critical—stubborn—envious; Ben: envious—stubborn—critical—impulsive—industrious—intelligent” (loc. 1513). As you will have noticed, the descriptors for the two characters are identical; the only thing that is changed is their order. And yet subjects consistently rate Alan much more favorably than Ben (loc. 1513). The fact is that our first impression is formed immediately, and then subsequent information is interpreted in light of this first impression. As Kahneman explains it, “the initial traits in the list change the very meaning of the traits that appear later. The stubbornness of an intelligent person is seen as likely to be justified and may actually evoke respect, but intelligence in an envious and stubborn person makes him more dangerous” (loc. 1516).

c. The Halo Effect

System 1 also takes a third short-cut in evaluating people, and this short-cut is known as the ‘halo effect’. As Kahneman explains, the halo effect is “the tendency to like (or dislike) everything about a person—including things you have not observed” (loc. 1497). Essentially, System 1 tends to evaluate someone on one (or a handful) of traits, and then simply extends this evaluation to other traits. For example, “you meet a woman named Joan at a party and find her personable and easy to talk to. Now her name comes up as someone who could be asked to contribute to a charity. What do you know about Joan’s generosity? The correct answer is that you know virtually nothing… But you like Joan and you will retrieve the feeling of liking her when you think of her. You also like generosity and generous people. By association, you are now predisposed to believe that Joan is generous. And now that you believe she is generous, you probably like Joan even better than you did earlier, because you have added generosity to her pleasant attributes” (loc. 1507).

6. The Interaction Between System 1 and System 2

Gaining a small edge in the moment is an appropriate way to think of what System 1 does in general; and in this it does very well. As Kahneman explains, “System 1 is generally very good at what it does: its models of familiar situations are accurate, its short-term predictions are usually accurate as well, and its initial reactions to challenges are swift and generally appropriate” (loc. 484).

However, aside from being made available in the moment, the impressions that System 1 generates are also passed up to System 2 for consideration in deliberation and long-term planning (loc. 469). Now, these impressions may also be of some value here, but because they sacrifice precision for speed there is also a very real chance that they may cause misjudgments and errors (as has already been hinted at).

Thankfully, System 2 is capable of evaluating the impressions and intuitions that it receives from System 1, and can therefore override these impressions and intuitions where appropriate. The problem, though, is that even at the best of times System 2 can be fairly lazy. As Kahneman explains, “the defining feature of System 2, in this story, is that its operations are effortful, and one of its main characteristics is laziness, a reluctance to invest more effort than is strictly necessary” (loc. 580). This is all part of biological economy. Effort (including mental effort) takes up energy (loc. 760), which is a precious biological resource; so when an organism can get away without using it up, it will. This phenomenon even has a catchy name: the law of least effort; and, as Kahneman explains, it “is built deep into our nature” (loc. 628).

For a nice example of the law of least effort, consider the following problem: “A bat and ball cost $1.10. The bat costs one dollar more than the ball. How much does the ball cost?” (loc. 787). Chances are a number popped into your head, and chances are that number was ‘10 cents’. If you accepted that number and answered ‘10 cents,’ congratulations! You just followed the law of least effort. If you violated the law, that means you used your System 2 to check your answer. This might have gone something like the following: “if the ball costs 10 cents, then the total cost will be $1.20 (10 cents for the ball and $1.10 for the bat), not $1.10” (loc. 791). This isn’t right, so you used System 2 to perform the math a little more carefully and you found that “the correct answer is 5 cents” (loc. 791).
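
For anyone who wants to see the arithmetic spelled out, here is a minimal sketch of the algebra behind the correct answer (the variable names are our own, not Kahneman’s):

```python
# Let the ball cost x dollars; the bat then costs x + 1.00, and together
# they cost 1.10, so 2x + 1.00 = 1.10, which gives x = 0.05.
ball = (1.10 - 1.00) / 2
bat = ball + 1.00
assert round(ball + bat, 2) == 1.10   # the pair really does cost $1.10
assert round(bat - ball, 2) == 1.00   # the bat really does cost $1 more
print(f"ball = ${ball:.2f}, bat = ${bat:.2f}")  # ball = $0.05, bat = $1.05
```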

Virtually everyone is capable of answering the bat and ball problem correctly if they only violate the law of least effort and activate System 2, but most people don’t. As Kahneman explains, “many thousands of university students have answered the bat-and-ball puzzle, and the results are shocking. More than 50% of students at Harvard, MIT, and Princeton gave the intuitive—incorrect—answer. At less selective universities, the rate of demonstrable failure to check was in excess of 80%” (loc. 804).

What’s worse is that we are even less likely to activate System 2 when we are tired or hungry or preoccupied in some way (loc. 721-30, 742). This makes intuitive sense, of course, but it has also been demonstrated in both lab and field experiments (loc. 733-52). To take just one (very poignant) example, a study performed in Israel found that the rate at which parole judges grant paroles drops precipitously the longer the judges have gone since their last meal or break: “the authors of the study plotted the proportion of approved requests against the time since the last food break. The proportion spikes after each meal, when about 65% of requests are granted. During the two hours or so until the judges’ next feeding, the approval rate drops steadily, to about zero just before the meal” (loc. 779). As Kahneman explains, “tired and hungry judges tend to fall back on the easier default position of denying requests for parole. Both fatigue and hunger probably play a role” (loc. 781).

The good news is that we can overcome our lethargy when we are motivated to do so (loc. 754) (and it also helps to make sure we are well rested and getting enough to eat—as we have just seen). Part of this motivation may come from understanding just how badly System 1 can mislead us in certain circumstances (and the errors that it draws us into), so the remainder of the article will be dedicated to doing just this.

PART II: THE ERRORS OF SYSTEM 1

7. An Error of Association and Priming: The Anchoring Effect

As we have seen above, learning by association allows the brain to create a model of the world that helps it make sense of incoming information. In real time, phenomena and events prime the brain with associated ideas, and the ideas that are primed help System 1 react to the situation.

However, priming can also mislead us in certain circumstances. Consider the following experiment. Kahneman and his longtime friend and collaborator Amos Tversky ran a test whereby they asked student subjects to estimate what percentage of African countries were part of the UN (loc. 2150). Before they did so, however, they had the students spin a wheel of fortune that was marked from 0 to 100, but was rigged to fall either on 10 or 65 (loc. 2146). One would hope that a random number spun on a wheel of fortune would not influence estimates about a state of affairs in the world, but this is exactly what happened. As Kahneman explains, “the average estimates of those who saw 10 and 65 were 25% and 45% respectively” (loc. 2152).

The results of this experiment may seem somewhat bizarre (if not a little scary), but the phenomenon at play is by no means unusual. As the author notes, “the phenomenon we were studying is so common and so important in the everyday world that you should know its name: it is an anchoring effect. It occurs when people consider a particular value for an unknown quantity before estimating that quantity” (loc. 2154). Essentially, the mind briefly anchors on the original number, and this anchor distorts whatever thinking comes in its aftermath.

In certain cases the anchoring effect is also amplified by System 1’s tendency to take things at face value (its gullibility, essentially [loc. 2216]), and also its tendency to fall prey to suggestibility (which is when “someone causes us to see, hear or feel something by merely bringing it to mind” [loc. 2206]). For a real-world example of this consider the following experiment: “a few years ago, supermarket shoppers in Sioux City, Iowa, encountered a sales promotion for Campbell’s soup at about 10% off the regular price. On some days, a sign on the shelf said LIMIT OF 12 PER PERSON. On other days the sign said NO LIMIT PER PERSON. Shoppers purchased an average of 7 cans when the limit was in force, twice as many as they bought when the limit was removed” (loc. 2289).

Anchoring can also come into play in such things as setting a salary, or establishing the price of real estate. In one experiment, researchers had real estate agents assess the value of a house that was up for sale (loc. 2245). Half of the agents were shown an asking price that was well above the listed price, while the other half of the agents were shown an asking price that was well below the listed price (loc. 2245). As Kahneman explains, the real estate agents “insisted that the listing price had no effect on their responses, but they were wrong: the anchoring effect was 41%” (loc. 2249) (the anchoring effect is arrived at by taking the numerical difference between the two estimates and dividing it by the numerical difference between the two anchors [loc. 2240]. To put this in perspective, “the anchoring measure would be 100% for people who slavishly adopt the anchor as an estimate, and zero for people who are able to ignore the anchor altogether” [loc. 2242]).
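
To make the arithmetic behind that 41% figure concrete, here is a minimal sketch of the anchoring measure just described; the dollar figures are invented for illustration and are not the ones from the study:

```python
# Anchoring index: the difference between the two groups' mean estimates
# divided by the difference between the two anchors they were shown.
def anchoring_index(estimate_high, estimate_low, anchor_high, anchor_low):
    return 100 * (estimate_high - estimate_low) / (anchor_high - anchor_low)

# Hypothetical numbers: anchors $40,000 apart, mean appraisals $16,400 apart.
print(anchoring_index(146_400, 130_000, 160_000, 120_000))  # 41.0
# 100 would mean slavishly adopting the anchor; 0 would mean ignoring it.
```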

The anchoring effect can be extremely difficult to shake. However, it has been shown that there are strategies that do help. For example, one experiment had subjects deliberately come up with arguments against the anchor at play (loc. 2301); and, as Kahneman explains, “the instruction to activate System 2 was successful” (loc. 2301). The author concludes that “in general, a strategy of deliberately ‘thinking the opposite’ may be a good defense against anchoring effects, because it negates the biased recruitment of thoughts that produces these effects” (loc. 2303).

8. The Framing Effect

A phenomenon in many ways related to the anchoring effect is the framing effect. The framing effect refers to the fact that the way a problem or question is presented can influence our response (loc. 6709). The way this works is that the frame directs our awareness to a particular aspect of the problem or question (which in turn triggers particular associations [loc. 6701]), and this shapes our response. For example, if you are asked what you think about Italy winning the 2006 World Cup, you will come up with a far different response than if you are asked what you think about France losing the 2006 World Cup, even though both questions ask for your response to the very same event (loc. 6701).

An excellent illustration of the framing effect comes out of an experiment performed with doctors at the Harvard Medical School (loc. 6765). To begin with, the doctors were shown statistics comparing the outcomes of surgery and radiation for treating lung cancer (loc. 6769). As Kahneman explains, “the five-year survival rates clearly favor surgery, but in the short term surgery is riskier than radiation” (loc. 6769). When it came to the short-term risks of surgery, though, half of the doctors were shown stats that referred to the survival rate (which is 90% after one month), while the other half of the doctors were shown stats that referred to the mortality rate (which is 10% after one month) (loc. 6772).

Here’s Kahneman with the results: “you already know the results: surgery was much more popular in the former frame (84% of physicians chose it) than in the latter (where 50% favored radiation). The logical equivalence of the two descriptions is transparent, and a reality-bound decision maker would make the same choice regardless of which version she saw. But System 1, as we have gotten to know it, is rarely indifferent to emotional words: mortality is bad, survival is good, and 90% survival sounds encouraging whereas 10% mortality is frightening” (loc. 6778).

As mentioned, framing works partly by directing the mind toward a particular bit of information or set of circumstances, which is then given priority by System 1. As such, framing is partly explicable in terms of the WYSIATI effect (what you see is all there is) (loc. 1612-21). WYSIATI has a large role to play in the errors induced by the influence of System 1, and we will now take a closer look at some of these errors.

9. File Under WYSIATI

a. WYSIATI and Confidence

WYSIATI can be seen in its purest form when we are asked to assess an issue after being given only one side of the story. For example, in experiments where subjects are asked to judge a court case after hearing only one of the two lawyers’ arguments, their judgments are heavily skewed towards the side of the argument that they have heard (loc. 1607). At first glance this may not seem so surprising, but we must consider that “the participants were fully aware of the setup, and those who heard only one side could easily have generated the argument for the other side” (loc. 1605). This they did not do, though; rather, they were predominantly persuaded by the side of the argument that they were exposed to (loc. 1607).

The subjects who were presented with only one side of the argument were not only persuaded by it, but were also more confident in their judgments than those who were presented with both sides (loc. 1607). According to Kahneman, this has to do with the fact that the absence of counter-arguments allowed the half-informed subjects to generate a much more coherent story out of the situation (loc. 1608). For System 1, coherence is like candy: it constantly seeks it out, and is satisfied when it finds it (loc. 289, 1281). This is because the presence of coherence is the simplest and most basic indicator of a story’s truth (and the easiest and quickest to establish) (loc. 1611). What’s more, as Kahneman explains, “much of the time, the coherent story we put together is close enough to reality to support reasonable action” (loc. 1612). And this is really all that System 1 is looking for. As a result, System 1 puts its faith in the coherence of the story that it creates, rather than the completeness of the information that it has access to (loc. 1608).

Not only can a more coherent story be created in the absence of conflicting information, but a more coherent story can also often be created with less information period. Indeed, as Kahneman points out, “you will often find that knowing little makes it easier to fit everything you know into a coherent pattern” (loc. 1611). The result of this is that we are (paradoxically) often more intuitively convinced of something the less information we have access to.

This perverse phenomenon has been verified in a very clever group of lab experiments. In one of these experiments, researchers asked subjects to list instances in which they behaved assertively, and then to evaluate how assertive they are in general (loc. 2397). Half of the subjects were asked to come up with 6 instances of being assertive, while the other half were asked to come up with 12 (loc. 2397). The researchers hypothesized that the subjects who came up with 6 instances of assertiveness would rate themselves as more assertive than those who came up with 12; for while the latter subjects would have generated more instances of their own assertiveness, they would have inevitably found it increasingly difficult to come up with these instances (loc. 2407). This difficulty, the researchers thought, would conflict with their seeing themselves as assertive, and would thereby lead them to downgrade their overall level of assertiveness.

Sure enough, “people who had just listed twelve instances rated themselves as less assertive than people who had listed only six. Furthermore,” Kahneman continues, “participants who had been asked to list twelve cases in which they had not behaved assertively ended up thinking of themselves as quite assertive! If you cannot easily come up with instances of meek behavior, you are likely to conclude that you are not meek at all. Self-ratings were dominated by the ease with which examples had come to mind. The experience of fluent retrieval of instances trumped the number retrieved” (loc. 2412).

This experiment was subsequently rerun using many types of similar scenarios, and the results were always the same. I’ll mention just one particularly interesting one here: it was found that people “are less confident in a choice when they are asked to produce more arguments to support it” (loc. 2422).

b. WYSIATI and Estimations

The ease with which we can think of examples also influences us (via System 1) in such things as estimating the frequency of a category, and the potential danger of a threat. With regard to the former, as Kahneman explains, “instances of the class will be retrieved from memory, and if retrieval is easy and fluent, the category will be judged to be large” (loc. 2350). For instance, if we are asked to judge the frequency of celebrity divorces or political sex scandals, our estimates will reflect the fact that we can easily think of examples, as these cases often receive a good deal of media attention (loc. 2367). We “are therefore likely to exaggerate the frequency of both Hollywood divorces and political sex scandals” (loc. 2369).

In addition to this, personal experience also influences us here. For instance, we can more easily think of examples of ourselves performing housework than of our spouses doing so, and so we are likely to overestimate how much we contribute in this regard. As confirmation of this, a study that asked spouses to each estimate how much they contributed to housework found that, as expected, “the self-assessed contributions added up to more than 100%” (loc. 2381). Nor is this effect entirely limited to self-serving bias. Indeed, it was found that “spouses also overestimated their contribution to causing quarrels, although to a smaller extent than their contributions to more desirable outcomes” (loc. 2386).

The novelty, vividness and poignancy of an example also influences how easily it comes to mind, which in turn affects our thinking. What’s more, novel and poignant scenarios often receive more media coverage (because they are exceptionally interesting to us [loc. 2513]), thus exacerbating the effect (loc. 2510). This is especially apparent when it comes to our estimating the danger of a potential threat. Consider the following examples: “Tornadoes were seen as more frequent killers than asthma, although the latter cause 20 times more deaths; Strokes cause almost twice as many deaths as all accidents combined, but 80% of respondents judged accidental death to be more likely; Death by accidents was judged to be more than 300 times more likely than death by diabetes, but the true ratio is 1:4” (loc. 2510). For Kahneman, the message is clear: “estimates of causes of death are warped by media coverage. The coverage is itself biased toward novelty and poignancy” (loc. 2510).

The bias in our estimates of potential threats is particularly pernicious because it comes to influence (and indeed distort) public policy. The legal scholar Cass Sunstein has studied this phenomenon and has come up with a label to describe it: the ‘availability cascade’ (loc. 2584). As Kahneman explains, “an availability cascade is a self-sustaining chain of events, which may start from media reports of a relatively minor event and lead up to public panic and large-scale government action” (loc. 2589). Sunstein cites the Love Canal affair and the Alar scare as classic examples of the availability cascade (loc. 2599).

In today’s world, though, the greatest example of the availability cascade comes from terrorist threats. As Kahneman explains, “the number of casualties from terror attacks is very small relative to other causes of death,” and yet “gruesome images, endlessly repeated in the media, cause everyone to be on edge” (loc. 2630).

c. The Two Selves

One other way that the quirks of recall can influence us is in how we think of our past experiences (and, by extension, our overall happiness). Specifically, the remembering self (which remembers our experiences) is often at odds with the experiencing self (which actually lives through them). The discrepancy between the two is so great that Kahneman insists we would do best to think of ourselves as two selves.

d. WYSIATI and Optimism

WYSIATI can also be combined with optimism to mislead us in many interesting ways (especially when it comes to our personal projects). Kahneman swallows his pride and offers up a particularly entertaining example from his own life here. The example has to do with an episode in Kahneman’s life where he was engaged in a collaborative project to produce a curriculum “to teach judgment and decision making in high schools” (loc. 4505). About a year into the project, the team had succeeded in creating “a detailed outline of the syllabus, had written a couple of chapters, and had run a few sample lessons in the classroom” (loc. 4508). At this point, Kahneman asked his colleagues to estimate how long it would take them to complete the curriculum. The estimates “were narrowly centered around two years; the low end was one and a half, the high end two and a half years” (loc. 4515).

Kahneman then remembered that one of the team members (a man named Seymour Fox) had been involved with projects like this before, and so asked him to think about the time frame that these projects generally required. With some embarrassment, Seymour recalled that as many as 40% of the projects he could think of were abandoned before ever being completed, and that the projects that were completed took anywhere from seven to ten years (loc. 4524). Now, these numbers did not square well with the team’s own estimates—which, after all, were based on their firsthand knowledge of just how much progress they had made in the early stages of the project (loc. 4509, 4539), and how motivated they all were (loc. 4557). So, the team basically ignored the warning sign staring them in the face, and pressed on just the same (loc. 4542).

You can probably guess how things proceeded from here. As Kahneman explains, “the book was eventually completed eight(!) years later” (loc. 4542). To add insult to injury, “the initial enthusiasm for the idea in the Ministry of Education had waned by the time the text was delivered and it was never used” (loc. 4544).

In retrospect, Kahneman believes that he and his team fell prey to a classic bout of WYSIATI mixed with optimism. The WYSIATI was provided by the early progress that the team had made, and their motivation at the beginning of the project. The optimism led them to make their estimates based on a best case scenario, that ignored “what Donald Rumsfeld famously called the ‘unknown unknowns’” (loc. 4559). Both of these errors could have been avoided if the team had taken seriously how long it had taken other teams to complete the same type of project—but they did not. In the lingo that Kahneman uses, the team was misled by the ‘inside view,’ whereas the ‘outside view’ is what they should have relied upon (loc. 4547-76).

According to Kahneman, it is just this type of reliance on the ‘inside view’ that plagues all too many projects, from public works to small business ventures. This at least partly explains why government projects often take far longer and cost far more than the original estimates (loc. 4605-13), and also why 65% of small businesses go under in less than five years (loc. 4718).

For Kahneman, optimism and overconfidence are a significant cause of many of society’s ills. Nevertheless, he also posits that they are not all bad, as both contribute to perseverance, which can often lead to good outcomes. (A further phenomenon, loss aversion, will be examined in greater detail below.)

10. Causal Errors and Statistical Illiteracy

a. Causal Errors

i. Mistaking Stats for Causes

As we have seen above, System 1 is primed to look for (and identify) causes in events and phenomena. This is a strategy that mostly serves us very well, for causes inhere in nature, and identifying them often helps us determine how to react in particular circumstances. However, the causal bias can sometimes lead us astray, for we are prone to assign a cause (and especially an intentional cause) to something even where luck or statistical noise is more to blame.

For example, when it comes to looking at a set of results, extreme outcomes are more likely to emerge the smaller the sample size in question (loc. 1981). However, because System 1 is keen to find a cause for any given phenomenon, it is liable to come up with a causal (and erroneous) explanation for this purely statistical effect.

To take a real-world example, a group of researchers recently went looking for the markers that allow a school to be successful (loc. 2116). One of the things that they found is that smaller schools tend to have more impressive results. For example, it was found that “of 1,662 schools in Pennsylvania… 6 of the top 50 were small, which is overrepresentation by a factor of 4” (loc. 2119). On the basis of this evidence, the researchers concluded that smaller schools are better than larger ones. This conclusion influenced “the Gates Foundation to make a substantial investment in the creation of small schools, sometimes by splitting large schools into smaller units” (loc. 2119). The conclusion also influenced the policies of several other charitable organizations, as well as the U.S. Department of Education (loc. 2119).

There’s just one problem, though: the conclusion is false. The simple fact of the matter is that the smaller population of the smaller schools skewed the numbers. As Kahneman explains, “if the statisticians who reported to the Gates Foundation had asked about the characteristics of the worst schools, they would have found that bad schools also tend to be smaller than average. The truth is that small schools are not better on average; they are simply more variable” (loc. 2126).
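
The statistical point is easy to demonstrate for yourself. Below is a minimal simulation (our own construction, not Kahneman’s) in which every school draws students from exactly the same ability distribution, yet small schools still crowd both ends of the rankings simply because their averages are noisier:

```python
import random

random.seed(0)

def school_average(n_students):
    """Mean test score of a school whose students are drawn from the same
    ability distribution as everyone else's (mean 100, sd 15)."""
    return sum(random.gauss(100, 15) for _ in range(n_students)) / n_students

schools = ([("small", school_average(50)) for _ in range(1000)] +
           [("large", school_average(500)) for _ in range(1000)])

ranked = sorted(schools, key=lambda s: s[1], reverse=True)
top50, bottom50 = ranked[:50], ranked[-50:]
print(sum(size == "small" for size, _ in top50), "of the top 50 are small")
print(sum(size == "small" for size, _ in bottom50), "of the bottom 50 are small")
# Small schools dominate both extremes: not better or worse, just more variable.
```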

The example goes to show that the failure to appreciate the effects of small sample sizes affects not only laymen, but also experts (who should know better). And it turns out that this mistake is not so rare. For example, one study found that “psychologists commonly chose samples so small that they exposed themselves to a 50% risk of failing to confirm their true hypotheses!” (loc. 2017). The problem was that these psychologists neglected to go through the relatively simple math needed to identify the proper sample size that was required, and instead relied on their mistaken intuitions (loc. 2012-15).

ii. Mistaking Luck for Causes

Our proclivity to look for causal explanations also leads us astray in cases where luck plays a part. For example, the business media is eager to heap praise on successful companies and to laud their CEOs. However, the long-term statistics indicate that luck plays a very large part here. To take a few examples, Kahneman points out how “on average, the gap in corporate profitability and stock returns between the outstanding firms and the less successful firms studied in Built to Last shrank to almost nothing in the period following the study. The average profitability of the companies identified in the famous In Search of Excellence dropped sharply as well within a short time. A study of Fortune’s ‘Most Admired Companies’ finds that over a twenty-year period, the firms with the worst ratings went on to earn much higher stock returns than the most admired firms” (loc. 3804). (In statistical lingo, these are examples of ‘regression to the mean,’ which refers to the tendency of outstanding but lucky results to return to statistical norms over time [loc. 3238-49].)

As for the CEOs of these companies, Kahneman grants that their actions can make a difference, but adds that the statistics reveal that their effect is much smaller than the business media makes out (loc. 3754). As the author explains, “a very generous estimate of the correlation between the success of the firm and the quality of its CEO might be as high as .30, indicating 30% overlap” (loc. 3759). To put this in perspective, if you were to compare two firms, “a correlation of .30 implies that you would find the stronger CEO leading the stronger firm in about 60% of the pairs—an improvement of 10 percentage points over random guessing, hardly grist for the hero worship of CEOs we so often witness” (loc. 3766).
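
The jump from a correlation of .30 to “60% of the pairs” is not obvious, so here is a minimal simulation (ours, not from the book) that checks the claim under the assumption of normally distributed CEO quality and firm success:

```python
import math
import random

random.seed(1)
R = 0.30           # assumed correlation between CEO quality and firm success
TRIALS = 100_000

def firm():
    """Return a (CEO quality, firm success) pair with correlation R."""
    ceo = random.gauss(0, 1)
    success = R * ceo + math.sqrt(1 - R * R) * random.gauss(0, 1)
    return ceo, success

hits = 0
for _ in range(TRIALS):
    (ceo_a, success_a), (ceo_b, success_b) = firm(), firm()
    # Does the firm with the stronger CEO also show the stronger results?
    if (ceo_a > ceo_b) == (success_a > success_b):
        hits += 1

print(f"{100 * hits / TRIALS:.1f}%")  # comes out near 60%, as Kahneman says
```

Setting R to zero in the sketch brings the figure down to about 50%, i.e. pure guessing, which is what makes the 10-point edge look so modest.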

b. Statistical Illiteracy

Our difficulty in appreciating the effects of statistics goes well beyond mistaken assumptions of causality. An important case in point here is our tendency to discount statistical base rates that are important in making sense of many types of scenarios. To illustrate this point, consider the following problem: “a cab was involved in a hit-and-run accident at night. Two cab companies, the Green and the Blue, operate in the city. You are given the following data: -85% of the cabs in the city are Green and 15% are Blue; -A witness identified the cab as Blue. The court tested the reliability of the witness under the circumstances that existed on the night of the accident and concluded that the witness correctly identified each one of the two colors 80% of the time and failed 20% of the time. What is the probability that the cab involved in the accident was Blue rather than Green?” (loc. 3063).

If you are like most people, you answered ‘80%’ (loc. 3068). But this number is wrong. It takes into account the reliability of the witness, but completely ignores the base rate that indicates the percentage of cabs in the city that are Blue. Both bits of information are essential in calculating the correct probability, but most people completely discount the base rate. When both bits of information are considered, we get the correct answer, which is 41% (loc. 3066). (The mathematics for incorporating base rates into such problems falls under Bayes’ rule [loc. 3066].)
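
For readers who want to see where the 41% comes from, here is a minimal sketch of the Bayes’ rule calculation:

```python
p_blue, p_green = 0.15, 0.85     # base rates of cab colors in the city
p_id_blue_if_blue = 0.80         # witness correctly says "Blue" when it is Blue
p_id_blue_if_green = 0.20        # witness wrongly says "Blue" when it is Green

# Total probability that the witness reports "Blue" at all.
p_says_blue = p_id_blue_if_blue * p_blue + p_id_blue_if_green * p_green

# Bayes' rule: P(cab is Blue | witness says Blue).
p_blue_given_report = p_id_blue_if_blue * p_blue / p_says_blue
print(f"{p_blue_given_report:.0%}")  # 41%
```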

Statistical base rates are important to consider in many types of problems, but unless people have formal training in statistics, they mostly ignore them. And this is especially true in cases where there is other information available to draw our attention away from the base rate. As Kahneman explains, “statistical base rates are generally underweighted, and sometimes neglected altogether, when specific information about the case is available” (loc. 3091).

11. Loss Aversion

a. Avoiding Losses

One final way that System 1 influences (and distorts) System 2 is that it is loss averse. That is, it is pleased by gains, and upset by losses, but it is more upset by a loss than it is pleased by a gain of the same amount. As Kahneman puts it, “when directly compared or weighted against each other, losses loom larger than gains” (loc. 5150). This is a natural result of the fact that System 1 evolved to keep us safe from moment to moment: “this asymmetry between the power of positive and negative expectations or experiences has an evolutionary history. Organisms that treat threats as more urgent than opportunities have a better chance to survive and reproduce” (loc. 5171).

To witness loss aversion in action, consider the following example: “You are offered a gamble on the toss of a coin. If the coin shows tails, you lose $100. If the coin shows heads, you win $150. Is this gamble attractive? Would you accept it?” (loc. 5189). Rationally speaking, this is a very good gamble to take, because the expected value is positive (loc. 5189). But most people reject this gamble, because “for most people, the fear of losing $100 is more intense than the hope of gaining $150” (loc. 5191). This is loss aversion.

Now, people differ in just how loss averse they are, so to get an indication of how loss averse you are, ask yourself the following question: “what is the smallest gain that I need to balance an equal chance to lose $100?” (loc. 5191). Most people say somewhere in the range of $200 (loc. 5191). Indeed, as the author explains, “the loss aversion ratio has been estimated in several experiments and is usually in the range of 1.5 to 2.5” (loc. 5197).
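
To make the arithmetic concrete, here is a minimal sketch of the coin-toss gamble (my own illustration, using a deliberately simplified value function in which losses are simply multiplied by a loss-aversion ratio of 2.0, a figure within the range Kahneman reports): the gamble has a positive expected dollar value but a negative subjective value for a loss-averse decision maker.

```python
# Minimal sketch of loss aversion (my illustration, not Kahneman's code).
# Gamble: 50% chance to win $150, 50% chance to lose $100.
outcomes = [(0.5, 150), (0.5, -100)]

expected_value = sum(p * x for p, x in outcomes)
print(f"Expected dollar value: {expected_value:+.2f}")  # +25.00, so a purely 'rational' agent accepts

# Simplified subjective value: losses are weighted by a loss-aversion ratio
# (Kahneman reports typical ratios of 1.5 to 2.5; 2.0 is used here).
loss_aversion = 2.0
subjective_value = sum(p * (x if x >= 0 else loss_aversion * x) for p, x in outcomes)
print(f"Subjective value with loss aversion 2.0: {subjective_value:+.2f}")  # -25.00, so the gamble is rejected
```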

Loss aversion finds its way into our everyday lives in many interesting ways. For example, it is well known that consumer demand responds to prices—in that price drops increase demand, and price rises decrease it. However, there is an asymmetry here in that demand drops off more sharply when prices are raised than it increases when prices are dropped. As Kahneman explains, “as economists would predict, customers tend to increase their purchases of eggs, orange juice, or fish when prices drop; however, in contrast to the predictions of economic theory, the effect of price increases… is about twice as large as the effect of [decreases]” (loc. 5444). This is because we interpret price drops as a gain, and price rises as a loss, and as we have seen, losses loom larger than gains (loc. 5444).

Loss aversion also comes into play in many types of negotiations, and especially “renegotiations of an existing contract, the typical situation in labor negotiations and in international discussions of trade or arms limitations” (loc. 5582). In these situations, any given change in the pre-existing terms is likely to be seen by one of the sides as a concession to the other (loc. 5582). Since losses are felt more keenly than gains, the side that stands to lose on any new measure will fight harder against it than the other side fights for it (loc. 5585). This makes it very difficult to establish any changes (loc. 5585). And things get particularly dicey in cases where the circumstances require all parties to take a hit: “negotiations over a shrinking pie are especially difficult, because they require an allocation of losses. People tend to be much more easygoing when they bargain over an expanding pie” (loc. 5587).

b. Cutting Our Losses

Loss aversion is also at play when it comes to the difficulty we sometimes experience with cutting our losses. This is because cutting one’s losses—though generally the wiser choice, since it avoids bigger losses in the long run—entails actualizing a loss in the moment, which is always hard on System 1. As Kahneman explains, “the thought of accepting the large sure loss is too painful, and the hope of complete relief too enticing, to make the sensible decision that it is time to cut one’s losses” (loc. 5856).

One common manifestation of this phenomenon is the temptation to hang on to a losing stock, a temptation that is especially strong when there is a decision to be made between selling a loser and selling a winner (loc. 6333). For selling a losing stock means actualizing a loss, while selling a winner means actualizing a gain, and for System 1 there is a clear tendency to side with the latter over the former (loc. 6349). The effect of this is well-established. As Kahneman notes, “finance research has documented a massive preference for selling winners rather than losers—a bias that has been given an opaque label: the disposition effect” (loc. 6333).

Selling winners rather than losers is a significant error, though, for a couple of different reasons. To begin with, winning stocks tend to outperform losers (“at least for a short while” [loc. 6349]), and the net effect is significant (loc. 6349). What’s more, actualizing a loss reduces your taxes, while actualizing a gain increases them (loc. 6343). This is a basic fact that is well known to investors. Indeed, the one month of the year when the disposition effect is eliminated is December, when investors have taxes on the brain (loc. 6344). But there is no good reason why this tax-conscious behavior should not prevail throughout the year, for as Kahneman points out, “the tax advantage is available all year.” Nevertheless, “for 11 months of the year mental accounting prevails over financial common sense” (loc. 6346).
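
A small worked example may help here (my own illustration; the 20% capital-gains rate and the dollar figures are hypothetical, not from the book): raising the same amount of cash by selling the loser rather than the winner turns a tax bill into a tax saving.

```python
# Minimal sketch of the tax asymmetry behind the disposition effect
# (my illustration; the 20% capital-gains rate is hypothetical, not from the book).
tax_rate = 0.20

# Suppose you must raise $5,000 and can sell either of two holdings:
# a loser bought for $7,000 (now worth $5,000), or a winner bought for $3,000
# (now worth $5,000).
loss_realized = 7_000 - 5_000     # selling the loser realizes a $2,000 loss
gain_realized = 5_000 - 3_000     # selling the winner realizes a $2,000 gain

tax_saved = tax_rate * loss_realized   # the realized loss offsets other taxable gains
tax_owed = tax_rate * gain_realized

print(f"Sell the loser:  ~${tax_saved:.0f} saved in tax")
print(f"Sell the winner: ~${tax_owed:.0f} owed in tax")
# Same cash raised either way, yet the disposition effect pushes most investors
# toward selling the winner for 11 months of the year.
```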

Difficulty in cutting our losses also comes up in business situations, and even in military conflicts. For example, “businesses that are losing ground to a superior technology waste their remaining assets in futile attempts to catch up. Because defeat is so difficult to accept, the losing side in wars often fights long past the point at which the victory of the other side is certain, and only a matter of time” (loc. 5858).

c. Risk Aversion

For Kahneman, one of the most pernicious ways that loss aversion rears its head is that it makes us risk averse even in cases where the potential benefits outweigh the potential losses (such as in the case of the gamble between winning $150, and losing $100). While our fear of losses may make it look reasonable to avoid these types of gambles, the fact is that if we consistently play the odds we stand to gain in the long run (loc. 6219-24). Because this approach involves thinking of the long term (rather than the short term), Kahneman refers to it as broad framing (loc. 6243). Broad framing can help us with many types of decisions, from choosing which insurance policy to purchase (loc. 6263), to choosing what stocks to buy (and sell) (loc. 6242-47), to setting a business plan (loc. 6275-87).
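
To see why broad framing is the better policy for the $150/$100 coin toss, here is a minimal simulation sketch (my own illustration, not Kahneman’s): taken once, the gamble ends in a loss half the time, but taken as a bundle of many such gambles, the chance of coming out behind shrinks dramatically.

```python
import numpy as np

# Minimal sketch of broad framing (my illustration, not Kahneman's code):
# repeat the win-$150 / lose-$100 coin toss many times and look at the
# probability of ending up with a net loss.
rng = np.random.default_rng(0)
n_trials = 100_000

for n_gambles in (1, 10, 100):
    tosses = rng.random((n_trials, n_gambles)) < 0.5        # True = heads = win
    totals = np.where(tosses, 150, -100).sum(axis=1)
    print(f"{n_gambles:>3} gambles: P(net loss) ~ {(totals < 0).mean():.1%}, "
          f"average outcome ~ {totals.mean():+.0f} dollars")
# A single gamble ends in a loss about 50% of the time; a bundle of 100
# such gambles ends in a net loss only a few percent of the time.
```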

12. Expert Intuition

We have spent a good deal of time pointing out the many ways in which System 1 misleads us, but there is at least one domain where System 1 actually performs very well. This is the aspect of System 1 that acts as the hero in Malcolm Gladwell’s famous book Blink: The Power of Thinking Without Thinking. It comes into play in many types of activities and professions, from chess playing to basketball to firefighting (loc. 4371, 4414). For example, “chess masters are able to read a chess situation at a glance. The few moves that come to their mind are almost always strong and sometimes creative” (loc. 4389). Similarly, there is the case of “the firefighter who has a sudden urge to escape a burning house just before it collapses, because the firefighter knows the danger intuitively, ‘without knowing how he knows’” (loc. 4351, 238).

Kahneman refers to these examples as demonstrations of ‘expertise,’ or ‘intuitive skill.’ Of the two, ‘expertise’ is probably the more accurate term; for while the capacity is certainly intuitive (in that it is entirely unconscious), it does not come naturally, but instead takes years of experience to develop and hone (loc. 250). To take the chess example, Kahneman points out how “studies of chess masters have shown that at least 10,000 hours of dedicated practice (about 6 years of playing chess 5 hours a day) are required to attain the highest levels of performance” (loc. 4376).

Nor can intuitive expertise be developed in just any field. Instead, it is limited to those fields where there is a good deal of order and regularity, and where there is plenty of opportunity to observe and absorb this order and regularity (loc. 4409). This is because intuitive expertise is developed by way of unconsciously picking up on salient cues in the environment. As Kahneman explains, “accurate intuitions… are due to highly valid cues that the expert’s System 1 has learned to use, even if System 2 has not learned to name them” (loc. 4415). The scholar Herbert Simon explains intuitive expertise even more succinctly. He writes: “The situation has provided a cue; this cue has given the expert access to information stored in memory, and the information provides the answer. Intuition is nothing more and nothing less than recognition” (loc. 252).

Expert intuition can certainly be very accurate, and therefore, the true expert is justified in the confidence she feels in it. There is also a trap here, though, for this same feeling of confidence can also accompany intuitions that come from much more dubious sources (incomplete information, for example, or substitution). And there is often no certain way to tell just where our intuitions are coming from (loc. 3434). What’s more, the same activity or profession may include some aspects that lend themselves to expert intuition and some that do not. Therefore, even experts remain at risk of generating, and falling for, spurious intuitions (loc. 256).

13. Conclusion

Learning about the many ways that System 1 thinking leads us astray can certainly help us in our personal and professional lives. But Kahneman insists that the lessons here should also be considered in policy-making. In particular, Kahneman argues that measures can and should be taken that are designed to “nudge people to make decisions that serve their own long-term interests” (loc. 7604). Some of these measures have in fact already been adopted (loc. 7639), and I will close by way of mentioning just a few of them: “applications that have been implemented include automatic enrollment in health insurance, a new version of the dietary guidelines that replaces the incomprehensible Food Pyramid with the powerful image of a Food plate loaded with a balanced diet, and a rule formulated by the USDA that permits the inclusion of messages such as ‘90% fat-free’ on the label of meat products, provided that the statement ‘10% fat’ is also displayed ‘contiguous to, in lettering of the same type as, and on the same color background as, the statements of lean percentage’” (loc. 7644).