The availability heuristic, or availability bias, is a mental shortcut we use when trying to determine the likelihood of an event. It works by assuming that the probability of an event is correlated with the ease with which supporting examples come to mind. However, as with all mental shortcuts, relying too heavily on it can leave us susceptible to various biases.
This post pulls heavily from the research of Amos Tversky and Daniel Kahneman, the pioneers of the study of heuristics and biases. I reference primarily their 1974 paper, "Judgment under Uncertainty: Heuristics and Biases," as well as Kahneman's seminal book, Thinking, Fast and Slow (links and all sources in the bibliography).
The availability heuristic is a mental shortcut in which we determine the likelihood of an event by the ease with which examples come to mind.
Availability is a useful tool for assessing probability because generally instances of higher frequency are recalled better and faster than instances of lesser frequency. Unfortunately, as demonstrated by the following biases, our ability to bring to mind examples (availability) is affected by factors other than just frequency and probability.
Tversky and Kahneman provide two examples of availability in that 1974 paper: "one may assess the risk of heart attack among middle-aged people by recalling such occurrences among one's acquaintances. Similarly, one may evaluate the probability that a given business venture will fail by imagining various difficulties it could encounter." 
This post will show how these two lines of thinking can be deeply flawed.
Biases Stemming from the Availability Heuristic
As that 1974 paper shows, relying on availability when making decisions can lead to a few predictable biases:
Biases Due to the Retrievability of Instances
When examples are more easily remembered, we're likely to overweight their probability of occurring. This can distort our decision making because the things we remember best are not necessarily any more likely to happen. Ease of recall is based on familiarity and salience.
The more familiar you are with an event, the more likely you will be to overweight a similar event's probability of occurring.
In one experiment, groups of people were read different lists of names of famous people. Unknown to the groups, in some lists the women were more famous than the men and on the other lists the reverse was true.
Later, the participants were asked if there were more men or women on the list. Every group erroneously concluded that the sex with more famous names was also more numerous. Their familiarity with certain names influenced their perception of the structure of the list.
The more vivid a memory you can recall, the more likely you will be to overweight the probability of that event happening. Vividness is a product of both the clarity of the image and the recency of the event.
People will be more likely to overstate the probability of a house fire after watching a video of a house burning than after reading an article about one.
Additionally, recent occurrences are more likely to be recalled than earlier occurrences.
It is a common experience that the subjective probability of traffic accidents rises temporarily when one sees a car overturned by the side of the road. 
Biases Due to the Effectiveness of a Search Set
When asked whether it is more likely that a word starts with "R" or has "r" as its third letter, people try to assess the relative frequency with which examples of each come to mind.
Because it is much easier to mentally search for words by their first letter than by their third, people overestimate the number of words that begin with a given letter. In reality, consonants like "r" and "k" are much more likely to appear in the third position than in the first.
In 1983, Tversky and Kahneman demonstrated the search-set bias again when they asked participants to compare the frequency of seven-letter words with "n" as the sixth letter against seven-letter words ending in "ing." Participants tended to say that the words ending in "ing" were more common, likely because it's easy to think of words ending in "ing."
This response is incorrect because all seven letter words ending with “ing” also have an “n” in the sixth position. By definition, the former must be more common.
Similarly, if asked to compare the frequency of abstract words (such as love) with concrete words (such as door), people will be able to think of more situations and contexts for "love" to appear in a book than for "door."
If the frequency of words is judged by the availability of the contexts in which they appear, abstract words will be judged as relatively more numerous than concrete words. 
Biases of Imaginability
When trying to assess the frequency of a class whose instances are not stored in memory, we may have to evaluate probability as generated by a given rule.
Consider a group of 10 people who form committees of k members, 2 ≤ k ≤ 8 (between two and eight, inclusive). How many different committees can be formed? The correct answer is given by the binomial coefficient C(10, k), which reaches its maximum of 252 at k = 5.
Importantly, the number of committees of k members equals the number of committees of (10 − k) members: every committee of three members implicitly defines a unique complementary committee of the remaining seven.
Approaching this problem without computation involves mentally constructing committees of k members and evaluating the ease with which examples come to mind. It is almost always easier to bring to mind committees of two members than committees of eight. Further, a common first step is to break the group into disjoint sets: one can easily construct five disjoint committees of two members, while it is impossible to generate even two disjoint committees of eight.
If frequency is assessed by imaginability, or "availability for construction," smaller committees will appear more numerous than larger committees, a result replicated in multiple studies. In truth, the frequency follows a symmetric bell-shaped curve that peaks at the midpoint, k = 5.
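The committee counts are easy to verify directly. Here is a quick sketch using Python's built-in math.comb:

```python
import math

# Number of distinct committees of k members drawn from a group of 10.
counts = {k: math.comb(10, k) for k in range(2, 9)}

for k, n in counts.items():
    print(f"k={k}: {n} committees")

# The distribution is symmetric: C(10, k) == C(10, 10 - k), so committees
# of 2 and committees of 8 are equally numerous (45 each), and the count
# peaks at the midpoint, k=5, with 252 committees.
assert all(counts[k] == counts[10 - k] for k in range(2, 9))
assert counts[2] == counts[8] == 45
assert max(counts.values()) == counts[5] == 252
```

Intuition suggests committees of two vastly outnumber committees of eight; the computation shows they are exactly equal.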
Imaginability has a large impact on our evaluations of real-life situations.
The risk involved in an adventurous expedition, for example, is evaluated by imagining contingencies with which the expedition is not equipped to cope. If many such difficulties are vividly portrayed, the expedition can be made to appear exceedingly dangerous, although the ease with which disasters are imagined need not reflect their actual likelihood. Conversely, the risk involved in an undertaking may be grossly underestimated if some possible dangers are either difficult to conceive of, or simply do not come to mind. 
Illusory Correlation
We are also likely to misjudge frequencies when we falsely correlate two or more variables. One of the most famous early examples of this came from the "draw-a-person test."
In this experiment, panels of judges were presented with hypothetical information about several mental patients. This information included a clinical diagnosis and a drawing of a person made by the patient.
Later the judges estimated the frequency with which each diagnosis (such as paranoia or suspiciousness) had been accompanied by various features of the drawing (such as peculiar eyes). The subjects markedly overestimated the frequency of co-occurrence of natural associates, such as suspiciousness and peculiar eyes. This effect was labeled illusory correlation. 
This bias of illusory correlation is extremely resistant to contradictory data. The bias was present even when the true correlation was negative, and it blinded the judges to correlations that actually were present.
Availability provides a natural account for the illusory-correlation effect. The judgment of how frequently two events co-occur could be based on the strength of the associative bond between them. When the association is strong, one is likely to conclude that the events have been frequently paired. Consequently, strong associates will be judged to have occurred together frequently.
Influence of the Media
Kahneman, in his seminal book, Thinking, Fast and Slow, wrote,
People tend to assess the relative importance of issues by the ease with which they are retrieved from memory—and this is largely determined by the extent of coverage in the media.
The availability heuristic can lead us to make poor decisions when we misjudge the frequency and magnitude of events. Consider the retrievability bias discussed above: it leads us to assume that whatever is easily recalled must occur with high probability.
Our susceptibility to this bias can be manipulated by the media, partially because of the limitations of our memory.
We remember things more quickly and more clearly when they come with a vivid narrative. However, just because something can be remembered easily and quickly doesn't necessarily make it any more likely to happen.
As a result of these biases we can have severely distorted perceptions of risk. We can find ourselves preoccupied with things that are never likely to happen and completely ignoring very serious risks that may be right around the corner.
Shane Parrish in his blog, Farnam Street, discusses distorted perceptions of risk as a result of media manipulation in a post titled 3 Things You Should Know About the Availability Heuristic.
When we make decisions we tend to be swayed by what we remember. What we remember is influenced by many things including beliefs, expectations, emotions, and feelings as well as things like frequency of exposure. Media coverage (e.g., Internet, radio, television) makes a big difference. When rare events occur they become very visible to us as they receive heavy coverage by the media. This means we are more likely to recall it, especially in the immediate aftermath of the event. However, recalling an event and estimating its real probability are two different things. If you're in a car accident, for example, you are likely to rate the odds of getting into another car accident much higher than base rates would indicate.
These biases, particularly the retrievability bias, can substantially and unconsciously influence our judgment. Too often we assume that our memories and recollections of events are representative and true, and we discount events that are outside of our immediate memory.
Aftermath of 9/11 Terrorist Attacks
In the months following September 11th, 2001, many travelers chose to drive rather than fly, having concluded that driving was safer.
At the time, this seemed like an obvious and rational decision. The probability of danger while traveling by air seemed much greater, in large part because it was so easy to visualize and imagine something bad happening on a plane.
In reality, air travel had never been safer than it was in the months following 9/11, partly because of dramatically increased security and vigilance at airports. Meanwhile, the masses of people who took to the roads to avoid flying increased traffic and made driving more dangerous, causing several hundred additional deaths.
Gerd Gigerenzer took a deeper look at this and found that in the three months following 9/11 there were an additional 350 road deaths, more than the combined casualties of the four hijacked flights. He continues,
This number of about 350 lost lives is an estimate of the price Americans paid for trying to avoid the risk of flying... Terrorism causes fear, an emotion we all have felt after the tragic circumstances in which 266 passengers and crew members (as well as many more people at the scenes of the attacks) were killed in the four fated flights. That same fear, however, seems to have caused a second toll of lives, which has apparently gone unnoticed.
Preventing terrorist attacks is difficult, and governments all around the world are focusing on this task. Avoiding the second, psychologically motivated toll could be comparatively easy and inexpensive, if the public were better informed about psychological reactions to catastrophic events, and the potential risk of avoiding risk. 
Impacts on Investing and Finance
One aspect of the availability bias is that we tend to overweight recent events. Through an investing lens, an investor's lingering perceptions of the market environment can lead to poor decision making.
After a significant market downturn, investors' lingering perceptions of a dire market environment may cause them to view new opportunities through an overly pessimistic lens, making them reluctant to take on investment risk no matter how small the returns on perceived "safe" investments.
Our perceptions of risk can be easily manipulated, or primed, by our present mindset. Dan Ariely, author of Predictably Irrational: The Hidden Forces That Shape Our Decisions, says we don't know our preferences as well as we think we do. He reminds us that the environment and other people can change our minds to a large degree, and he gives the example of a financial advisor who can prime a client's risk tolerance by asking different questions:
“Imagine if I was a financial advisor and you came to talk to me about your risk attitude, and I started the discussion by asking you to describe how you felt in the last three years on the days when your portfolio lost 5% of its value. Then I asked you what your risk attitude was. Most people would say they don’t want to ever experience days like that again. On the other hand, what if instead I talked about people I knew who were retired and living in the Bahamas, fishing and golfing. Now your risk attitude would probably be different.”
The availability bias has numerous implications for investors, one of which concerns buying insurance.
Simonsohn, Karlsson, Loewenstein, and Ariely (2006) showed that "people tend to weigh experienced information more heavily than observed experience. The recent evidence ... suggest[s] that probably the premium given to experienced information is mediated by preferences. This impact on preferences is probably mediated by emotions. Indeed, we have difficulty imagining any other plausible explanation for such a differential reaction when informational content is held constant."
In another example from the same paper, people are more likely to buy insurance after experiencing a negative event than they are to purchase the same insurance before the negative event happens. 
Neighbors Who Disagree About the State of the Economy
The availability heuristic leads us to overweight events that are personally relevant, recent, or dramatic. As individual investors, this means our perceptions of the overall market are colored by personal experiences that don't necessarily represent the whole economic picture.
One neighbor whose house just lost 25% of its value and whose spouse was recently laid off is unlikely to see or feel an economic recovery even as the overall housing market improves or unemployment numbers tick downwards.
The other neighbor who just landed a great job with a raise is inclined to see it as proof that the market is improving.
In both cases, the experiences of a single person aren't likely to reflect a national norm. For this reason, it's very easy for an investor's perception of market performance to diverge from real performance.
Franklin Templeton Investments ran an annual survey called the Global Investor Sentiment Survey asking people how they felt the S&P 500 Index performed that year. The survey was conducted in 2009, 2010, 2011, and 2012, the four years following the global recession. 
The results per year are below but the takeaway is that the majority of investors were simply out of touch with the improving economy.
- In 2009, 66% of investors said the market was down or flat. In reality, it grew 26.5%.
- In 2010, 49% of investors said the market was down or flat. In reality, it grew 15.1%.
- In 2011, 70% of investors said the market was down or flat. In reality, the market was slightly positive.
- In 2012, 31% of investors said the market was down or flat. In reality, the market grew 16.1%.
In other words, in the minds of investors, the lingering perceptions of painful events were impacting investing decisions long after those events were over.
Relationship with Attentional Bias
The availability bias overlaps, in part, with the attentional bias. The attentional bias says that a person's perceptions are influenced by their thoughts at the time. In other words, we're influenced by what we pay attention to.
To illustrate this, imagine that you like to sit in your backyard as a way to relax after work. As you sit in your lawn chair, you frequently notice a beautiful, bright red cardinal. Your kitchen has a window that overlooks the backyard, but you've never seen the cardinal from the kitchen.
You think to yourself, "That vibrant cardinal only comes to the backyard while I'm out here."
In situations like this, there are always four possible outcomes.
- You are present and the cardinal is present.
- You are present and the cardinal is not present.
- You are not present and the cardinal is present.
- You are not present and the cardinal is not present.
You're only paying attention to outcome 1. You're ignoring outcome 2, and outcomes 3 and 4 are unknowable to you.
We can often remember vivid examples of rare events. Stories of people winning the lottery, shark attacks, and homicides are frequently covered by media organizations, but an event being memorable doesn't mean it's any more likely to happen. Here are a few more examples of the availability bias.
Doctors who see multiple patients with a rare condition are likely to continue to see the condition in future patients even when their symptoms could be more easily explained with a more common diagnosis.  Further, "the availability heuristic may mislead physicians by causing them to believe that random variations in the prevalence of a non-epidemic disease represent real trends." 
Many people believe they are more likely to be murdered than they are to die from stomach cancer. As discussed above, this is largely due to the media. Homicides are very frequently reported and often in vivid detail. In reality, you're five times more likely to die from stomach cancer, which is already fairly rare. Stomach cancer kills less than 0.003% of the population.  Homicides make up less than 0.00005% of deaths. 
The Matthew Effect, The Exposure Effect, and The Von Restorff Effect
The Matthew Effect
Matjaž Perc wrote,
The Matthew effect describes the phenomenon that in societies, the rich tend to get richer and the potent even more powerful. ... Cumulative advantage and success-breads-success[sic] also both describe the fact that advantage tends to beget further advantage. The concept is behind the many power laws and scaling behaviour in empirical data, and it is at the heart of self-organization across social and natural sciences. 
Herbert Simon, a Nobel Prize winning social scientist wrote in his autobiography, Models of My Life:
I soon learned that one wins awards mainly for winning awards: an example of what Bob Merton calls the Matthew Effect. It is akin also to the phenomenon known in politics as “availability,” or name recognition. Once one becomes sufficiently well known, one's name surfaces automatically as soon as an award committee assembles.
The Exposure Effect (The Familiarity Principle)
The exposure effect states that people develop a preference for things with which they are familiar. Robert B. Zajonc wrote in his paper The Attitudinal Effects of Mere Exposure, "repeated exposure of the individual to a stimulus object enhances his attitude toward it." 
Titchener (1910), one of the first to document this effect, wrote that the exposure effect leads people to experience a “glow or warmth, a sense of ownership, a feeling of intimacy.” 
The exposure effect also works with negative stimuli. When repeatedly exposed to negative stimuli, we are more likely to lose our patience and increase our aggression.
The Von Restorff Effect (The Isolation Effect or The Novelty Effect)
Hedwig von Restorff is often credited with discovering a phenomenon called the distinctiveness effect in memory. R. R. Hunt wrote,
Events that are incongruent with their prevailing context are usually very well remembered. ... The core laboratory paradigm for studying distinctiveness in memory research has long been the isolation paradigm. This paradigm ... yields better memory for an item categorically isolated from surrounding items than for the surrounding items and a proper control item. 
In other words, we're pretty good at remembering when we notice "one of these things is not like the other." Things that stand out are more likely to be remembered and recalled.
Shane Parrish provides a simple example,
For example, if I asked you to remember the following sequence of characters “RTASDT9RTGS” I suspect the most common character remembered would be the “9” because it stands out and thus your mind gives it more attention. 
UNDERSTANDING OUTLIERS AND ANECDOTES
The Significance of Understanding Base Rate Information
Base rates refer to the relative frequencies with which certain states or conditions occur in a population.
The concept denotes the same as prevalence, a term often used by epidemiologists. A base rate is defined for and restricted to a specified population. In other words, a base rate is the a priori chance or prior odds that a member of a specified population will have a certain characteristic, assuming that we know nothing else about this person other than that he or she is a member of the population we are examining.
Our insensitivity to base rates stems from the representativeness heuristic and is a common psychological bias.
John Hammond, Ralph Keeney, and Howard Raiffa wrote a great book called Smart Choices: A Practical Guide to Making Better Decisions. In that book they provide a well-known example of the impacts of misunderstanding base rate information. I've reproduced those paragraphs here.
Donald Jones is either a librarian or a salesman. His personality can best be described as retiring. What are the odds that he is a librarian?
When we use this little problem in seminars, the typical response goes something like this: “Oh, it's pretty clear that he's a librarian. It's much more likely that a librarian will be retiring; salesmen usually have outgoing personalities. The odds that he's a librarian must be at least 90 percent.” Sounds good, but it's totally wrong.
The trouble with this logic is that it neglects to consider that there are far more salesmen than male librarians. In fact, in the United States, salesmen outnumber male librarians 100 to 1. Before you even considered the fact that Donald Jones is “retiring,” therefore, you should have assigned only a 1 percent chance that Jones is a librarian. That is the base rate.
Now, consider the characteristic “retiring.” Suppose half of all male librarians are retiring, whereas only 5 percent of salesmen are. That works out to 10 retiring salesmen for every retiring librarian — making the odds that Jones is a librarian closer to 10 percent than to 90 percent. Ignoring the base rate can lead you wildly astray.
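The arithmetic in the librarian example is just Bayes' rule. A minimal sketch using the numbers from the quote (100 salesmen per male librarian, 50% of librarians "retiring," 5% of salesmen "retiring"):

```python
# Prior counts per the example: 1 male librarian for every 100 salesmen.
librarians, salesmen = 1.0, 100.0

# Likelihood of being "retiring" given each occupation.
p_retiring_given_librarian = 0.50
p_retiring_given_salesman = 0.05

# Expected number of "retiring" people in each group.
retiring_librarians = librarians * p_retiring_given_librarian  # 0.5
retiring_salesmen = salesmen * p_retiring_given_salesman       # 5.0

# Posterior probability that a retiring person is a librarian.
p_librarian = retiring_librarians / (retiring_librarians + retiring_salesmen)
print(f"P(librarian | retiring) = {p_librarian:.1%}")  # prints 9.1%
```

Even though a librarian is ten times more likely to be "retiring" than a salesman, the 100:1 base rate dominates, leaving the odds near 10 percent rather than the intuitive 90 percent.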
Max Bazerman offers a similar example in Judgment in Managerial Decision Making:
(Our tendency to ignore base rates) is even stronger when the specific information is vivid and compelling, as Kahneman and Tversky illustrated in one study from 1972. Participants were given a brief description of a person who enjoyed puzzles and was both mathematically inclined and introverted. Some participants were told that this description was selected from a set of seventy engineers and thirty lawyers. Others were told that the description came from a list of thirty engineers and seventy lawyers. Next, participants were asked to estimate the probability that the person described was an engineer. Even though people admitted that the brief description did not offer a foolproof means of distinguishing lawyers from engineers, most tended to believe the description was of an engineer. Their assessments were relatively impervious to differences in base rates of engineers (70 percent versus 30 percent of the sample group.)
Participants do use base-rate data correctly when no other information is provided. In the absence of a personal description, people use the base rates sensibly and believe that a person picked at random from a group made up mostly of lawyers is most likely to be a lawyer. Thus, people understand the relevance of base-rate information, but tend to disregard such data when individuating data are also available.
Ignoring base rates has many unfortunate implications. … Similarly, unnecessary emotional distress is caused in the divorce process because of the failure of couples to create prenuptial agreements that facilitate the peaceful resolution of a marriage. The suggestion of a prenuptial agreement is often viewed as a sign of bad faith. However, in far too many cases, the failure to create prenuptial agreements occurs when individuals approach marriage with the false belief that the high base rate for divorce does not apply to them.
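Bazerman's engineer/lawyer result can be framed the same way. Assuming, purely for illustration, that the personality sketch is four times as likely to describe an engineer as a lawyer (the study itself did not quantify this), Bayes' rule shows how much the base rate should move the answer:

```python
def p_engineer(n_engineers, n_lawyers, likelihood_ratio=4.0):
    """Posterior probability that the description belongs to an engineer.

    likelihood_ratio is an illustrative assumption: how many times more
    likely the description is for an engineer than for a lawyer.
    """
    weighted_engineers = n_engineers * likelihood_ratio
    weighted_lawyers = n_lawyers * 1.0
    return weighted_engineers / (weighted_engineers + weighted_lawyers)

# The two base-rate conditions from the study.
print(f"70 engineers / 30 lawyers: {p_engineer(70, 30):.0%}")  # 90%
print(f"30 engineers / 70 lawyers: {p_engineer(30, 70):.0%}")  # 63%
```

Under this (hypothetical) likelihood ratio, the two groups' answers should differ by nearly 30 percentage points; participants' nearly identical answers show the base rate was ignored.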
Lastly, Sanjay Bakshi in this conversation talks about outliers and anecdotal evidence.
One of the great lessons from studying history is to do with "base rates". "Base rate" is a technical term for describing odds in terms of prior probabilities. The base rate of having a drunken-driving accident is higher than that of having an accident while sober.
When you evaluate whether smoking is good for you or not, if you look at the average experience of 1,000 smokers and compare them with a 1,000 non-smokers, you’ll see what happens.
People don’t do that. They get influenced by individual stories like a smoker who lived till he was 95. Such a smoker will force many people to ignore base rates, and to focus on his story, to fool themselves into believing that smoking can’t be all that bad for them.
Sure, there’ll be exceptions. But we need to focus on the average experience and not the exceptional ones. The metaphor I like to use here is that of a pond. You are the fisherman. If you want to catch a lot of fish, then you must go to a pond where there’s a lot of fish. You don’t want to go to fish in a pond where there’s very little fish. You may be a great fisherman, but unless you go to a pond where there’s a lot of fish, you are not going to find a lot of fish.
So one of the great lessons from studying history is to see what has really worked well and what has turned out to be a disaster – and to learn from both.
HOW TO AVOID AVAILABILITY BIAS
To overcome the effects of the availability bias, make a concerted effort to think of examples contrary to where the bias is leading you. Take a minute to think about the number of people you know who haven't been murdered, who haven't won the lottery, and who haven't been attacked by a shark. If you're like most people, you'll realize that this is a very large number. This is a helpful exercise when you find yourself becoming too anxious about a situation.
The most important lesson to be learned is to maintain a sense of perspective. We need to be vigilant and look at each new problem starting with the root cause. If we understand how our minds are tricked by outliers or vivid evidence then we can take information from the news, for example, and act accordingly.
When coming to conclusions, rely on facts and data rather than your gut.
Sources and References
 - Tversky, Amos, and Daniel Kahneman. "Judgment under Uncertainty: Heuristics and Biases." Science, 1974. Web.
 - Gigerenzer, Gerd. "Dread Risk, September 11, and Fatal Traffic Accidents." Psychological Science, 2004. Web.
 - "Time to Take Stock." Time to Take Stock - Perception Vs. Reality. Franklin Templeton Investments, 2013. Web.
 - "3 Things You Should Know About the Availability Heuristic." Farnam Street. N.p., 29 Apr. 2017. Web.
 - Klein, Jill G. "Five pitfalls in decisions about diagnosis and prescribing." BMJ : British Medical Journal. BMJ Publishing Group Ltd., 02 Apr. 2005. Web.
 - Poses, R. M., and M. Anthony. "Availability, wishful thinking, and physicians' diagnostic judgments for patients with suspected bacteremia." Medical decision making : an international journal of the Society for Medical Decision Making. U.S. National Library of Medicine, n.d. Web.
 - "Cancer Stat Facts: Stomach Cancer." National Institutes of Health, n.d. Web.
 - "National Center for Health Statistics." Centers for Disease Control and Prevention, 17 Mar. 2017. Web.
 - Perc, Matjaž. "The Matthew effect in empirical data." Journal of the Royal Society Interface. The Royal Society, 06 Sept. 2014. Web.
 - Zajonc, Robert B. "The Attitudinal Effects of Mere Exposure." Institute for Social Research Library. Research Center For Group Dynamics Institute For Social Research , Sept. 1965. Web.
 - Titchener, E.B. (1910). Textbook of psychology. New York: Macmillan.
 - Hunt, R. R., and C. A. Lamb. "What causes the isolation effect?" Journal of experimental psychology. Learning, memory, and cognition. U.S. National Library of Medicine, Nov. 2001. Web.
 - Simonsohn, Uri, Niklas Karlsson, George Loewenstein, and Dan Ariely. "The tree of experience in the forest of information: Overweighing experienced relative to observed information." Games and Economic Behavior. Science Direct, 18 Jan. 2006. Web.
- "What is Availability Bias?" Innovateus.net. N.p., n.d. Web.
- "Availability Bias." What Is Availability Bias? (Cognitive Bias). N.p., n.d. Web.
- Read, J. D. (1995), The availability heuristic in person identification: The sometimes misleading consequences of enhanced contextual information. Appl. Cognit. Psychol., 9: 91–121.
- "The Availability Bias: Why People Buy Lottery Tickets." PsyBlog. N.p., 06 Aug. 2012. Web. 08 Aug. 2017.
- Briñol, Pablo, Richard E. Petty, and Zakary L. Tormala. "The Malleable Meaning of Subjective Ease." Psychological Science. Association for Psychological Science, 1 Mar. 2006. Web. 17 July 2017.
- Tversky, A., and Kahneman, D. (1983). "Extensional versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment." Psychological Review, 90, 293-315.