It's 2017, and you decide that it's probably time to sell your 1999 Mercury Sable Four-Door Wagon. It's a shame to see it go, though, because this thing sure was a real firecracker back in the day! We're talking fabric seats, child safety rear door locks, and a 0-to-60 time that is well under 25 seconds in optimal conditions (as long as your bike isn't in the trunk). Despite being overwhelmed by nostalgia, you muster up the strength to power through and post an ad on Craigslist. Now all that's left to do is sit back and wait for the responses to start flooding in!
After about two weeks of silence, however, you are shocked to discover that the market for an eighteen-year-old, high-mileage wagon is surprisingly anemic. "What is going on?" you wonder to yourself, perplexed.
As you rack your brain for answers, you come to terms with the fact that perhaps you've been seeing the car through rose-colored glasses. You're willing to admit that it may have a few minor issues that could potentially detract from its value. Sure, the air conditioning no longer works. And yes, your toddler did paint on the passenger-side door back in 2016. And if you're really being honest with yourself, the fact that your neighbor cut the brake lines several weeks ago after a heated dispute regarding where your dog has been going to the bathroom probably isn't working in your favor.
So with a clearer mind, you head back to Craigslist to reevaluate your posting. The original wording reads:
Looking to sell my 18 year old Mercury Sable Four-Door Wagon. It's all the quality and luxury you've come to expect from Mercury. This baby has aged like a fine wine! That being said, there is no A/C. Also, a baby has painted on it (but she's very advanced for her age). Oh, and if you're a stickler for things like stopping, this probably isn't the vehicle for you. But otherwise, it's pretty dependable. Definitely a steal for someone who doesn't get all bent out of shape about things like obeying traffic laws.
Okay, so you realize that there's a tiny chance your post might not be painting the most flattering picture of your beloved car, but what's an honest person like yourself to do? You don't want to lie, but you do want to improve your chances of selling this car. Eventually, with a little bit of creative wordplay, you craft a new ad that you're confident will attract more interest. It reads:
Selling my vintage car. This thing is incredibly hot! Brand new paint job. I promise that once you start driving this thing you won't be able to stop!
Yup, that's definitely better. A bit misleading, perhaps, but technically you haven't lied at all; you've just creatively presented the information.
In psychology, this type of creative presentation is called framing. Framing effects refer to the disparate ways people respond to identical information depending on how it is presented. Essentially, psychologists have discovered that changing an individual's preferences doesn't necessarily require supplying them with different options; often it requires only framing the existing options in different ways.
The cover photo of this article provides us with an amusing first example. Here we have two choices of frozen yogurt. One reads "80% Fat Free," while the other reads "Contains 20% Fat." Would you be more likely to choose one over the other? The answer is most likely yes, and if that is indeed the case, then you've just succumbed to the effects of framing. The content of the two cartons is identical, but the way the content has been described influences which one we are more likely to select.
The Kahneman And Tversky Classic
Some of the most famous examples of framing effects come from the work of Amos Tversky and Daniel Kahneman. Perhaps the most well-known demonstration is what has been dubbed the "Asian disease" problem (Tversky & Kahneman, 1981).
(Note: Modern critics have not been shy about vocalizing their displeasure with Kahneman and Tversky's choice to specify the disease problem as "Asian." Such critics maintain that the label unnecessarily stigmatizes an already marginalized group while adding no value to the problem at hand. While Kahneman admits that he wouldn't use the label today, he notes that the problem was originally conceptualized in the 1970s, and at that time the Asian label proved helpful because it forced people to recall the seriousness of the Asian flu epidemic of 1957.)
The problem goes like this: The U.S. is preparing for an outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed.
- Program A: 200 people will be saved.
- Program B: There is a 1/3 probability that 600 people will be saved, and a 2/3 probability that no people will be saved.
So, which program would you support? If you're like 72 percent of students polled at Stanford University and the University of British Columbia, you probably opted for Program A. However, let's take a moment to reframe the options and see if your preferences change.
- Program A: 400 people will die.
- Program B: There is a 1/3 probability that nobody will die, and a 2/3 probability that 600 people will die.
Have your preferences reversed? Chances are, they have. When presented with this set of options, respondents now tend to overwhelmingly select Program B (78 percent). But take a look at both sets of options again. You'll notice that despite your likely reversal of opinion, the content of each program remains identical in both the first and second set; only the manner in which the content is presented has been reframed.
When options are framed as gains, we become risk-averse. When options are framed as losses, we become risk-seeking — and we gamble to avoid them.
In the first set, the options are framed in a positive fashion, as gains (i.e. people who will be saved). When presented with potential gains, the majority of us tend to opt for the sure thing; we become risk-averse, and so we choose Program A, which guarantees to save 200 lives. However, in the second set, the options are framed in a negative fashion, as losses (i.e. people who will die). Humans are loss averse. When presented with potential losses, the idea of accepting a guaranteed loss is bothersome, and thus we tend to accept greater risks in an attempt to avoid it; we become risk-seeking, and so we now choose to gamble with Program B, which offers the opportunity (albeit improbable) of no loss at all.
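One way to convince yourself that the two sets really are identical is to make the expected values explicit. Here is a minimal back-of-the-envelope check, sketched in Python using only the figures from the problem itself:

```python
# Expected outcomes for the disease problem: 600 lives at stake.
# Both framings describe the same two programs; only the wording differs.

total_at_risk = 600

# Program A, gain frame: "200 people will be saved."
saved_a = 200
# Program A, loss frame: "400 people will die."
dead_a = total_at_risk - saved_a                      # 400

# Program B: 1/3 chance all 600 are saved, 2/3 chance none are.
expected_saved_b = (1 / 3) * 600 + (2 / 3) * 0        # 200.0
expected_dead_b = total_at_risk - expected_saved_b    # 400.0

print(f"Program A: {saved_a} saved, {dead_a} dead")
print(f"Program B: {expected_saved_b:.0f} saved, {expected_dead_b:.0f} dead (expected)")
```

In expectation, both programs yield 200 survivors and 400 deaths under either framing; the preference reversal is driven entirely by the reference point, not the math.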
What's perhaps most concerning about these framing effects is that their impact is not confined to non-experts. Everyone is potentially susceptible to the allure of well-crafted framing. Even public health professionals — whose job is to design and implement effective health policy as well as make critical decisions about how to handle medical concerns on a large scale — were influenced by how the Asian disease problem was framed (Kahneman, 2011).
Framing In The Real World
Harnessing our natural aversion to loss is a popular way to operationalize framing effects. As was discussed in a prior article, physicians were far less likely to recommend surgery for patients with lung cancer when the risks of surgery were framed as losses (i.e. "10 percent of patients die") than as gains (i.e. "90 percent of patients survive"; McNeil, Pauker, Sox Jr., & Tversky, 1982). Savvy businesses are also well aware of consumers' aversion to loss, and they go to extraordinary lengths to ensure their goods and services are being framed in the most effective (read: flattering) manner.
Take, for instance, credit cards. In his wonderfully informative book Misbehaving: The Making of Behavioral Economics (2016), Dr. Richard Thaler tells the story of an early battle fought by the credit card lobby — a battle that many who were unfamiliar with the psychology of consumer behavior probably viewed as frivolous, but one that in all likelihood earned (or saved, depending on how you frame it) credit card companies millions of dollars.
In the 1970s, retailers found themselves in a bit of a precarious situation. Credit card usage was increasing rapidly, but the convenience of this relatively new form of payment was confined to only one partner in the transaction: the customer. Credit card companies would routinely charge retailers a fee for the privilege of allowing their customers to pay with their plastic. Due to the rising popularity of this form of payment, retailers didn't want to ban it completely, but they did want to find a way to offset the fees they were being charged by big credit card companies. In response, many gas stations threatened to institute policies that would charge customers who chose to use credit cards a higher price than those paying with cash. Essentially, the policy would state that you could pay the regular price with cash, or the regular price plus an additional surcharge if you wanted to pay with a credit card. Understanding how critical this notion of regular price was to consumers, the credit card lobby quickly intervened.
However, rather than fighting the content of the policy, they opted to target the way in which it was being framed. They understood that if people saw the cash price as the regular price, any additional fee charged for a credit card transaction would be experienced as a loss. To circumvent this, they requested that the credit card price be referred to as the regular price, and that cash-wielding customers be entitled to a discount. In this way, paying with a credit card was no longer a loss; it was the norm, and paying with cash was a gain. Though the economic repercussions of the transactions are identical regardless of how they are framed, the psychological repercussions are most certainly not.
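A quick numerical sketch makes the equivalence concrete. The prices below are invented purely for illustration; the source doesn't give actual figures:

```python
# Hypothetical gas-station pricing, framed two ways. Numbers are
# invented for illustration; the economics are identical either way.
cash_price = 3.50
card_fee = 0.10

# Frame 1: cash is the "regular" price; cards pay a surcharge (a loss).
surcharge_frame = {"regular (cash)": cash_price,
                   "credit (+surcharge)": round(cash_price + card_fee, 2)}

# Frame 2: credit is the "regular" price; cash earns a discount (a gain).
discount_frame = {"regular (credit)": round(cash_price + card_fee, 2),
                  "cash (-discount)": cash_price}

print(surcharge_frame)   # {'regular (cash)': 3.5, 'credit (+surcharge)': 3.6}
print(discount_frame)    # {'regular (credit)': 3.6, 'cash (-discount)': 3.5}
```

Either way, cash costs 3.50 and credit costs 3.60; only the reference point labeled "regular" moves, converting a loss frame into a gain frame.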
Frequency Versus Probability
Losses and gains, however, are not the only levers for reframing options. Just as humans process losses and gains differently, they also respond differently depending on how numerical statements are framed.
Allow me to provide an example. Suppose you and your friend are each told about the destructive potential of a hurricane heading your way. You are told "this hurricane is predicted to kill 2,400 out of every 10,000 people" while your friend is told "this hurricane is predicted to kill 24% of a population." If you are each asked to evaluate the risk, which of you is likely to rate the hurricane as being more dangerous?
More than likely, you will rate the hurricane as posing a greater threat than your friend will. For you, the hurricane's predicted devastation was communicated in terms of frequency. Frequencies provide individuals with a concrete figure of occurrences within a population (e.g. 2,400 deaths). Frequencies answer the question "how many?" For your friend, the hurricane's predicted devastation was communicated in terms of probability. Probabilities provide us with what could be referred to as relative frequencies, or how likely events are to occur within a particular population, specified or unspecified (e.g. 24% of people will die). Probabilities answer the question "how likely?"
For humans, frequency framing has a more potent psychological impact than probability framing. In fact, Yamagishi (1997) demonstrated that the power of this framing mechanism prevails even when one of the risks is objectively greater than the other. He presented two groups of participants with statements about the deadly nature of a disease. Group 1 saw the statement "the disease kills 1,286 people out of every 10,000." Group 2 saw the statement "the disease kills 24.14% of the population." Clearly, the disease presented to Group 2 is the greater danger. In a population of 10,000, it would eradicate 2,414 people — almost double the number of lives the disease presented to Group 1 would end. Yet despite being the objectively greater threat, the disease shown to Group 2 was rated as less dangerous than the frequency-framed disease shown to Group 1. The title of Yamagishi's paper says it all: "When a 12.86% Mortality is More Dangerous than 24.14%: Implications for Risk Communication."
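For readers who want to verify the arithmetic, here is the conversion between the two frames, sketched using only the figures Yamagishi reported:

```python
# Yamagishi's (1997) two statements, converted to a common scale.
population = 10_000

# Group 1 (frequency frame): "kills 1,286 people out of every 10,000."
deaths_group1 = 1_286
rate_group1 = deaths_group1 / population      # 0.1286 -> 12.86%

# Group 2 (probability frame): "kills 24.14% of the population."
rate_group2 = 0.2414
deaths_group2 = rate_group2 * population      # 2,414 deaths per 10,000

print(f"Group 1: {deaths_group1:,} deaths per 10,000 = {rate_group1:.2%} mortality")
print(f"Group 2: {deaths_group2:,.0f} deaths per 10,000 = {rate_group2:.2%} mortality")
```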
The obvious question here, of course, is why? Why are we influenced more by numerical figures framed as frequencies rather than probabilities? Our first clue comes from a prior article, where we spoke about how people are rarely moved by probabilities and faceless statistics. It is for this reason that a statement such as "29 percent of all Syrians will be displaced by violent conflict" — a statement which implies that hundreds of thousands of people will be impacted — fails to elicit the raw, emotional reaction that a photo of a single, drowned Syrian child can. We are driven by emotion far more than by rationality, and for that reason appealing to statistical evidence rarely compels us to act. Statistics are not easy to visualize. Visualizing something as abstract as "29 percent" of a population is challenging, effortful. Visualizing a concrete number (let's say 1,000), on the other hand, is easy. We can imagine what 1,000 people fleeing from a land torn by chaos would look like; we can almost feel it.
The second clue regarding the potency of stating numbers as frequencies comes from our tendency to pay disproportionate attention to numerators, a phenomenon known as denominator neglect (Reyna, 2004).
What do I mean by numerators? Well, let's use shark attacks as an example. In the United States, there is an average of 16 shark attacks annually, resulting in less than one death per year (National Geographic, 2005). And how many people go to the beach each year in the United States? Well, since 2005, average annual attendance has been well over two hundred million visits (United States Lifesaving Association, 2017). So, the probability of being attacked by a shark in a given year is roughly 16/200,000,000, or 0.000008%, while the probability of being killed by a shark is roughly 0.5/200,000,000, or 0.00000025%. But when you turn on the news, and you watch in silent disbelief as the news anchor details the gruesome attack that unfolded at a beach not too far from your own, you can't help but feel like the odds aren't quite so minuscule. The vivid images aroused by the 16 shark attacks (the numerator) weigh far heavier on your mind than the banality of the 200,000,000 incident-free excursions (the denominator).
In the United States, there is an average of 16 shark attacks per year, out of over 200 million annual beach visits. Yet the vivid image of those 16 attacks weighs far heavier on the mind than the 200 million uneventful excursions.
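Those odds are easy to state but hard to feel. As a quick check of the arithmetic, here is a sketch using the same figures quoted above:

```python
# Annual shark risk for a U.S. beachgoer, using the figures above:
# 16 attacks, fewer than one death, and 200+ million beach visits per year.
annual_visits = 200_000_000
attacks_per_year = 16
deaths_per_year = 0.5     # "less than one death per year"

p_attack = attacks_per_year / annual_visits   # 8e-08
p_death = deaths_per_year / annual_visits     # 2.5e-09

print(f"P(attacked on a visit): {p_attack:.6%}")    # 0.000008%
print(f"P(killed on a visit):   {p_death:.8%}")     # 0.00000025%
```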
Denominator neglect often plays a starring role when we use what is known as the availability heuristic. The availability heuristic is a type of mental shortcut that uses immediate examples that come quickly to mind as a means of rendering judgments and generating assessments. For instance, if after a highly publicized shark attack I were to ask you how likely you are to be attacked by a shark, you would scour your mind for all you know about shark attacks. The recent attack would be quickly and easily retrieved from memory (i.e. it would be easily available) and incidents where people were not attacked by sharks would be neglected, leading you to overestimate the likelihood of being attacked. Psychologists have found that the influences exerted by denominator neglect and the availability heuristic cause people to drastically overestimate the likelihood of events like plane crashes, natural disasters, and terrorist events (Sunstein, Gilovich, Griffin, & Kahneman, 2003).
Savvy politicians leverage the public's tendency to succumb to denominator neglect when they want to advance policy. Look no further than the proposed ban on Muslim immigrants. Proponents of the ban seek to stoke fear by emphasizing the numerators. I call this the "...but what about...?" tactic. When presented with a statistic such as, over the past ten years, an American has been over thirty times more likely to be killed by a lawnmower than by a radicalized Islamic immigrant (CDC, 2014; New America, 2017), a supporter of the ban will invariably respond with some variation of "but what about [insert singular event]?" Notice that they have not disputed the statistic in question, but rather shifted attention to an emotionally arousing, statistically improbable example of a numerator. Unfortunately, this tactic often works. Just as we have a hard time accurately assessing the actual threat of a shark at the beach following a highly publicized story of a gruesome attack, we have an equally challenging time accurately assessing the actual threat posed by Islamic immigrants after being reminded of a violent event they played a part in. Our ability to process statistics is clouded and overridden by these intense emotional responses. It's easy to cultivate fear; it's challenging to cultivate statistical literacy.
(Note: If you do support the ban, though I do not agree with you, I respect that it is well within your rights to do so. I recognize that an influx of immigrants (of any kind) comes with social and economic implications, all of which are worthy of consideration and civil discourse. However, if your support is predicated on the notion that the ban "promotes the safety of the American people," I hope you're simultaneously working thirty times as hard to support the ban of lawnmowers. If your support is truly a byproduct of your unwavering concern for the safety of your fellow citizens and not, say, an underlying xenophobic sentiment cowardly masquerading as a non-partisan security measure, it's the only rational thing to do.)
Nudging With Framing
Framing is a magnificent way to operationalize a strategy known as nudging. Nudging, well-known to psychologists and behavioral economists who study choice architecture, asks "how can we increase the likelihood that people make certain decisions without explicitly telling them to do so?"
Take organ donations. In many countries, individuals are presented with an opportunity to become organ donors when they get their driver's license or complete formal government paperwork after reaching adulthood. The enrollment statistics for organ donation programs in various countries show that the percentage of individuals choosing to become organ donors is far lower in some countries than in others. Why might that be?
When posed with this question, the majority of us immediately get to work attempting to create a coherent story about the inherent differences between the two sets of countries. Our instinctive reaction is to attribute the sharp divergence to differences in cultural makeup or dispositional characteristics among the populations. "Perhaps it has something to do with the manner in which parents teach their children about altruism?" we reason. "Or maybe it has something to do with different social norms regarding how and when you should help your neighbor?" But what many of us fail to appreciate is that oftentimes the true cause of such results is not differences concerning the decision-makers themselves, but rather differences in how the decision is framed.
In the case of the organ donation discrepancy, it was as simple as whether the default option on the form was opt-in or opt-out (Johnson & Goldstein, 2003). In other words, countries where the default option was non-participation tended to have people who didn't participate, and countries where the default option was enrollment tended to have people who "decided" to become donors. The vast differences we observe regarding whether people choose to participate in a potentially life-saving program have very little to do with decision-makers weighing the costs and benefits of such programs; rather, these critical decisions are being driven by whether a box comes pre-checked or people have to check it themselves!
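To get an intuition for how a pre-checked box can swing outcomes this dramatically, consider a toy simulation. The 90 percent "keep the default" rate below is an assumption chosen for illustration, not a figure from Johnson and Goldstein's data:

```python
import random

def enrollment_rate(default_enrolled: bool, keep_default: float = 0.9,
                    n: int = 100_000, seed: int = 1) -> float:
    """Fraction enrolled when most people keep whatever the form pre-selects."""
    rng = random.Random(seed)
    enrolled = 0
    for _ in range(n):
        if rng.random() < keep_default:
            choice = default_enrolled        # cognitive miser: accept the default
        else:
            choice = rng.random() < 0.5      # actively decides, 50/50 split
        enrolled += choice
    return enrolled / n

print(f"Opt-out country (pre-enrolled): {enrollment_rate(True):.1%}")   # ~95%
print(f"Opt-in country (not enrolled):  {enrollment_rate(False):.1%}")  # ~5%
```

Identical populations with identical preferences land at roughly 95 percent versus 5 percent enrollment, purely because of which box came pre-checked.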
People can be lazy and suggestible. However, in the spirit of politeness, psychologists like to reframe this sentiment by referring to humans as "cognitive misers." By cognitive misers, we mean that humans tend to expend only the bare minimum of energy when making judgments and decisions (especially relatively trivial ones). This is because the human brain has not evolved to be accurate; it has evolved to promote survival. As an instrument optimized for survival, it must balance accuracy and efficiency. Too much emphasis on accuracy and survival is compromised, because an organism would take too long to make decisions; too much emphasis on efficiency and survival is compromised, because speed means very little if you're constantly making the wrong decisions. In short, when it comes to making decisions, evolution has crafted our brains to be capable of doing a pretty good job, most of the time.
Framing and nudging rely on this characteristic laziness in order to exert their pronounced effects. Much of the impact of these strategies is based on making certain choices more appealing due to their convenience. Similar to how something as trivial as whether or not a box is pre-checked on a form can dramatically impact organ donation, we can improve eating habits simply by making healthy foods more easily accessible in cafeterias (Hanks, Just, Smith, & Wansink, 2012), improve retirement savings simply by making automatic contributions the default (Ly, Mazar, Zhao, & Soman, 2013), and improve environmentally-conscious choices simply by making relevant information more easily accessible to consumers (Thaler & Sunstein, 2008).
Framing In Politics And Language
Policymakers on both sides of the aisle work to frame identical policies in such a manner as to nudge their constituencies to support or reject them. The estate tax is a prime example. In the United States, this tax is levied during the transfer of the estate (e.g. property, assets, etc.) of a deceased person to a living one. Opponents of the estate tax, in an effort to stir up political opposition to the policy, started referring to it as the "death tax." These types of linguistic gymnastics, although superficial in their impact on the actual content of the tax code, are highly effective propaganda tactics that can sway voters by stirring a sense of resentment that neutral words like "estate" simply cannot. If we were rational, calculating decision-makers, the use of different linguistic cues should not manipulate our likelihood of support — but we aren't, and they do.
Is the statement "Hillary lost" different from the statement "Trump won"? In a way, no. If we are willing to concede it was a two-party race (i.e. there wasn't a feasible chance a third-party candidate would win), then either statement necessarily implies the other; they are identical in terms of the state of reality they reflect. However, as should hopefully be evident by now, statements and choices that reflect identical realities do not necessarily elicit identical responses. How issues and choices are framed matters. Whether you're a politician proposing a bill, a lawyer arguing a case, or a business owner marketing a product, your presentation is just as important as your content. Why leave a decision up to someone else when you can help to make it for them?