Over the 2010s, and briefly around COVID, the idea of a “nudge” was really popular. The concept emerged in the 2008 book Nudge by economist (and later Nobel Laureate) Richard Thaler and law professor Cass Sunstein, and turned into a veritable sensation in government, politics, and academia. The core premise: use marketing to trick people (who are stupid) into not doing bad things. Is nudging a good idea?
N-U-D-G-E-T-O-G-O
Imagine you’re a lunch lady at your local school, and, being a buzzkill, you want kids to eat more vegetables. What can you do? Well, you can tell them to, or make them pay more for the unhealthy options, or ban enjoyable lunches outright. But another option, if you’re a considerate buzzkill, is to change how and where you place each item, so that kids choose differently without giving up the possibility of choice.
This kind of solution is called a nudge. The book Nudge talks about something called “choice architecture”, which sounds a tad totalitarian but amounts to “designing how people choose things”. If you’re laying out the food in the cafeteria, writing a menu, or setting up a website, you are deciding how people make decisions. IKEA designs all its stores so it’s really easy to get lost in them, to make you buy more things. A nudge is simply rearranging the choice architecture so that the outcomes preferred by the “choice architect” are favored, but not to the exclusion of others.
The biggest nudge success story is, according to the book, organ donation: dead people are weirdly attached to their organs, but living people broadly support donating their organs after they die. The problem is that most countries have a system where, to become an organ donor, you have to do a lot of lengthy paperwork, and because people are lazy, distracted, and prioritize immediate things, they never get around to signing up even if they want to. To get around the opt-in problem, nudgers (that sounds wrong) propose making donation the default, so that people have to opt out instead. In principle, that should make people donate their organs more often, while still letting anyone who really doesn’t want to donate make that decision over their own body.
But why would not having to sign a form change what people do? Well, there’s a branch of economics called behavioral economics, of which Thaler is a leading light (and that’s why he won his Nobel), which examines how people actually make decisions. Normally, economics assumes people choose things by solving complicated mathematical problems1 and, because that’s been considered not a great model for some 50-plus years, there has been research in the opposite direction. Behavioral economists, unlike most economists, rely on experiments to draw their conclusions: typically, they make up some toy scenario and have people make a decision in it. So, for instance, they came up with a concept called “loss aversion”: losing X amount of money feels worse than gaining that same X feels good (sidenote: this is distinct from each additional dollar being worth less than the previous one, which isn’t a new insight, it’s literally just diminishing marginal utility).
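Loss aversion can be made concrete with the value function from Tversky and Kahneman’s prospect theory. The sketch below uses their 1992 parameter estimates; the function and variable names are mine, not from any source discussed here.

```python
def value(x, alpha=0.88, lam=2.25):
    """Subjective value of gaining/losing x dollars relative to a reference point.

    alpha < 1 gives diminishing sensitivity in both directions;
    lam > 1 is the loss-aversion factor that makes losses loom larger.
    """
    if x >= 0:
        return x ** alpha           # gains: concave
    return -lam * (-x) ** alpha     # losses: same curve, scaled up by lam

gain = value(100)    # how good winning $100 feels
loss = value(-100)   # how bad losing $100 feels
# By construction, the loss looms 2.25x larger than the equal-sized gain:
print(abs(loss) / gain)
```

The point of the separate `lam` parameter is exactly the sidenote above: the curvature (`alpha`) is just diminishing marginal utility, while loss aversion is the extra kink at zero.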
I’m not super into behavioral economics, but you can read Daniel Kahneman’s Thinking, Fast and Slow for some additional insights into the field. Kahneman, another behavioral econ Nobel Laureate, tries to find out how people make decisions: he notes that humans have an intuitive system, which makes decisions such as “should I duck if a ball is flying at my face”, and a more analytical system, which solves problems such as “which 401k plan should I choose”. Because the second system takes up a lot of energy, the brain tends to take shortcuts, which makes people bad at really complicated decisions and biases them in certain predictable ways that can be experimentally found and accounted for.
Nudges play off of this: people get really tired of filling out forms or looking into complicated things, so they tend to avoid them, which means they make bad choices simply to dodge a lot of really boring work. For instance, choosing a retirement plan is usually extremely tedious, so people put it off for too long; enrolling them in some default plan, which they would have to go out of their way to opt out of, avoids that bad scenario.
The whole ideological project behind nudging is something called Libertarian Paternalism, where people are given as much latitude over their lives as they want, but certain choices are (dis)incentivized by the choice architecture to achieve socially positive ends. So doing good things should be easy, and doing bad things should be hard, but doing both good and bad things should be allowed. Importantly, you don’t really want to change the choice itself (for example, by taxing something), but rather change the way the choice is made, so that better choices result from the same information, preferences, and incentives.
Before moving on, there’s an issue with the book that makes discussing it fairly annoying: the term “nudge” gets progressively stretched out to mean “any small solution to a problem”. A lot of examples are things such as marketing campaigns, which, while potentially effective, don’t really count as tinkering with the choice architecture - they’re just trying to make a choice more popular. Others, like fines, aren’t nudges either. The weirder ones are letters incentivizing people to, say, pay their taxes, which might be nudges in a sense, but when the letter says “do your taxes or you’ll go to prison”, it’s not really a nudge. Another example is a “dollar a day” program for teenage girls who don’t get pregnant, except that’s not a nudge, it’s a conditional transfer - it changes the actual incentives, not just the choice architecture. And it doesn’t even work, by the way. And it’s not even “small solutions” all the time - the book proposes solving the political fight around gay marriage by abolishing the institution of marriage and replacing it with a feminist state-run alternative as a compromise with social conservatives.
I don’t feel like it would be productive to try to go through each chapter and debunk it specifically like with Freakonomics or Why Nations Fail, so instead I’ll link to a very good podcast about the contents of the book empirically and the political project.
Let a thousand nudges blossom
On September 12, 2024, a federal judge dismissed a defamation lawsuit by Harvard Business School professor Francesca Gino against Harvard University and the blog Data Colada. The essence of the lawsuit was that both a post by Data Colada and a subsequent report by Harvard found that Gino had committed serious academic misconduct - in particular, that data in her academic work was forged, tampered with, or duplicated. Gino categorically denied the allegations, blamed her research assistants, and sued both Data Colada and her employer for defamation.
This followed a long history of allegations of academic misconduct against behavioral scientists (a lot of them in business schools), particularly against Gino herself and her mentor Dan Ariely. In both cases, papers about (hilariously enough) dishonesty were found to have various, well, dishonest decisions - rows of data moved, fabricated data points, or various other dishonest practices.
I don’t think this is a specific indictment of the behavioral science profession (which largely condemned Gino and sees her lawsuit as a lazy attempt at silencing her critics), or that it’s especially relevant to the nudge concept, but it still held up this post for like an entire year, because I think it’s telling about the limitations of this approach to scientific inquiry.
Gino and Ariely’s papers are, to be fair, extremely banal - answering dumb and pointless questions like “does wearing counterfeit designer sunglasses make you lie on questionnaires” or “does signing an honesty pledge on forms change your responses to those forms”. It’s basically all in this tweet: this is all small potatoes, mostly driven by the imperatives of academic life, so the core question ends up being: what kind of questions can behavioral science even answer? Since it generally focuses on nudging people into or out of some behavior, the question has to shift into “do nudges actually work, the definition of nudge notwithstanding?” Well, kinda: the ones that work work, and the ones that don’t, don’t. Wow! Earth-shattering discovery.
Getting people to act differently around food gets them to waste less of it. Reminders for people to show up for welfare appointments, sign up for health insurance, show up to court dates, or vote tend to work. Behavioral interventions to make paying traffic fines easier or to reduce stigma in reproductive health clinics work too. Making things like Medicaid (i.e. health insurance for poor people) opt-in, rather than opt-out, does reduce re-enrollment. Administrative burdens are actually burdensome, and have negative consequences. Fundamentally, making it easier for people to do good things that they might want to do is a good idea, and it’s good to take how people make decisions into account when designing decision-making systems.
Other nudges… don’t work so much. Nudges of any kind to get people vaccinated against COVID-19 just did not work. Framing certain things as losses instead of gains (a classic behavioral topic) doesn’t tend to work. Nudging people to do environmentally conscious things doesn’t work either. When you try to get people to take COVID precautions using nudges, it only works in people not at risk of dying (younger people), whereas at-risk populations don’t see an impact. Nudging people to behave more honestly only works when they feel like there will be social repercussions. So fundamentally it seems that rearranging menu items can’t really change people’s preferences - it can get them to decide on things that they don’t especially care about (the weird loss aversion experiments, COVID precautions for those not at risk, etc) but not things they have opinions on (vaccines, the environment). Weirdly, honesty is all over the place too.
You can also look at meta-analyses and other aggregations of studies. The most favorable, by Mertens et al., found medium-sized effects for nudges in the aggregate, with notable disparities: food is the category where nudges are most effective, health and finance are hit or miss, and the environment is generally a flop. Additionally, interventions that remind people to do something are more effective, changing the decision structure can be effective, and reminders of social desirability are basically pointless. A Randomized Controlled Trial from Obama’s Behavioral Insights Unit (basically the Nudge Navy SEALs) found that nudges can be effective, with some caveats, but have relatively small effects.
But wait - the Randomized Controlled Trials (more on this here) find that nudges can be effective but not always, and that the effects are small, whilst the meta-analysis of the published nudge literature finds that they’re frequently quite effective. Something doesn’t smell right - it’s publication bias. Basically, government nudge units publish everything they do (mostly because they get told to), whilst academics have strong incentives to only publish positive, statistically significant results. This means that nudges evaluated by governments will be evaluated somewhat fairly, whilst nudges evaluated by academia will be overblown (even when the underlying data is real). This goes so far that the Mertens et al. meta-analysis finds no effect of nudging after accounting for publication bias, and that there are serious concerns about the representativeness of published academic nudge studies.
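The mechanism is easy to see in a toy simulation (my own illustration, not taken from any of the cited studies): run many small trials of a nudge with a near-zero true effect, then compare the average across all trials with the average across only the trials that cleared a significance-style bar. All the numbers here (effect size, noise, the filter) are made up for illustration.

```python
import random
import statistics

random.seed(0)
TRUE_EFFECT = 0.02   # hypothetical tiny real effect of the nudge
NOISE = 0.15         # sampling noise in each small, underpowered trial

# Each trial's estimated effect = truth + noise
trials = [random.gauss(TRUE_EFFECT, NOISE) for _ in range(1000)]

# A government nudge unit reports every trial it runs:
all_mean = statistics.mean(trials)

# Academia mostly publishes estimates that look "significant" and positive
# (a crude z-test-style filter, assuming the noise level is known):
published = [t for t in trials if t > 1.96 * NOISE]
pub_mean = statistics.mean(published)

print(f"mean of all trials:       {all_mean:.3f}")   # close to the tiny truth
print(f"mean of published trials: {pub_mean:.3f}")   # wildly inflated
```

The filter guarantees every published estimate is large, so the published literature looks like nudges work great even when the complete record says they barely do anything - which is the gap between the nudge-unit RCTs and the academic meta-analyses.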
This spurred quite a bit of debate about whether nudging “works”, and the policy approach notwithstanding, I fundamentally don’t think this is a smart way to guide a research agenda. A lot of this stuff is just hyped up by and for clever chattering class types - it’s Freakonomics with experiments. Finding that nudging people to get COVID vaccines doesn’t work because their minds are made up, but that people will procrastinate on their appointments unless reminded to, is important stuff. For example, exploiting biases to improve health outcomes may be useful (or not, as per Mertens et al.), and failed behavioral studies can give us clear boundaries on how and when nudging works. But the problem is that quibbling around with forms is just not very interesting - especially when it comes to applications outside of academia. You also need to contend with the fact that people will optimize their behavior if they believe certain behaviors are optimal.
In this sense, the narrow view of rationality leads to overly deterministic predictions about irrationality, which in turn lead to overly broad claims about what nudging can achieve. Rationality in economics is less a description and more a property of how people act. If you ignore this, you run the risk of reducing optimizing behaviors in more complex games to “logical errors”, or of extending behavioral biases into weird places. Fundamentally, you need some sort of underlying theory of human behavior, and you need to make values-based judgments about how people will want to make certain decisions. I used to be fairly skeptical of behavioral economics as a whole, but I will admit that there seem to be clear, persistent biases that make it possible to improve outcomes by toying around with how decisions are made - so that policy problems can be effectively addressed.
The slow nudging of hard planks
So, how much can tinkering with forms and websites accomplish? It appears that you can nudge people to do things they want to do but won’t do, but can’t nudge them to do things they don’t want to do. The problem, as I’ve said above, is that sometimes you want people to (not) do things they do(n’t) want to do. So can nudging guide public policy?
Kinda, once again. For example, there are a lot of “behavioral traps” in why people don’t report domestic abuse, so you can try to find them and address them to increase reporting rates. A bunch of countries built “nudge units” (basically teams that design nudges), and they had some pretty mixed results, mainly because their ideas were generally fairly stupid. The squirrelly definition of a nudge, and the strange scope of problems to be solved, make evaluation quite hard. Plus, the nudge research agenda being fairly limited means its impacts on public policy are fairly limited: same as RCTs, there are problems with what issues it can address, how it can address them, and whether the solutions scale up or not.
Let’s take the example of organ donations: Nudge proposes that the problem is that some countries have opt-in systems (where people need to sign up to become organ donors), so donations can be increased by switching to opt-out (where people need to sign out of being donors). This would mean that an opt-in country like the US could raise its organ donation rates to the levels of opt-out countries like global leader Spain. Except… there are significant heterogeneities within both opt-out and opt-in countries. If you look at countries that switched, you find no conclusive evidence that switching to opt-out increases organ donation rates, mainly because the number of cases in which nobody has a strong opinion is vanishingly small. In fact, Spain (which is both an opt-out pioneer and the global leader in donations) owes its results to a very thorough and robust system for ensuring maximum donations.
So the focus on opt-in versus opt-out feels a tad disingenuous. There’s this book called Men Are From Mars, Women Are From Venus that says that the reason so many couples fight is that women ask “could you do chore X” instead of “would you do chore X”. And the Nudge Ideology is basically that, but for policy problems: rather than think through the actual systems of incentives and decision-making involved, you just have to do this one weird trick to make everything instantly better.
The problem plaguing the Nudge Mindset is, basically, a confusion between what problems can be solved at the individual level and what problems have to be solved at a higher, collective level. Why this happens is a whole different thing, but many social problems are clear-cut coordination failures. For example, many companies have used nudges to make cancelling a subscription really difficult - so the question is, are individual consumers powerful enough to force them to change course? No. You need some kind of coordination mechanism to exert pressure - for example, government regulation, or in the case of organ donations, even a market.
The clearest example is climate change. As stated above, it is basically impossible to get people to be “nudged” into being environmentally conscious - mainly because the issue is that people want to do the “bad” things. The ethics of using nudging to make environmental progress are a bit fraught (maybe you don’t want to just trick people into doing the right thing) but fundamentally it just doesn’t work, at least not the way that combinations of taxes on polluting things and subsidies on green things work. The problem at hand, at its core, is that people are very consciously trading off “greenery” for cheaper, easier, or simply more comfortable lifestyle choices.
This gets to a problem with the “political view” of Nudgism: it’s just not true that you can solve major problems this way - you can’t One Simple Trick them away. The reason this framing goes astray is, I think, that many of these major problems require people to have different sets of incentives and preferences, not just a different way to make their choices given the incentives and preferences they already have. While individuals may optimize on medical advice, it’s not really possible to just Nudge away the opioid epidemic, for instance. The limits of human rationality do play a role here - but the role they play isn’t “make the forms to get opioids really complicated”, it’s “get the government to rein in doctors who are overprescribing at the behest of pharmaceutical companies”.
At its core, the Libertarian Paternalism mindset relies a tad too heavily on individual action for collective problems. For one extremely banal example, take this article by Richard Thaler on subscriptions that are too hard to cancel - Thaler explicitly brings up government policy, but his final conclusion is that individual consumers have to utilize the free market to persuade companies to change course. That’s not very smart.
Conclusion
All in all, it seems that you can safely nudge people into doing things they want to do but for behavioral reasons don’t do, and into doing things that they don’t have much of an opinion about, but not into doing things they consciously don’t want to do.
This leads to a lot of the confusion for Nudgism: correct claims about the boundaries of human rationality get overextended into humans being barely rational, and thus being easily tricked. Exploiting people’s biases to secure win-win outcomes is a good idea, but it is not smart to assume that these biases run so deep that any problem can be solved by exploiting them, or to ignore that perhaps people are responding to incentives and preferences rather than following their biases.
Sometimes, rather than a nudge, you need a push, or even a shove.
This is an oversimplification, bla bla bla, read my other posts about this - “rationality” is used as shorthand for the behavior the average person will take if they respond to costs and benefits in very broad ways. It’s also experimentally fairly close to reality.
I’m reminded of one political consequence of nudge theory: Thaler’s associate and nudge theory exponent Cass Sunstein was hired by Obama to design economic policy during the Recession. There’s little documentation online on how much influence Sunstein had on the big stimulus package itself, but he did head an influential bureau called the Office of Information and Regulatory Affairs.
What is known (primarily from Michael Grunwald’s book “The New New Deal”) is that nudge theory was used to shape the design of a key payroll tax credit. Rather than just send out checks as Bush the Younger did (and as Trump and Biden would later do), the credit would automatically appear in workers’ bank balances, without being a visible line item in some cases. This sudden bump would, in theory, get people spending. Of course, not many people knew about it, particularly the unbanked, hence why Trump and Biden staked their reputations on checks. It might even have been part of why Obama’s party lost its lower house majority in 2010.
Supporting documentation: https://www.nytimes.com/2010/10/19/us/politics/19taxes.html
https://democracyjournal.org/arguments/keep-it-simple-and-take-credit/
https://prospect.org/day-one-agenda/oira-reclaiming-the-deep-state/
The fact that N-U-D-G-E-T-O-G-O doesn’t even scan to the song properly makes some hate it more.