The FTX fiasco has drawn a lot of attention to an ideology called Effective Altruism, of which FTX founder Sam Bankman-Fried was a supporter and financial backer. But what is Effective Altruism (or EA) and what are its economic implications?
The Life You Can Save
To get into EA, you have to get into its intellectual foundations, which are mostly philosophical and largely come from three major philosophers: Peter Singer, Peter Unger, and Derek Parfit.
To start off, all three are utilitarians. Utilitarianism is one of the three main flavors of ethics, alongside virtue ethics and deontology. Virtue ethics relies on personal virtue as the key to goodness, from which good actions and good outcomes flow; meanwhile, deontology and utilitarianism posit that good actions, not good vibes, make you a good person. The difference is that deontologists think good actions come from adhering to strict sets of rules, while utilitarians believe they come from maximizing welfare. So, for instance, a deontologist or a virtue ethicist wouldn’t sacrifice one person to save five others, but almost any utilitarian would, since five is more than one1. Of course, not all utilitarians agree on how to define welfare (is it the maximization of “pleasure” or the minimization of “pain”?), who defines what pleasure and pain are, at what scale, or how to aggregate them.
The two Peters, Singer and Unger, are famous utilitarians and their contribution is very straightforward. The most relevant piece of writing by either is Singer’s “Famine, Affluence, and Morality”, a 1972 paper on ethics. The question Singer asks can pretty much be boiled down to: would it be morally acceptable to not save a drowning child because you’d ruin your shoes? No? Then why are you buying shoes instead of donating money to starving children in Africa? Singer’s argument, at its core, is that you have moral duties to minimize suffering (and “fewer shoes” < “dying”) or maximize wellbeing (and “not dying” > “new shoes”), so you should give away lots of money to charity - in fact, all the money you can spare. Later, in “The Life You Can Save” (terrific book), he walks that back to something more applicable, and limits it to giving money to worthwhile causes - 10% of your income if you’re normal, and then everything above a certain, very high threshold. Unger (who I’m less familiar with) takes it further and (basically) says that not only should you give all of your money to charity, you should also steal a lot of money from other people and donate that too. Both think that you have a very strong duty to donate money (a lot of it) to charity, and both emphasize that some issues are so severe that they deserve the money more - think homeless people in your area versus starving African children.
Parfit, on the other hand, is a bit more esoteric. The starting point is that, since people change a lot over time (personality, tastes, interests, relationships, sometimes even genders), personal identity as a concept is nonsense. Therefore, there isn’t really a distinction between you now, you in the future, and other people either now or in the future - we’re all the same, across time and across space. This reinforces the same arguments for donating - if you buy shoes, you’re giving money to the (relatively well-off) future you, instead of poor African children, and that’s immoral. But since the wellbeing of specific people doesn’t matter, you have to focus on total wellbeing instead, and that leads to something called the repugnant conclusion: a sufficiently large society of people whose lives are barely worth living is always better than a smaller society of much happier people, as long as the lives in the first society have wellbeing greater than zero.
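To see the arithmetic that drives that conclusion, here is a minimal sketch under total utilitarianism - the population sizes and wellbeing scores are made up for illustration, not anything Parfit wrote down:

```python
# Total-welfare comparison behind the repugnant conclusion (all numbers invented).

def total_welfare(population: int, avg_wellbeing: float) -> float:
    """Total utilitarian score: number of people times average wellbeing per person."""
    return population * avg_wellbeing

# Society A: 10 million people with very high wellbeing.
society_a = total_welfare(10_000_000, 100.0)          # 1e9

# Society B: 10 trillion people whose lives are barely worth living.
society_b = total_welfare(10_000_000_000_000, 0.01)   # 1e11

print(society_b > society_a)  # True: on total welfare, the huge, barely-happy society "wins".
```

As long as the average wellbeing in the big society stays above zero, you can always make it "better" by adding more people.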
This is What We Believe
Effective Altruism is a very eclectic movement (read more on it here) and its philosophical foundations are more complex than what I just mentioned, but in general, EAs all share three core beliefs:
Strong Interest-Based Reasons: there are some needs people have that are so severe that intervention to fix them is morally justifiable by itself.
Cosmopolitan Impartiality: to do good, you need to focus on the objective benefits of acting, over where you’re helping and who the recipient is.
Evidence-Based Decision Making: to decide who and how to help, you need to look at concrete scientific evidence showing it’s the best use of your money.
Now, the first one might be hard to justify because not everything is fair game (read more about that here), and some people might think that not saving a drowning child because he’s far away is morally correct. Obviously all three main currents are compatible with EA: you could make a utilitarian case, a virtue-ethicist case, or even one from some weird niche school like moral situationism. But very few EAs are virtue ethicists and basically none are deontologists, and this leads to very awkward debates like “is giving money to poor people more important than whether their government is genociding some of the people we want to help”, or how much human rights matter, or any number of thorny philosophical issues2.
EAs focus, overwhelmingly, on issues that are tractable (can be solved), neglected (often ignored), and important (go read the dictionary). But of course, what each of those means is up for debate, and some issues are not neglected at all precisely because they're so important - immigration is extremely relevant, which is why it's so un-neglected. And it's not like “do good well” is an often-ignored idea: people just disagree on what “good” and “well” mean.
Because most EAs are utilitarian, and because Parfit and especially Unger are big influences on the EA movement, there’s a big piece of EA philosophy worth bringing up: earning to give. Earning to give is the belief that, if you’re not able to contribute much to important causes by working on them directly (for example, doing charity work, or researching new vaccines), then you should focus on making as much money as possible instead, by whatever means you want, and then give away a lot of it to effective charities. This is not uncontroversial, and the fact that a major EA backer (Bankman-Fried) made his money by ripping off innocent people makes it even less so. And there's a case to be made against it on the merits: if you justify anything because it helps worthwhile causes, soon enough you start justifying vicious actions.
In terms of the actual concrete causes that EAs care about, there are (roughly speaking) two major flavors: neartermists and longtermists3. Neartermist EAs focus on issues of the here and now: primarily alleviating global poverty, and improving global health more generally. Longtermist EAs follow Parfit to a larger extent, and think that maximizing humanity’s chances of existing for a really long time is orders of magnitude more important than the parochial issues of malaria and deworming (because of the Repugnant Conclusion). But EA is also a quasi-political movement with emphasis on issues like animal welfare (Singer specifically is big on animal rights).
Here and Now
What’s the agenda of neartermism? It’s actually very normal meat-and-potatoes development economics, and mostly concerned with issues such as malaria, deworming, and things like traffic projects in developing nations.
A REALLY long time ago I wrote a pretty viral post on international aid, and I’ll re-up some of that debate here. There are three types of ideas in the dev econ world:
The Big Push: a large increase in investment in [insert relevant thing here] would move a developing nation forward and allow it to exit a poverty trap.
Governance: poor countries have bad governments with bad incentives that implement bad policies, so fixing that is the key problem.
Randomistas: finding the fundamental causes of the wealth and poverty of nations is a fool’s errand, so you have to instead try to resolve technical issues.
The neartermist movement is overwhelmingly in the third camp, which is famous for its use of randomized controlled trials, or RCTs: to evaluate effectiveness, a group that receives a treatment is compared, across various outcomes, to one that doesn’t. This approach is very influential in modern dev econ because it provides very reliable answers, and it’s gotten so big that three major proponents (Esther Duflo, Abhijit Banerjee, and Michael Kremer) won the Nobel Prize in Economics in 2019. Their main point is that finding the fundamental causes of growth is very, very hard (if it wasn’t, we’d know them already), and that because the One Weird Trick To Become Switzerland isn’t going to be found anytime soon, you should focus on things like how to get teachers to actually go to school and teach. This isn’t uncontroversial, even on technical grounds: a lot of big RCT-backed policies, deworming being the classic example, are super controversial or require balancing different types of risks and benefits against each other.
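For the unfamiliar, here is a minimal sketch of what an RCT evaluation boils down to, run on simulated data (the treatment effect is invented, this is not any real study): randomize who gets the treatment, then compare average outcomes between the treated and control groups.

```python
# Bare-bones RCT evaluation on simulated data: difference in means plus a t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n = 1_000
treated = rng.binomial(1, 0.5, size=n)               # random assignment to treatment
outcome = 1.0 + 0.3 * treated + rng.normal(0, 1, n)  # outcomes with an assumed true effect of 0.3

effect = outcome[treated == 1].mean() - outcome[treated == 0].mean()
t_stat, p_value = stats.ttest_ind(outcome[treated == 1], outcome[treated == 0])

print(f"estimated effect: {effect:.2f} (true effect 0.3), p-value: {p_value:.3f}")
```

Because assignment is random, the difference in means is a credible estimate of the causal effect - that's the whole appeal.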
The main plank of the neartermist approach is to rely on quantitative policy evaluation (especially RCTs) to direct funding to the most worthwhile causes. GiveWell is paradigmatic of this approach: it is an organization that aims to find which charities are the most effective at saving lives on a per-dollar basis, and recommends donations based on that criterion.
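At its simplest, that exercise is a cost-effectiveness ranking. A toy sketch with invented charity names and figures (these are not GiveWell's actual estimates):

```python
# Toy cost-effectiveness ranking in the GiveWell spirit (all names and numbers invented).

charities = {
    "Bednets Charity":   {"cost": 1_000_000, "lives_saved": 200},
    "Deworming Charity": {"cost": 1_000_000, "lives_saved": 50},
    "Local Opera House": {"cost": 1_000_000, "lives_saved": 0.01},
}

# Rank by dollars per life saved: lower is better.
ranked = sorted(charities.items(), key=lambda kv: kv[1]["cost"] / kv[1]["lives_saved"])

for name, data in ranked:
    print(f"{name}: ${data['cost'] / data['lives_saved']:,.0f} per life saved")
```

The hard part, of course, is getting credible "lives saved" numbers in the first place - which is where the RCTs come in.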
Of course, the fact that the neartermist approach focuses on small-scale technical fixes to concrete issues of human suffering gives it the exact same type of problem as the RCT movement: it can’t solve the actual problem, which is that some countries are poor, so it heavily neglects the political issues at the core of that poverty (Acemoglu gets some things right) and runs into a lot of the critiques concerning human rights. There’s also the fact that kitchen-table economic issues per se usually don’t get a lot of attention, even though policies such as housing deregulation or open borders would have enormous material benefits - the former to rich nations and the latter to the world’s poor. But the problem is, once again, political: those policies have concentrated costs and diffuse benefits (and also racism and stuff), and the people who benefit from the status quo do not want to give it up - because they’re not EAs and place themselves, not others, first.
EA… in… SPACE!
The longtermist EA causes are way wackier and probably exposed to a lot more criticism. Following Parfit and his repugnant conclusion, they focus on a pretty concrete idea: between peace, a nuclear war that annihilates 99% of humanity, and humanity’s extinction, the first two are pretty much indistinguishable on a long enough timescale. So longtermists want to focus on preserving mankind over the long term - we could exist for another million years, give or take, so African children are peanuts compared to the trillions of people who’d live in that period.
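The scale argument here is just multiplication. A back-of-the-envelope sketch, where every input is an assumption for illustration (the post's "million years" figure plus a guessed population per century), not anyone's official estimate:

```python
# Back-of-the-envelope longtermist arithmetic (all inputs are assumptions).

years_remaining = 1_000_000           # "another million years, give or take"
people_per_century = 10_000_000_000   # assume roughly 10 billion people alive per century
centuries = years_remaining / 100

future_lives = centuries * people_per_century
print(f"{future_lives:.2e} future lives")  # 1.00e+14, i.e. about a hundred trillion

# Against that denominator, even enormous present-day suffering looks numerically small -
# which is exactly the move the longtermist argument makes.
```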
So far so good. But exactly what they think the risks to long-term human survival are is… up for debate. These major risks, called existential risks, are many, but the three main ones EAs think they can influence are artificial intelligence, pandemics (especially human-made ones), and nuclear war. Pandemic prevention is all good because it’s not that complicated (just spend a lot of money on vaccines and take precautions in labs), but the other two are… not especially clear.
My issue with nuclear war prevention per se is that it’s just a classic collective action problem. If one country has nukes, then it can boss everyone around, so a bunch of other countries will want them too. Therefore, the only rational response to a geopolitical rival having nukes is to have them, and to officially promise to fire them back if attacked, and maybe even do a first strike. And it’s irrational to give them up unilaterally, because you’re abandoning all your leverage. Therefore, the only possibility is that everyone denuclearizes at once, and good luck getting Russia, China, the US, North Korea, Iran, and Israel all to agree on anything.
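You can write that collective action problem down as a standard payoff matrix. A minimal sketch - the payoffs are invented ordinal rankings, not estimates of anything:

```python
# Nuclear arms race as a two-player game with invented ordinal payoffs.
# Higher is better for "me"; the point is that "arm" dominates "disarm" whatever the rival does.

payoffs = {
    # (my choice, rival's choice): my payoff
    ("disarm", "disarm"): 3,   # best collective outcome
    ("arm",    "disarm"): 4,   # I get to boss everyone around
    ("disarm", "arm"):    1,   # I've abandoned all my leverage
    ("arm",    "arm"):    2,   # costly standoff, but at least I have deterrence
}

for rival in ("disarm", "arm"):
    best = max(("disarm", "arm"), key=lambda mine: payoffs[(mine, rival)])
    print(f"If the rival plays {rival}, my best reply is to {best}.")

# Both lines print "arm": unilateral disarmament is never a best reply,
# which is why only simultaneous, verifiable disarmament could work.
```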
AI safety research has a very simple Hayek-Lucas case to be made against it: we just don’t know enough about AI to claim anything with certainty. Most AI- and technology-related predictions have been laughably wrong, for reasons nobody could foresee. For example, Alan Blinder claimed that a quarter of US jobs were subject to offshoring… and his claims were just patently wrong. Likewise, automation and unemployment won’t have much of a relationship, because the Federal Reserve decides whether more demand is needed to bring joblessness down. To quote Paul Krugman:
You may quarrel with the Fed chairman's judgment--you may think that he should keep the economy on a looser rein--but you can hardly dispute his power. Indeed, if you want a simple model for predicting the unemployment rate in the United States over the next few years, here it is: It will be what Greenspan wants it to be, plus or minus a random error reflecting the fact that he is not quite God.
The creation of artificial intelligence will be a vast, broad project undertaken by large groups of people separately and without any centrally motivating design - the kind of thing that is really difficult to predict, and even more difficult to control. Meanwhile, the process that generates the data we use is extremely difficult to understand - the rules concerning the creation of AI are subject to dynamic change, and given how little we actually understand AI itself, it’s very difficult to say anything meaningful about it. It’s possible that researching AI safety is very effective and saves trillions of lives, but it’s also possible it does next to nothing and saves 0.00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001 persons in total. And given that the process of AI creation is extremely difficult to predict and understand, using any money on it might have incredible unforeseen consequences, like ensuring that only the most destructive outcomes come to pass.
Philosopher Nick Bostrom calls this kind of situation “Pascal’s Mugging”: someone asks you for money to prevent an incredibly unlikely event that would destroy mankind, and you either pay up or accept a very slim chance of human extinction. It’s hard to see longtermist EAs as caring about actually substantial issues instead of being “people who talked themselves out of giving money to poor people and into giving money to software engineers.”
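The uncomfortable arithmetic behind Pascal's mugging is that an expected-value calculation lets an astronomically large payoff swallow an astronomically small probability. A toy example where every number is invented:

```python
# Pascal's mugging in one expected-value comparison (all numbers are invented).

# Option A: a donation that saves one life with near certainty.
certain = 1.0 * 1

# Option B: funding that averts extinction with an assumed probability of 1e-20,
# where extinction is assumed to cost 1e14 future lives.
speculative = 1e-20 * 1e14

print(certain, speculative)   # 1.0 vs 1e-06: here the certain donation wins...

# ...but bump the assumed stakes to 1e30 future lives and the speculative bet dominates,
# even though the probability estimate is basically made up.
print(f"{1e-20 * 1e30:.0e}")  # 1e+10 "expected lives saved"
```

Whoever controls the assumptions controls the conclusion, which is the whole problem.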
It also seems that longtermism calls for a sort of extreme conservative pessimism I'm not especially convinced about: in order for lives 10⁴⁵ years from now to be as important as current ones, there's an implicit claim that well-being won't substantially increase over time. But if it will (as it has since 1870), then it's likely that most of our current problems will be solved by then - meaning that the current poor, not future digital consciousnesses in a simulation powered by cold fusion, are the worst-off people we can help. Of course, nonidentity is an issue, but it's a controversial topic even for philosophers (if it wasn't, abortion would be considered wholly unacceptable).
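The "things will probably keep getting better" point is just compounding. A rough sketch with an assumed growth rate - illustrative, not a forecast:

```python
# Compounding illustration for "well-being has grown since 1870" (assumed rate, not data).

def growth_factor(annual_rate: float, years: int) -> float:
    return (1 + annual_rate) ** years

# Roughly 2% real growth per year over ~150 years compounds to about a 20x increase.
print(f"{growth_factor(0.02, 150):.1f}x")   # ~19.5x

# Extend the same assumption a few centuries forward and the multiple becomes enormous,
# which is why the worst-off people we can actually reach are probably today's poor.
print(f"{growth_factor(0.02, 400):.0f}x")   # ~2755x
```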
Plus, longtermism calls for an inherently conservative mindset: to preserve the future of mankind, we should act on that future directly, but not alter the present in ways that might change it. If you consider, for example, that immigration carries risks to culture (it does not), then no reform can be undertaken. Pretty much all changes, social, economic, or political, are verboten in the face of the potential disruptions to the longtermists’ post-scarcity future.
Conclusion
I think that the EA movement, accepting its philosophical underpinnings (which you don’t necessarily have to, since it’s a free world), is somewhat solid in its commitment to alleviating suffering. But I also think that it’s ignoring factors that, ironically enough, would make it very ineffectual - especially the political constraints of operating in real-world environments, the real complexities of ethical decision-making, and the innate ambiguity and incompleteness of empirical evidence on any undertaking.
Regardless, I think the core EA mission is pretty robust: making life better. To quote Derek Parfit:
One thing that greatly matters is the failure of we rich people to prevent, as we so easily could, much of the suffering and many of the early deaths of the poorest people in the world. The money that we spend on an evening’s entertainment might instead save some poor person from death, blindness, or chronic and severe pain. If we believe that, in our treatment of these poorest people, we are not acting wrongly, we are like those who believed that they were justified in having slaves.
Some of us ask how much of our wealth we rich people ought to give to these poorest people. But that question wrongly assumes that our wealth is ours to give. This wealth is legally ours. But these poorest people have much stronger moral claims to some of this wealth. We ought to transfer to these people, in ways that I mention in a note, at least ten per cent of what we earn.
Both Peter Singer’s website and the organization GiveWell have researched which organizations are most effective at helping recipients of aid, and created a list of them. So if you can, please help any of their top-rated charities - it does make a difference for people who really need the help.
Actually, there’s this guy who argued that saving the larger group just because it’s larger doesn’t give equal weight to each person’s claim to life (which is actually relevant in certain cases), so instead you should flip a coin to decide which side gets saved. This did not go over well.
Whether or not animals matter as much as humans, and whether wild animals specifically matter at all, are not widely agreed upon issues in society at large.
You can read more about the history of EA, and find out that its lead philosopher used to be a nude model and a (male) party stripper, in this New Yorker piece on him. No corroborating material has been found, sadly.
Great article. I do, however, have some significant disagreements - I'm pretty much on board with the mainstream EA orthodoxy. Beginning with your worry about nukes, I agree that nukes are a collective action problem - that's one reason why it would be a bad idea to unilaterally disarm. But we can reduce nuclear risks without doing that. Some ways to reduce nuclear risks include shrinking the nuclear stockpile, adding more safety measures, etc. A lot of past nuclear risks have come from accidents - for example, there was a time when we dropped a nuke in one of the Carolinas and 5/6ths of the safety switches malfunctioned - if the last one had, it would have nuked a bunch of the state.
On the AI stuff, I broadly agree that AI won't objectionably displace jobs. But I think that, while there's some uncertainty about AI, we have decent ways of knowing things about it. For example, my credence in AGI in the next century is quite high - based both on expert projections and on the most sophisticated report, conducted by Katja Grace, which concludes as much. We also know that, if AI is much smarter than us, it has a high chance of being dangerous, especially if we can't control it. So that makes it prudent to try to control it. (There's obviously much more to be said about AI, but I don't want my comment to be 10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 words :) ).
I'm not sure why one has to deny that things will get better to be a longtermist. One of the reasons I am a longtermist is because I think things will get better - and I want there to be a lot of people living that better life.
Infinite sets are a source of interesting paradoxes. Some of these questions have puzzled mathematicians for as long as they have been thinking about them.
"What is larger," wondered Galileo Galilei, (definitely not the first person to wonder about such things), in _Two New Sciences_, published in 1638, "the set of all positive numbers (1,2,3,4 ...) or the set of all positive squares (1,4,9,16 ...)?"
For some people the answer is obvious. The set of all squares is contained in the set of all numbers, therefore the set of all numbers must be larger. But others reason that, because every number is the root of some square, the set of all numbers is equal to the set of all squares. See 'Galileo's Paradox': https://en.wikipedia.org/wiki/Galileo%27s_paradox .
Galileo concluded that the totality of all numbers is infinite, that the number of squares is infinite, and that the number of their roots is also infinite; neither is the number of squares less than the totality of all the numbers, nor the latter greater than the former; and finally the attributes "equal," "greater," and "less" are not applicable to infinite, but only to finite, quantities.
We haven't made a lot of improvement to these conclusions since Galileo's time. The major enhancement was done by Georg Cantor, whose work concerns the 'cardinality' of sets. For finite sets, the cardinality is the number of items in the set. You just count them -- what you get is what you get. This is problematic when you know that the set is finite, but very large, as in 'you will be dead before you count off the last number'. Better hope there is a shortcut to working things out. You don't have to count out 'the set of all positive integers less than 2**10000' to find out how many there are. But the set of all primes less than the same number?
Infinite sets also have a cardinality. (This is what Cantor worked on.) A substack reply is definitely not the place to go about discussing Cantor's groundbreaking work on the cardinality of different infinite sets, but if you are interested, look up 'the cardinality of the continuum' and 'the cardinality of infinite sets'; see https://en.wikipedia.org/wiki/Cardinality_of_the_continuum (more to figure out what to start looking for). There are webpages about this to suit every level of mathematical sophistication, from the grade school student who only learned what a set was last week, to mathematical post docs working in the field.
It's fun to think about. But I digress. The bottom line is that we know a good number of properties of certain infinite sets. The smallest sort of infinite set -- and we have a proof that this is the smallest, too -- is the countable infinite set, which has a cardinality called 'aleph-null' in the jargon. The set of all natural numbers is countable. So is the set of all even numbers. Or all squares. Or all multiples of 5. There are other sorts of infinite sets -- the set of all real numbers being an example -- which aren't countable. Their cardinality isn't aleph-null. And _sets with the same cardinality have the same size_. If you know that two infinite sets have the same cardinality, then you know they have the same size, even if you don't know what it is.
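To make the pairing idea concrete, here's a quick sketch (a toy snippet, nothing rigorous) of the bijection between the naturals and the squares that gives them the same cardinality - exactly Galileo's puzzle:

```python
# Cantor-style size comparison: pairing n <-> n**2 matches the naturals with the squares,
# so the two infinite sets have the same cardinality (aleph-null).
from itertools import islice

def naturals():
    n = 1
    while True:
        yield n
        n += 1

pairs = ((n, n * n) for n in naturals())
print(list(islice(pairs, 10)))
# [(1, 1), (2, 4), (3, 9), ...]: every natural gets exactly one square, and vice versa.
```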
Which brings us to the utilitarian's favorite hobby horse -- trolley problems. Each person tied to the tracks represents an infinite set of possibilities of things that that person could do, in the future, if the train doesn't run over them. So, while we may not know what the cardinality of that set of missed choices _is_, in the sort of hand-waving, thought-experiment way we think about such things it seems reasonable enough to assume, for the purpose of a thought experiment, that all people have the same cardinality, which is in some way a measure of their utility (in the utilitarian sense).
Note: I am most definitely not saying that I can prove any of this. This is 'Cheers! Here is a beer, let us sit down and amuse ourselves thinking about math and philosophy' time. But modelling people mathematically as infinite sets of possibilities with the same cardinality seems a fair approach. And with it, things do not look very good for the Utilitarians in the philosopher's pub. When they reason that killing 5 people is 5 times as bad as killing only 1, they are reasoning in precisely the same way as the people who believe that the set of all numbers must be 5 times larger than the set of all multiples of five.
Thus, the good mathematical argument is with the virtue ethicists and the deontologists, who have been saying, all along, that the badness of murdering people by running trains over them is not something that you can calculate.