The definition of effective altruism
William MacAskill proposes a definition of effective altruism (EA). I think having a definition is useful. It could allow effective altruists (and their critics) to have better, clearer conversations, and to avoid misconceptions.
In MacAskill’s quote below, I have emphasised in bold some notable features of the definition.
> As I and the Centre for Effective Altruism define it, effective altruism is **the project of using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis**.
>
> On this definition, effective altruism is **an intellectual and practical project rather than a normative claim**, in the same way that science is an intellectual and practical project rather than a body of any particular normative and empirical claims. Its aims are **welfarist, impartial, and maximising**: effective altruists aim to maximise the wellbeing of all, where (on some interpretation) everyone counts for one, and no-one for more than one. But it is **not a mere restatement of consequentialism**: it does not claim that one is always obligated to maximise the good, impartially considered, with no room for one’s personal projects; and it does not claim that one is permitted to violate side-constraints for the greater good.
>
> Effective altruism is an idea with a community built around it. That community champions certain values that aren’t part of the definition of effective altruism per se. These include serious commitment to benefiting others, with many members of the community pledging to donate at least 10% of their income to charity; scientific mindset, and willingness to change one’s mind in light of new evidence or argument; openness to many different cause-areas, such as extreme poverty, farm animal welfare, and risks of human extinction; integrity, with a strong commitment to honesty and transparency; and a collaborative spirit, with an unusual level of cooperation between people with different moral projects.

“Effective Altruism: Introduction”, Essays in Philosophy: Vol. 18, Iss. 1, Article 1, doi:10.7710/1526-0569.1580
In what follows, I quote or paraphrase extensively from a presentation given by William MacAskill at the 2017 Oxford workshop on the philosophical foundations of EA.
There are a number of common misconceptions about EA.
- Misconception #1: EA is just about poverty. This is misguided both in principle and in practice. In principle, EA is open to any cause. In practice, different EAs support different causes, including animal suffering reduction, existential risk mitigation, criminal justice reform, science and tech progress, and more.
- Misconception #2: EA is just utilitarianism or consequentialism. EA is a project, not a normative claim. Any normative claims would have to be about someone’s obligations to engage in that project. EA doesn’t require doing the most good possible with all your resources; EA doesn’t condone rights violations. Utilitarianism might entail effective altruism, but so might many other moral views.
- Misconception #3: EA neglects systemic change. EA supports systemic change in principle and in practice.
  - The Open Philanthropy Project is funding projects in immigration reform, criminal justice reform, and macroeconomic policy.
  - One of GiveWell’s main goals from the beginning, perhaps its primary goal, has been to change the cultural norms within the non-profit sector, and the standards by which non-profits are judged by donors.
  - Giving What We Can representatives have met with people in the UK government about options for improving aid effectiveness. One of its first and most popular content pages debunks myths people cite when opposing development aid. One of the first things MacAskill wrote when employed by Giving What We Can was on the appropriate use of discount rates by governments delivering health services. Rachel Glennerster, a self-identified effective altruist, is currently the chief economist of DfID.
  - Some 80,000 Hours alumni are going into politics or think-tanks, or are setting up labour mobility organisations or businesses that facilitate remittance flows to the developing world.
  - Several organisations focussed on existential risk (e.g. the Future of Humanity Institute, the Centre for the Study of Existential Risk and the Future of Life Institute) take a big interest in government policies, especially those around the regulation of new technologies, or institutions that can improve inter-state cooperation and prevent conflict.
  - Many effective altruists work on or donate to lobbying efforts to improve animal welfare regulation, for example with the Humane Society of the United States. Other activists are working for dramatic changes in how society views the moral importance of non-human animals.
- Misconception #4: EA is mainly about earning to give. According to 80,000 Hours, only ~15% of EAs should earn to give. At the most recent EA Global conference, only 10% of attendees were planning to earn to give long-term (rather than, for example, doing so temporarily as a means of building skills).
Why this definition
MacAskill considers the following desiderata for a definition of EA:
- Stated views on the definition by EA leaders
- Faithfulness to the actual practice of those in the effective altruism community
- Philosophical justification
- Ecumenism with respect to different moral views
- Practical value of public concept: how valuable it is to have the concept, so defined, discussed in the public sphere.
And he notes that the definitions proposed so far have varied in whether they:
- Build in some theory of value
- Include a prohibition against violating side-constraints
- Include a sacrifice component
- State the view as a normative claim
Within this framework, we may note that in MacAskill’s definition:
- EA is an intellectual and practical project, not a normative claim
  - According to a survey of EA leaders, a large majority believe the definition should not be a normative claim and should not include a sacrifice component.
  - The definition is compatible with practising EA being supererogatory, and with the view that there are no normative claims at all. (Both are common views among EAs.)
  - Most EAs want to get on with the project of figuring out how to maximise welfare, rather than asking how much is required of one.
  - A project is more appealing as a public concept than a normative claim.
  - Any normative claim risks being inflexible. By analogy: it’s a good thing that science was not defined as some specific empirical claims believed by Galileo or Bacon.
- This project’s aims are welfarist, impartial, and maximising. (Some values should be immediately emphasised, but aren’t part of the definition, such as: cause-neutrality, epistemic humility, good conduct (including respect for rights and co-operation), moral commitment, excitement.)
  - To be philosophically well-supported, the aim can’t just be ‘doing good’ on whatever conception of the good one happens to hold. (What about a neo-Nazi’s conception of the good?)
  - A majority of EA leaders surveyed believe the definition should include welfarism and the equal consideration of interests.
  - Compatibility with current practice:
    - All current projects within the effective altruism community are focused on promoting welfare.
    - Most members of the community state that they identify as, or are sympathetic to, utilitarianism.
  - Ecumenism: promoting welfare (within constraints) is at least permissible on almost all moral views (given the way the world is) and is very important on many moral views.
    - This definition also avoids EA becoming so diluted as to be meaningless, or collapsing into relativism.
- A prohibition against violating side-constraints is not included in the definition
  - EA leaders in the survey were evenly split on whether the definition should include respect for common-sense ethical prohibitions.
  - Science is the use of evidence and reason to discover truths. It would seem strange to include a “without killing anyone” clause in its definition, even though it’s true.
  - Defining EA as a project, not a normative claim, escapes ‘the end justifies the means’ worries.