Effective altruism has suffered a blow, but the extreme ideas surrounding it have infiltrated some of our most powerful organisations

Longtermism, transhumanism and other extreme ideas popular in Silicon Valley are infiltrating our political system

On 5 October, a month before the US presidential election, Elon Musk bounced onto the stage at a Donald Trump rally in Butler, Pennsylvania. Wearing a “Make America Great Again” hat and a T-shirt emblazoned with the slogan “Occupy Mars”, the technology mogul leapt into the air, arms outstretched and midriff exposed, giving photographers the chance to catch his marionette-style pose in a now-viral image.

Silicon Valley has long maintained an uneasy relationship with politics. Interactions between the technology capital of the world and Washington, DC have historically been frosty, limited to the occasional congressional hearing where CEOs offered robotic notes of regret to tech-bewildered politicians. During this year’s election campaigning, though, many of technology’s elite – including Musk and the venture capital billionaires Marc Andreessen and Ben Horowitz – threw their weight (and money) behind Trump’s authoritarian platform.

But there are many avenues to shape politics, some more obvious than others. Beneath the flurry of political activity, a new worldview centred on the future of technology is becoming increasingly influential. It is inspired by a confluence of techno-philosophies, including effective altruism and longtermism. Both were born in elite British universities and, supercharged by Silicon Valley billionaires, are now associated with increasingly radical and even apocalyptic visions of humanity’s future. These ideas are permeating the highest levels of government and academia worldwide. It’s time the general public learned more about them.

Taking an idea to extremes

Let’s start with the more familiar of these, effective altruism – a movement founded by two young men in the late 2000s. Australian moral philosopher Toby Ord, then a 30-year-old academic at Oxford University, had made headlines by pledging to give £1 million of his future earnings to charity. William MacAskill, a student at the time, volunteered to work with him, intrigued by his vision of using evidence and reason to establish the best way to help others. In 2009, the duo established what’s now known as the Centre for Effective Altruism (CEA) to further this approach and encourage others to join in.

Followers of effective altruism, often shortened to EA, are encouraged to think about two fundamental principles: “earn-to-give” and “expected value”. Earn-to-give encourages people to make vast sums of money in order to give them away. Expected value determines exactly which causes deserve donations. It’s easy to calculate: multiply the value of an outcome by the probability of it occurring. In poker, if you multiply the cash in the pot by your chances of winning, you’ll get an expected value. If that value exceeds the cost of the bet, the bet is worth making.
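To make the arithmetic concrete, here is a minimal sketch in Python. It is not drawn from any EA materials, and every figure in it is invented purely for illustration:

```python
# Minimal sketch of the expected-value arithmetic described above.
# All figures are invented for illustration, not real poker or charity data.

def expected_value(outcome_value: float, probability: float) -> float:
    """Expected value = value of the outcome multiplied by the probability of it occurring."""
    return outcome_value * probability

# Poker: $100 in the pot, a 30 per cent chance of winning, a $20 bet to call.
pot_ev = expected_value(outcome_value=100, probability=0.30)  # $30
print(f"Poker: expected value ${pot_ev:.0f} vs a $20 bet")    # $30 > $20, so the bet is worth making

# The same arithmetic applied to two hypothetical $1,000 donations, measured in lives saved.
bed_nets = expected_value(outcome_value=1, probability=0.25)          # modest value, solid odds
moonshot = expected_value(outcome_value=1_000_000, probability=1e-6)  # vast value, tiny odds
print(f"Bed nets: {bed_nets} expected lives; moonshot: {moonshot} expected lives")
# On paper the speculative "moonshot" wins - the same style of reasoning
# that, taken further, leads to the longtermist conclusions discussed below.
```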

The calculation becomes trickier when applied to more complex situations, so effective altruism has tended to focus on a quantifiable outcome: saving lives. For example, the community strongly promotes charities tackling malaria in Africa with mosquito nets and tablets, which are cheap, effective ways to reduce death rates.

Sceptics soon pointed out that this way of giving ignores the many complexities of aid, skimming over unintended consequences – for example, recipients of mosquito nets in Zambia began using them to catch fish, threatening fish stocks across Africa – while also ignoring quality of life.

Despite its critics, the promise of making a powerful impact on the world appealed to Silicon Valley’s wealthy, solution-focused crowd. Dustin Moskovitz, one of the founders of Facebook, became an early adopter. In 2012, his charitable organisation Good Ventures partnered with the EA-focused GiveWell, which ranked nonprofits using EA values. This collaboration led to a spin-off group called Open Philanthropy, through which Moskovitz and his wife have donated over $1 billion to causes linked to the effective altruism community.

Meanwhile, MacAskill and others continued to promote the philosophy on university campuses around the world. In 2012, he met an undergraduate physics major at MIT interested in animal welfare. MacAskill persuaded the student to embark on a high-earning career and donate money, rather than dedicate his life to a specific cause.

That student was Sam Bankman-Fried, who went on to build the FTX cryptocurrency exchange, amassing a fortune which peaked at an estimated $26.5 billion. True to his word, he donated considerable sums to EA and associated causes. But in November 2022, FTX declared bankruptcy, and Bankman-Fried was indicted on criminal charges, including wire fraud and money laundering. In March, he was sentenced to 25 years in prison and ordered to surrender $11 billion.

Bankman-Fried’s demise highlighted the pitfalls of expected value: in his case, the means that justified the ends were illegal. His fall from grace has tarnished the reputation of effective altruism, though most of the community is still not seen as particularly controversial. However, a new branch of EA, which pushes the same philosophical ideals to their logical extreme, is now gaining influence among the world’s wealthiest individuals and has become embedded in some of the biggest companies on the planet. Some critics see it as potentially more dangerous.

Apocalyptic visions

Longtermism, which grew and prospered alongside EA, states that we are ethically compelled to address the long-term issues of the human race as a critical priority. Two threats, its followers contend, could potentially wipe out the species: an artificial intelligence misaligned with human values, and an engineered pandemic. Using the EA mantra of expected value, it can be argued that addressing these concerns overrides any philanthropic endeavour helping people who are alive today. At the same time, some longtermists envision a future where trillions of people could live in a near-perfect state of well-being. It’s difficult to imagine a scenario that provides as much “value” as that.

These ideas may sound extreme, but armed with the billions of dollars of people like Moskovitz and Bankman-Fried, they have gained significant ground in academia. “There’s been very deliberate field building within academia, spending money and building research centres at Oxford, Cambridge, NYU, Berkeley, Austin [and] Harvard,” says David Thorstad, a senior research affiliate at the Global Priorities Institute, Oxford, adding that EA had targeted philosophy and enjoyed success because there were so few other investors in the field. “EA started with the idea of doing good effectively, a very conventional place,” Thorstad says. Then “the longtermists came in … [and] gained a lot of power and control.”

In an emailed statement, the Centre for Effective Altruism told me that longtermism and EA are best viewed as separate philosophies. Some effective altruists do not subscribe to the longtermist view, and others do so to differing extents. But the two are undeniably closely linked. MacAskill and Ord, the two EA founders, have published books endorsing the longtermist view, while EA-focused donors like Open Philanthropy have poured money into academic organisations promoting longtermist agendas.

That influence has now spread from academia into private companies and government. Thus far, this has mostly been evident in work around AI. Most longtermists believe the invention of artificial general intelligence (AGI), where an AI’s capabilities would match or exceed those of a human, will be pivotal to the future of the species, but they see the outcome as binary. In their view, AGI will either usher in a utopian age for humanity or it will destroy us entirely. For that reason, they do not demand that AI research be abandoned altogether, but some call for a pause in development. Mega-valued companies such as OpenAI ($80 billion), x.AI ($24 billion), and Anthropic ($18.4 billion), which all have strong ties to the EA and longtermist movements, believe the key to ensuring the safe evolution of the technology is – conveniently – that they undertake the development themselves.

Eugenics and transhumanism

Some experts even see these ideas as part of a bundle of related philosophies they’ve called TESCREAL. This term encompasses transhumanism (humans should augment themselves via technology), extropianism (we are destined to become immortal), singularitarianism (the belief that an “intelligence explosion” will see AI rapidly surpass human intelligence), cosmism (humans should aim for digital immortality and spread throughout the universe), rationalism, effective altruism and longtermism. Rationalism in this case refers to a worldview that emerged in the late 2000s from LessWrong, a community blog concerned with issues like AI and effective altruism. This type of rationalism, sometimes called “applied rationality”, warns against the dangers of cognitive bias and aims to apply science-based thinking to complex fields and personal decision-making.

The term TESCREAL was coined in a paper published in April 2024 by philosopher Emile P. Torres, a former longtermist, and renowned computer scientist Timnit Gebru. The paper not only links these viewpoints together, but claims that they emerged in the order of the acronym. “These ideologies, which are direct descendants of first-wave eugenics, emerged in roughly this order, and many were shaped or founded by the same individuals,” the researchers wrote.

There do appear to be links between the TESCREAL philosophies. Nick Bostrom, a well-known effective altruist and founder of the Future of Humanity Institute at Oxford University, is also considered by some to be the father of longtermism. His book Superintelligence examines the existential risk posed by AI, and he has consistently been one of the loudest voices claiming that misaligned AI could kill everyone on the planet. The Swedish philosopher also co-founded the World Transhumanist Association in 1998 to promote the belief that humanity should use technology to augment our abilities, extend our lives and merge with tech to transcend the limitations of our species.

Elon Musk is another notable transhumanist. His neurotechnology venture, Neuralink, aims to enable direct brain-computer interfacing through implantable chips and has already begun testing in humans. Musk has also aligned himself with longtermist beliefs. In 2022, MacAskill published What We Owe the Future, a best-selling book laying out the longtermist vision in detail, to widespread mainstream attention. Stephen Fry described the book as “a miracle”; MacAskill appeared on The Daily Show with Trevor Noah; and Musk described it as “a close match to my philosophy”.

Sam Altman, a co-founder of OpenAI, has also expressed support for and given money to projects that fall within the TESCREAL bundle, although he has sought to distance himself from some of the ideas on the spectrum. For example, Altman has invested in a company seeking to extend human life by 10 years. He says that “augmenting humans is a very good thing to do”, but also that he is not a transhumanist – which might seem contradictory.

The claim that all TESCREAL beliefs are rooted in eugenics is more worrying – and also more difficult to prove. Historically, there has been some connection. Julian Huxley (brother of the author Aldous), who was the president of the British Eugenics Society between 1959 and 1962, first popularised the ideas of transhumanism in the 1950s. To improve the abilities of humans, some transhumanists have advocated selective breeding, and more recently the use of technologies such as gene editing. But none of the major figures in effective altruism or longtermism today would openly claim to be a eugenicist.

That said, there is an obsession with IQ and intelligence among the community that suggests that some are at least flirting with eugenicist ideas. In 2023, a former EA member reported that the Centre for Effective Altruism had tested a new way of measuring the value of people using a metric called Potential Expected Long-Term Instrumental Value (PELTIV). People with IQs of less than 120 would have PELTIV points deducted, while those working for EA organisations or on AI would rank higher.

Meanwhile, some have accused the community of having ties to scientific racism – although these accusations are highly speculative. Just a month after Bankman-Fried’s arrest, Torres unearthed an email from 1996 in which Bostrom had written “blacks are more stupid than whites”, adding, “I like that sentence and think it’s true.” Bostrom apologised, but the statement caused a scandal. Oxford shut down his Future of Humanity Institute in April this year, a decision it said was based on a review of “the best structures for conducting our academic research”.

Recruiting the young, gaining power

Following successive scandals, and given the extreme beliefs of some of their adherents, why do the effective altruism and longtermist movements remain so influential? Some believe cult-like elements contribute to their success. “[Supporters] have their own language; they’re encouraged to only take career advice from their organisations, to only give to charities that are approved by EA organisations,” Torres says. “EA goes to college campuses to recruit young people to steer them into jobs and earn-to-give career paths, to make money to donate back at least 10 per cent to the EA community and to further build, i.e. evangelise.”

Torres believes that some of these ideas give the ultra-rich an excuse to act selfishly. “I think the longtermist worldview naturally appeals to billionaires. Because it says not only are you excused from caring about global poverty, but you’re morally a better person for ignoring it and focusing on the trillions and trillions and trillions of future people,” Torres says. The philosophy also has the potential to increase inequality, and take vital decision-making away from democratically elected governments, putting it instead into the hands of mega-rich philanthropists.

In the United States, EA has already moved from Silicon Valley to Washington, DC, where followers have placed themselves in positions of power in federal agencies and major think-tanks. For example, the RAND Corporation received $15 million in grants from Open Philanthropy last year and is led by CEO Jason Matheny, an effective altruist. The policy think-tank is influential and played a key role in drafting an executive order on AI signed by Joe Biden in 2023. Meanwhile, Open Philanthropy funded the salaries of more than a dozen AI fellows in senior political positions the same year, flooding Washington in what one biosecurity researcher told Politico was an “epic infiltration”. Given that Donald Trump regained the presidency this year with the backing of deep-pocketed, TESCREAL-linked technology moguls, that influence may stand to grow considerably.

Although effective altruism was born in Oxford, its popularity in Silicon Valley is no surprise. EA, longtermism and other beliefs of the TESCREAL bundle all take complex and unpredictable possibilities and attempt to distil them down to binary outcomes. Much is lost in the process. But with the backing of influential institutions and some of the wealthiest people on the planet, this embryonic branch of philosophy may well have a significant say in the planet’s future.

This article is from New Humanist’s winter 2024 issue.