Stories At the Well blog
22.08.2025

Innovating the End of the World

Machine dreams hold a special vertigo.

Text and image: Aapo Nikkanen

Aapo Nikkanen is an artist, researcher, and hypnotherapist who uses hypnosis as a tool and a medium for performances and photography. His work addresses the development of new technologies, their asymmetrical concentration of power, and their political, ecological, and psychological impacts. His latest hypnotic performance – Joy Machines – premieres at the New Performance Turku Biennale, 3–7.9.2025.

Émile P. Torres is a philosopher, intellectual historian, and activist. Their research focuses on eschatology, existential risk, and human extinction. Torres hosts the podcast Dystopia Now with comedian Kate Willett, and writes both scholarly papers and popular media articles.

While mainstream coverage touts the promises and risks of AI, the ideologies behind the new technologies might pose a greater existential risk than the machines ever could. As part of his artistic enquiry, Aapo Nikkanen talks with philosopher Émile P. Torres about tech’s right turn, and explores how a lake in Finland might soon be playing a role in an armed conflict.

For the past 15 years, Émile P. Torres (they/them) has been studying existential risks. Today, they are one of the most prolific voices researching and criticising the controversial ideologies driving Big Tech. Together with renowned computer scientist Timnit Gebru, they coined TESCREAL – Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, Longtermism – a bundle of techno-futuristic isms that have become deeply influential in Silicon Valley.
The ideological roots of tech-fascism

At the heart of this mindset is a libertarian version of transhumanism, a techno-utopian belief that humanity can – and should – be radically enhanced through science and technology. “It’s a techno-utopian vision of the future whereby we radically re-engineer ourselves to become posthuman. It is a version of eugenics. This is not debated,” Torres explains. “The idea of transhumanism was initially developed by leading eugenicists such as Julian Huxley, J.B.S. Haldane, and J.D. Bernal.”

Transhumanism first emerged in the early 20th century, but only really took off in the 1990s, when it was redefined by a new wave of futurists centered around Nick Bostrom, Max More, and thinkers associated with the Extropy Institute and Oxford’s Future of Humanity Institute. Transhumanism positioned itself as a secular faith for the Silicon Valley elite, but as venture capital poured in, the rhetoric began to darken.

As Torres and others have argued, the underlying logic of the TESCREAL ideologies recalls fascist structures: rigid hierarchies, techno-solutionism, control over futures, and a belief in a superior posthuman class. “Transhumanism is the ultimate form of eugenics,” Torres cautions. “It’s not just about one group of humans dominating another. It’s about creating an entirely new species that can dominate us all.”

As the movement grew, many of its ideological fathers became millionaires or billionaires – and openly embraced racist views. Nick Bostrom’s research unit at Oxford was shut down because of his racist slurs, while Curtis Yarvin – another influential figure, who has the ear of the likes of JD Vance, Peter Thiel, and Elon Musk – has developed a neoreactionary political philosophy that calls for the end of democracy in favor of tech-backed authoritarianism. What began as a niche techno-philosophical movement has grown into a network of power in which ideology, capital, and politics intertwine.
The system – often described as tech-fascism – runs on the money of tech billionaires such as Peter Thiel and Elon Musk, who bankroll both the futurists shaping the ideas and reactionary politicians such as JD Vance and Donald Trump.

The ethics of extinction

Central to both fascist and neoliberal thought is the appeal to the “greater good”: harm is reframed as necessary or even noble. In tech-fascism, the greater good is redefined through longtermism – the “L” in TESCREAL – which argues that the welfare of future humans counts as much as, or more than, the lives of people today, and that our efforts should focus on averting extinction-level threats.

Torres’ impression is that longtermism was probably not designed intentionally to justify the accumulation of wealth and the concentration of power, but that it is undeniably used that way. “Longtermism in particular tells tech billionaires not only that they’re morally excused from helping the global poor, but that they’re actually morally better persons for focusing on the far future.”

This moral logic fits neatly into the pursuit of Artificial General Intelligence (AGI) – an AI with human-like cognitive abilities – sold as the key to solving planetary problems and creating abundance, with any harm today dismissed as insignificant when compared to the supposed future gains. In reality, nearly all experts deem AGI unlikely to happen anytime soon, and many don’t believe it to be possible even in principle.

Meanwhile, the AI boom has generated immense wealth for a few while imposing significant harm on many. It is now inseparably tied to the climate crisis, neocolonialism, and authoritarian politics. UN research found that AI has fuelled a 150% rise in Big Tech’s carbon emissions – part of a wider toll that includes worker exploitation in the Global South, colossal freshwater use, IP theft, and algorithmic bias.
Data centres as instruments of power

We are encouraged to think of AI as a harmless tool, but AI is not one thing. It can be useful – I used an AI assistant to proofread this piece – but it is also a political and ethical object that carries, and enacts, oppression. Its most visible form is the data centre, sold as a neutral “high-tech investment”, while in practice it functions as the hardware of an ideology that turns public resources into centralised power. The hard question is whether we can separate the tool from the system when technocrats own the infrastructure, the platforms, and the profits.

In my hometown of Kirkkonummi, Finland, a Microsoft data centre the size of 70 football fields was recently approved. The decision was followed by outrage when €660m in tax breaks for data centres and mines – most of it for data centres – surfaced in the same year the government cut €3bn from health, social care, and culture. The tax breaks were said to have been dropped, but have since been quietly reintroduced into the bill.

Construction has already begun, although many questions linger. One of them is freshwater use, which in Kirkkonummi draws on Lake Meiko, an ecologically rare and valuable nature reserve. Microsoft says the site will use 8.6 million litres a year, a figure that ignores hotter summers and leans on not-yet-deployed technology. In the Netherlands, the company projected 12–20 million litres and used 84 million. The Finnish facility is planned to be a third larger.

The question, however, is not only how many resources are consumed, but what they enable. Today’s data centres are not passive utilities; they are instruments of power, translating subsidies and resources into the service of an ideology. In August we learned that Microsoft’s Dutch servers were running Israel’s surveillance network in Palestine, technology directly deployed in the Gaza genocide. It’s a disturbing thought: water flowing from our lake to feed a machine of oppression half a world away.
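A back-of-envelope calculation shows why the stated water figure invites scepticism. The sketch below uses only the numbers given above; the assumption that water use scales linearly with facility size is mine, not Microsoft's or a published estimate:

```python
# Figures from the text, in litres per year.
DUTCH_PROJECTED_LOW = 12_000_000   # Microsoft's Netherlands projection (low end)
DUTCH_PROJECTED_HIGH = 20_000_000  # Microsoft's Netherlands projection (high end)
DUTCH_ACTUAL = 84_000_000          # what the Dutch facility actually used
FINNISH_CLAIM = 8_600_000          # Microsoft's claim for the Kirkkonummi site

# Assumption (mine): water use scales linearly with facility size,
# and "a third larger" means 4/3 of the Dutch facility.
SIZE_FACTOR = 4 / 3

scaled_estimate = DUTCH_ACTUAL * SIZE_FACTOR
print(f"Scaled from Dutch actual use: {scaled_estimate / 1e6:.0f} million litres/year")
print(f"Ratio to Microsoft's Finnish claim: {scaled_estimate / FINNISH_CLAIM:.0f}x")
```

Under that crude assumption, the Dutch track record implies roughly 112 million litres a year, about thirteen times the figure Microsoft cites for Kirkkonummi.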
More about Aapo Nikkanen:
Joy Machines at the New Performance Turku Biennale, 3–7.9.2025.
https://www.newperformance.fi/aapo-nikkanen-fi-fra-2/
www.instagram.com/aaponewborn
www.aapoaapo.com

More about Émile P. Torres:
https://www.xriskology.com/
Dystopia Now Podcast
Apple Podcasts: https://podcasts.apple.com/fr/podcast/dystopia-now/id1794217765
Spotify: https://open.spotify.com/show/1buHPmi0O4OHKI4vp1Kmpn