Thursday, 26 November 2020


Letting algorithms decide how to make planet Earth a better place may not always be a good idea, particularly when they are allowed to decide about life and death. Have we already arrived in a dystopian digital world?

With the pandemic gaining traction, a question absent for decades has suddenly re-entered the world stage: triage, and the question of who should die first if capacities do not suffice for everyone. This recalls some of the darkest chapters of human history. In fact, people have been working on the subject of computer-based euthanasia for some time. Such questions emerged long before the coronavirus pandemic – driven by humanity’s overconsumption of resources.

“Learning to Die in the Anthropocene” is the title of Roy Scranton’s bestseller published in 2015. The Anthropocene, the age in which mankind shapes the fate of planet Earth, comes with existential threats, as reflected, for example, by the UN Agenda 2030. We seem to be stuck in a dilemma between continuing our beloved everyday habits of an exploitative life – and knowing that we should change our behavior. So, what would be more obvious than asking Artificial Intelligence to fix the world?

Maybe we should think twice. The question is taken up by a number of daredevil science fiction novels such as Frank Schätzing’s “Tyranny of the Butterfly”, which “solve” the sustainable development problem in a cruel way – precisely in order to question such solutions.

But how far from reality are these fictional worlds? Terms such as “depopulation” and “eugenics” have been circulating in think tanks and workshops around the world for quite some time.

Is dystopia already here, given that AI is helping to triage coronavirus patients? Are we now confronted with the “trolley problem”, forced to make tough decisions, as some people suggest?

The “trolley problem” is a so-called moral dilemma that has often been discussed in connection with autonomous cars. It has been framed as being about saving lives, but in fact it asks: “If not everyone can survive, who has to die?”

If one does nothing, several people will be run over by a trolley – or car. If one intervenes, however, fewer people will die – but some will be actively killed. Today’s legal systems prohibit such intervention, not least because it would otherwise create circumstances in which people could be murdered as collateral damage.

“Lesser evils” are still evils. Once our society starts to find them acceptable, every foundational principle of our constitution can be knocked down – including the right to life. Suddenly, shocking questions appear acceptable, such as: “If an autonomous car cannot brake quickly enough – should it kill a grandmother or an unemployed person?” Questions of this kind have recently been asked within the so-called “moral machine experiment”. By now, however, it has been judged that such experiments are not a suitable basis for policy making. In any case, people would prefer an algorithm that is fair – which would potentially mean taking random decisions.

Of course, we do not want to suggest that people should be randomly killed – or killed at all. This would be in grave contradiction of human dignity, even if the death were painless. Our thought experiment, however, suggests that we should make a greater effort to change the world.

We should not accept the trolley problem as a given reality. If it produces unacceptable solutions, we should change the setting, e.g. drive more slowly or equip cars with better brakes and other safety technology. Coming back to planet Earth – the sustainability problem would not have to exist. It is our current way of doing business, our economic organization, today’s mobility concepts and conventional supply chain management that are the problems. Why don’t we have a circular and sharing economy yet – 50 years after the “Limits to Growth” study? This is the question we should ask. And why haven’t we been better prepared for a global pandemic, if it was predicted to happen?

Big Data, Artificial Intelligence and digital technologies have prepared us surprisingly little for the challenges we currently face, be it the “climate emergency” or the “Corona emergency”, migration or terror. And there is a reason for this: while it sounds good to “optimize the world” in a data-driven way, optimization is based on a one-dimensional objective function that maps the complexity of the world to a single index. This cannot be appropriate, and it does not work well. It largely neglects the potential of immaterial network effects and underestimates human problem-solving capacity as well as the world’s carrying capacity.

Nature, in contrast, does not optimize. It co-evolves – and does much better, for example, in terms of sustainability and circular supply networks. Our economy and society could certainly benefit a lot from bio-inspired, ecosystem-like solutions, particularly symbiotic ones.

In challenging times like these, it is important to organize and manage the world in a resilient way. This is the best insurance against ending up with problems like triage. We need to be able to flexibly adapt to surprises and recover from shocks such as disasters and crises. In these troubled times, instead of “learning to die”, we should “learn to live”. Resilience can in fact be increased by a number of measures, including redundancies, diverse solutions, decentralized organization, participatory approaches, solidarity, and digital assistance – solutions that should be locally sustainable for extended periods of time.

Dirk Helbing, Professor of Computational Social Science, ETH Zürich, Switzerland

Peter Seele, Professor of Business Ethics at USI Lugano, Switzerland

An edited version of this contribution has been published as an OpEd in Project Syndicate.

