Thursday, 26 November 2020

DEATH BY ALGORITHM: WHY WE SHOULD LEARN TO LIVE, NOT LEARN TO DIE

Letting algorithms decide how to make planet Earth a better place may not always be a good idea, particularly when they are allowed to decide about life and death. Have we already arrived in a dystopian digital world?

With the pandemic gaining traction, a question absent for decades suddenly re-enters the world stage: triage, and the question of who should die first if capacities are not sufficient for everyone. This is reminiscent of some of the darkest chapters in human history. In fact, people have been working on the subject of computer-based euthanasia for quite some time. Such questions emerged long before the coronavirus pandemic, driven by humanity’s overconsumption of resources.

“Learning to Die in the Anthropocene” is the title of Roy Scranton’s bestseller published in 2015. The Anthropocene, the age in which mankind shapes the fate of planet Earth, comes with existential threats, as reflected, for example, by the UN Agenda 2030. We seem to be stuck in a dilemma: we continue the beloved everyday habits of an exploitative lifestyle while knowing that we should change our behavior. So, what would be more obvious than asking Artificial Intelligence to fix the world?

Maybe we should think twice. The question is explored by a number of daring science fiction novels, such as Frank Schätzing's 'Tyranny of the Butterfly', which ‘solve’ the sustainable development problem in a cruel way, precisely in order to question such solutions.

But how far from reality are these fictional worlds? Terms such as “depopulation” and “eugenics” have been circulating in think tanks and workshops around the world for quite some time.

Is dystopia already here, given that AI is helping to triage coronavirus patients? Are we now confronted with the “trolley problem”, and do we need to make tough decisions, as some people suggest?

The “trolley problem” is a so-called moral dilemma that has often been discussed in connection with autonomous cars. It has been suggested that it’s about saving lives, but in fact it asks the question: “if not everyone can survive, who has to die?”.

If one does nothing, several people will be run over by a trolley – or a car. If one intervenes, however, fewer people will die – but some people will be actively killed. Today's legal systems prohibit this, not least because there would otherwise be circumstances enabling one to murder people as collateral damage.

“Lesser evils” are still evils. Once our society starts to find them acceptable, every foundational principle of our constitution can be knocked down – including the right to life. Suddenly, shocking questions appear to be acceptable, such as: “If an autonomous car cannot brake quickly enough – should it kill a grandmother or an unemployed person?” Questions of this kind have recently been asked within the so-called “moral machine experiment”. By now, however, it has been judged that such experiments are not a suitable basis for policy-making. People would, in any case, prefer an algorithm that is fair, which would potentially mean taking random decisions.

Of course, we do not want to suggest that people should be randomly killed – or killed at all. This would be in grave contradiction to human dignity, even if it were a painless death. Our thought experiment, however, suggests that we should make a greater effort to change the world.

We should not accept the trolley problem as a given reality. If it produces unacceptable solutions, we should change the setting, e.g. drive more slowly or equip cars with better brakes and other safety technology. Coming back to planet Earth: the sustainability problem would not have to exist either. It is our current way of doing business, our economic organization, today’s mobility concepts and conventional supply chain management that are the problem. Why don’t we have a circular and sharing economy yet, almost 50 years after the “Limits to Growth” study? This is the question we should ask. And why haven’t we been better prepared for a global pandemic, if it was predicted to happen?

Big Data, Artificial Intelligence and digital technologies have prepared us surprisingly little for the challenges we are currently faced with, be it the “climate emergency” or the “Corona emergency”, migration or terror. And there is a reason for this: while it sounds good to “optimize the world” in a data-driven way, such optimization is based on a one-dimensional goal function, mapping the complexity of the world to a single index. This cannot be appropriate, and it does not work well. It largely neglects the potential of immaterial network effects and underestimates human problem-solving capacity as well as the world’s carrying capacity.
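
To make the point about one-dimensional goal functions more tangible, here is a minimal, purely hypothetical sketch (the indicators, weights and numbers are assumptions chosen for illustration only): two very different states of the world can collapse onto the same scalar index, so an optimizer that only sees the index cannot tell them apart.

# Minimal sketch: collapsing several indicators into one index loses information.
# The indicators, weights and values below are illustrative assumptions, not real data.

weights = {"economy": 0.5, "ecology": 0.3, "health": 0.2}

def single_index(state):
    """Map a multi-dimensional state of the world to one scalar score."""
    return sum(weights[k] * state[k] for k in weights)

state_a = {"economy": 0.9, "ecology": 0.1, "health": 0.6}  # booming economy, degraded ecosystems
state_b = {"economy": 0.5, "ecology": 0.7, "health": 0.7}  # more balanced, more sustainable state

print(round(single_index(state_a), 2))  # 0.6
print(round(single_index(state_b), 2))  # 0.6 -- same index, very different worlds

An optimizer maximizing such a single number is blind to the difference between the two states; this is exactly the kind of information loss referred to above.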

Nature, in contrast, does not optimize. It co-evolves, and is doing much better, for example, in terms of sustainability and circular supply networks. Our economy and society could certainly benefit a lot from bio-inspired, eco-system kinds of solutions, particularly symbiotic ones.

In challenging times like these, it is important to organize and manage the world in a resilient way. This is the best insurance against ending up with problems like triage. We need to be able to flexibly adapt to surprises and recover from shocks such as disasters and crises. In these troubled times, instead of “learning to die”, we should “learn to live”. Resilience can in fact be increased by a number of measures, including redundancies, diverse solutions, decentralized organization, participatory approaches, solidarity, and digital assistance – solutions that should be locally sustainable for extended periods of time.


Dirk Helbing, Professor of Computational Social Science, ETH Zürich, Switzerland (dirk.helbing@gess.ethz.ch), Link to Google Scholar

Peter Seele, Professor of Business Ethics at USI Lugano, Switzerland (peter.seele@usi.ch), Link to Google Scholar

An edited version of this contribution has been published as an OpEd in Project Syndicate




Thursday, 5 November 2020

HOW ANTICIPATION CAN END UP BEING UNETHICAL, IMMORAL OR IRRESPONSIBLE

Suppose we tried to anticipate the future of planet Earth: what could possibly go wrong?

Imagine that, one day, models of the future of Earth predicted the collapse of the economy and civilization, and a dramatic drop in population, e.g. due to anticipated resource shortages. What might be the issues with such an “apocalyptic” forecast?

First of all, remember that “all models are wrong”, even though some are useful. Wrong models may lead to wrong conclusions and actions, which could cause harm. (I expect such a situation, for example, for current “world simulation” approaches. These neglect important innovation, interaction, context and network effects, particularly symbiotic ones, which could considerably increase the carrying capacity compared to current estimates.)

Think tanks may start discussing the consequences of such an “apocalyptic scenario” and propose what to do about it. They might conclude, first, that resources would not be sufficient for everyone, and hence that access to them would have to be surveilled, centrally controlled and prioritized, say, by means of some kind of citizen score. It would then seem that emergencies require and allow one to overrule human rights. One might further argue that democracies would have to be overhauled and replaced by a global technocracy that manages people like things.

One might argue that one would have to act according to the principle of the “smaller evil”, as exemplified by the “trolley problem”. According to this, any regulation, law or constitutional principle could be torn down for some supposedly overarching principle (such as “global health”). This could even touch the right to life, which might be overruled by triage decisions.

Now suppose such considerations lacked transparency. They would then be discussed mostly by insiders, but not by parliaments, the scientific community or the general public, “due to the sensitivity of the issue”.

Then, these insiders may start working on their own solutions to the problem, without democratic legitimacy, and turn problems of life and death into profitable business models.

From then on, these problems would be mainly seen from the perspective of profit maximization. The bigger the problem or the greater the emergency, the more profitable things would get…

Excess deaths would be handled according to the triage principle, and saving lives would probably no longer be a priority.

This is how anticipating an “apocalypse” can actually cause an apocalypse, like a self-fulfilling prophecy, and overrule any and all ethical principles, even though such a scenario would, in fact, not have to happen at all (which is what I think).

I would not be surprised if there were people smart enough to understand that, if one wanted to replace democracies by hierarchical, neofeudalistic systems, disasters and crises would be just the perfect means to accomplish the job.

In any case, to prevent anticipation from ending in irresponsible, immoral or even criminal action or negligence, we need a suitable ethics of anticipation.

Anticipation, of course, does not have to be a bad thing. It can open our minds to opportunities and risks. Such insights should be transparently and publicly evaluated and discussed.

Modeling complex systems, for example, has provided us with a better understanding of traffic jams, crowd disasters, and epidemic spreading. This can be used to reduce problems and risks. Such models can guide proactive measures that can avoid or mitigate potential trouble and harm.
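
To give a concrete flavor of the kind of modeling meant here, below is a minimal sketch of a classic SIR model of epidemic spreading; the parameter values are illustrative assumptions, not a calibrated forecast.

# Minimal SIR model of epidemic spreading (illustrative parameters, not a forecast).
# S: susceptible, I: infected, R: recovered shares of the population.

def simulate_sir(beta=0.3, gamma=0.1, i0=0.001, days=200, dt=1.0):
    s, i, r = 1.0 - i0, i0, 0.0
    history = []
    for _ in range(int(days / dt)):
        new_infections = beta * s * i * dt   # contacts between susceptible and infected
        new_recoveries = gamma * i * dt      # infected individuals recovering
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

# Comparing two contact rates illustrates how proactive measures (reducing beta,
# e.g. through distancing) flatten the infection peak -- the kind of insight such models provide.
peak_without_measures = max(i for _, i, _ in simulate_sir(beta=0.3))
peak_with_measures = max(i for _, i, _ in simulate_sir(beta=0.15))
print(f"peak infected share without measures: {peak_without_measures:.2f}")
print(f"peak infected share with measures:    {peak_with_measures:.2f}")

Even such a toy model reproduces the qualitative insight that reducing contacts early lowers and delays the peak, which is what makes proactive measures worthwhile.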

In the case of expected resource shortages, it is clear what needs to be done: reduce resource consumption where possible, build reserves, increase capacities, improve resource efficiency, build a circular economy, and share goods and services.

However, this is not all: when faced with uncertainty (which is the case when probabilistic effects are coupled with network-related cascading effects), damage may be large and hardly predictable. In such cases, a resilient organization of society is called for, as we need to be able to flexibly adapt to surprises and recover from shocks such as disasters and crises.

Resilience can in fact be increased by a number of measures, including redundancies, diverse solutions, decentralized organization, participatory approaches, solidarity, and digital assistance – solutions that should be locally sustainable for extended periods of time.
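
To make the role of redundancy a little more tangible, here is a back-of-the-envelope sketch (the failure probability is an assumption chosen purely for illustration): if a critical function is provided by n independent alternatives, each failing with probability p, the function as a whole fails only with probability p^n.

# Back-of-the-envelope sketch: how redundancy supports resilience.
# Assumption: each of n independent alternatives fails with probability p = 0.1;
# the function as a whole fails only if all alternatives fail at once.

def failure_probability(p, n):
    return p ** n

for n in range(1, 5):
    print(f"{n} alternative(s): probability of total failure = {failure_probability(0.1, n):.4f}")

# Prints 0.1000, 0.0100, 0.0010, 0.0001: each added, truly independent alternative
# reduces the risk of total failure by an order of magnitude.

In practice, the alternatives are rarely fully independent, which is why diversity and decentralization matter as much as sheer redundancy.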

Note that these solutions are very different from global surveillance, behavioral control and triage. Rather, they support digital democracy, City Olympics, a socio-ecological finance system, and democratic capitalism. In other words, even if the analysis “there is trouble ahead” (assuming a lack of sustainability) were correct, the conclusions and measures should have been very different.