Friday, 30 November 2018

IS THE “MORAL MACHINE” A TROJAN HORSE?

by Jan Nagler 1,2 and Dirk Helbing 2,3,4

How should self-driving vehicles decide when faced with ethical dilemmas?

This question is shaking the very foundations of human rights.


In the “Moral Machine Experiment” (1), Awad et al. report the results of an international opinion poll on the ethics of autonomous vehicles. While the authors stress that local or majority preferences should not be followed blindly, they highlight challenges that policymakers will face if particular groups of people are not given special status. This may push politicians to follow popular votes, and car manufacturers already pay attention to such opinion polls (2).

However, is a crowd-sourced ethics approach appropriate for deciding whether to prioritize children over the elderly, women over men, or athletes over overweight persons? Certainly not. Such a proposal would overturn the equality principle on which many constitutions and the UN's Universal Declaration of Human Rights are based.

While we acknowledge that laws must be adapted and upgraded to account for emerging technologies, and that moral choices may be context-dependent, changing the most fundamental ethical principles underlying human dignity and human rights in order to market new technologies more successfully could rapidly erode the very basis of our societies.

Giving up the equality principle (as Citizen Scores do) could easily promote a new, digitally based feudalism. Moreover, in an unsustainable, “overpopulated” world, “moral machines” would be Trojan Horses: they would threaten more human lives than they would save. Autonomous AI systems (not necessarily cars or robots) could thereby introduce principles of hybrid warfare into our societies.

Instead of merely managing moral dilemmas, we should make every reasonable effort to reduce them. We therefore propose that autonomous and AI-based systems conform to a principle of fairness, which suggests randomizing decisions so that everyone carries the same weight. Any deviation from impartiality would imply advantages for a select group of people and would undermine the incentive to minimize risks for everyone.
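
To illustrate, a minimal sketch of such an impartial decision rule, assuming the vehicle has already reduced the situation to a small set of unavoidable outcomes; the function name choose_outcome, the outcome labels, and the use of Python's random.SystemRandom are illustrative assumptions, not a prescription:

    import random

    def choose_outcome(outcomes, rng=random.SystemRandom()):
        # Pick one unavoidable outcome uniformly at random.
        # No personal attributes (age, gender, fitness, social status)
        # enter the decision, so every affected person carries the same weight.
        if not outcomes:
            raise ValueError("at least one outcome is required")
        return rng.choice(outcomes)

    # Illustrative dilemma: two unavoidable trajectories.
    print(choose_outcome(["swerve", "stay_on_course"]))

In such a sketch, the impartiality lies in what the rule ignores: no feature of the people involved influences the draw, so no group can gain a systematic advantage.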


(1) E. Awad et al., The Moral Machine experiment, Nature 562, 59–64 (2018).
(2) A. Maxmen, Self-driving car dilemmas reveal that moral choices are not universal, Nature 562, 469–470 (2018).


Affiliations:
  1. Frankfurt School of Finance and Management, Adickesallee 32-34, Frankfurt, Germany
  2. Computational Social Science, Department of Humanities, Social and Political Sciences, ETH Zurich, Clausiusstrasse 50, CH-8092 Zurich, Switzerland
  3. TU Delft, Faculty of Technology, Policy, and Management, The Netherlands
  4. Complexity Science Hub, Vienna, Austria



E-mail addresses: j.nagler@fs.de; dhelbing@ethz.ch

Comment on Awad et al., The Moral Machine Experiment, Nature 562, 59–64 (2018); Link: https://www.nature.com/articles/s41586-018-0637-6
