By Dirk Helbing (ETH Zurich/TU Delft/Complexity Science Hub Vienna)
It is finally happening! At the annual meeting of the Swiss Civil Society Association on November 11, Professor Hans Ulrich Gumbrecht gave a memorable speech – a “mass,” as some listeners felt. It was no longer just about trying to create a super-intelligent system with consciousness. No, the goal was now to create a God-like being with superhuman knowledge and abilities to guide our human destiny. However, he continued, there is a risk that this God might turn against humanity, even though it is man-made. Even more surprising was the claim that this should free us from Biblical sin.
Gumbrecht is not the first to raise the subject of Artificial Intelligence (AI) as God. Just recently, the Guardian announced, under the title “Deus Ex Machina,” that former Google engineer Anthony Levandowski wanted to register a religion devoted to Artificial Intelligence.[1] Shortly afterwards, Google announced its latest triumph: it had succeeded in building an AI system that learned to win the strategy game “Go” by itself – so well, in fact, that it could beat the world champion. At the same time, it was suggested that an approach had now been found that would sooner or later solve all the problems of humanity, including those that surpass our intellectual capacities.
Just a few days later, Spiegel Online wrote: “God does not need any teachers.”[2] Already in 2013, I discussed the opportunities and risks of the information age in an article entitled “Google as God?”[3] Furthermore, in 2015, the Digital Manifesto asked: “Let us suppose there was a super-intelligent machine with God-like knowledge and superhuman abilities: would we follow its instructions?”[4]
Some readers found the question ridiculous at the time. Not anymore! Search engines and intelligence services now know almost everything about us. We have been living in a Big Brother world for some time already. George Orwell's dystopian novel “1984,” written in 1948, was meant as a warning. But more and more often, we get the feeling that the bestseller has actually been used as an instruction manual.
Today’s data-driven world rests on two main principles: “data is the new oil” and “knowledge is power.” Little by little, and almost unnoticed, this has created a fundamentally new society. There is a new currency, “data,” which is replacing classical money. There is a new economic system, the “attention economy,” in which our attention is auctioned off in split seconds. In addition, the companies of “surveillance capitalism” measure our behavior, our personality and our lives in ever more detail. In the age of free services, we ourselves have become the product. Last but not least, the principle “code is law” has established a new legal system, one that bypasses our parliaments.
Are we in danger of losing our liberties, human rights and participation step by step, almost imperceptibly? Are we giving up on things that are important to us, just because we fear terrorism, climate change, and cybercrime? Are self-determined citizens in danger of being turned into remotely controlled subjects?
In fact, this isn’t just fantasy! China is already testing a Citizen Score:[5] every citizen is rated and assigned a certain number of points. Minus points punish those who do not repay their loans on time, cross the street at a red light, have the “wrong” friends or neighbors, or read critical news. The Citizen Score then determines job opportunities, loan conditions, access to services, and mobility restrictions. Great Britain seems to go a step further still: it monitors its citizens, including the videos they watch and the music they listen to. The system is called “Karma Police.”[6] So, will it punish thought crimes, you may ask? Or is “Karma Police” a kind of “Judgment Day” waiting to come down on us at any time?
Do we have to accept this? Computers make better decisions, it is often said. In fact, computers have been the better chess players for years. In many areas they are the better workers: they don’t get tired, don’t complain, don’t go on vacation, and don’t pay taxes or social security contributions. Soon they will be the better drivers. They diagnose cancer better than physicians and answer questions better than people – at least those questions that already have an answer.
When will robots become our judges and hangmen? When will they start to “fix the overpopulation problem”? (Autonomous killer robots with face recognition probably exist already, or could at least exist soon – see the recent videos on slaughterbots and robot swarms.[7]) When will robots replace us? And not just at work… A newspaper article recently suggested that the descendants of humans will be machines.[8] In other words, humanity would be replaced by robots. Is this really our human destiny? Should we build a future for robots or for humans? Isn’t it time to wake up from the transhumanist dream?[9]
Back to the initial question: Is Google creating a digital God? With its Loon project, the company is at least trying to be omnipresent. With its search engine, language assistants and measurement sensors in our rooms, Google wants to be omniscient. While the company is not yet omnipotent, it already answers 95 percent of our questions, and with personalized information, it is increasingly steering our thinking and actions. With the Calico project, it is even trying to make people immortal. In an overpopulated world, would Google then be the judge over life and death?
In any case, someone recently suggested that an AI God would soon write a new Bible.[10] Would it set the rules we have to live by? Will we soon have to worship an AI algorithm and submit ourselves to it? No question, some already seem to dream of a digital God who will guide our human destiny. What is for some the invention of God through human ingenuity, however, must be the ultimate blasphemy for Christians – in some sense, the rise of the Antichrist.
Whatever one may think about all this, the phrase “knowledge is power” has certainly gone to some people’s heads. Google, IBM and Facebook are said to be working on a new operating system for society.[11] Democracy is defamed as outdated technology.[12] They want to engineer paradise on Earth – a smarter planet where everything will be automated. So far, however, the plan has not really worked out.[13] The cities with the highest quality of life worldwide lie anywhere but in the leading IT nations. And even in Silicon Valley, the heart of the digital revolution, and other IT hotspots, experts are starting to worry…
Elon Musk, for example, fears that Artificial Intelligence could become the greatest threat to humanity. Even Bill Gates admitted that he is in the camp of those who worry about superintelligence. The famous physicist Stephen Hawking warned that humans would not be able to compete with the development of Artificial Intelligence. Apple co-founder Steve Wozniak agreed: “Computers are going to take over from humans, no question,” he said, but: “Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on? I don’t know…”[14] Jürgen Schmidhuber, the German AI pioneer, believes he knows: from a robot’s perspective, we will be like cats.[15]
Of course, the worry that technology could turn against us is an old one. Besides George Orwell’s “1984” and “Animal Farm,” Aldous Huxley’s “Brave New World” warned us of the danger of rising totalitarianism. Suddenly, people also remember “The Machine Stops,” written by E. M. Forster as early as 1909 (!). More recent books include Dave Eggers’s “The Circle,” Yuval Noah Harari’s “Homo Deus” and Joel Cachelin’s “Internet God.” If you like science fiction, you might love “QualityLand” by Marc-Uwe Kling or “iGod” by Willemijn Dicke.
A question that not only science fiction lovers should ask is: What future do we want to live in? Never before have we had a better chance to build a world of our liking, but for this we have to take the future into our own hands. It is high time to overcome our self-imposed digital immaturity. To free ourselves from the digital shackles, digital literacy and enlightenment are needed. So far, we have been living in a market-conforming democracy, where the markets are driven by technology. Instead, we should build an economy that serves the goals of people and society, with technology as a means of achieving them. This requires a fundamental redesign of our monetary, financial and economic system based on the principle of value-sensitive design. In “The Globalist,” I have recently outlined how this could be done.[16] Maybe you have your own ideas on how to use Big Data and Artificial Intelligence. In any case, a better future is possible! Let’s demand this better future! Let’s co-create it! What are we waiting for?
[1] https://www.theguardian.com/technology/2017/sep/28/artificial-intelligence-god-anthony-levandowski
[2] http://www.spiegel.de/wissenschaft/technik/kuenstliche-intelligenz-gott-braucht-keine-lehrmeister-kolumne-a-1175130.html
[3] https://www.nzz.ch/google-als-gott-1.18049950
[4] http://www.spektrum.de/thema/das-digital-manifest/1375924, English translation: https://www.scientificamerican.com/article/will-democracy-survive-big-data-and-artificial-intelligence/
[5] https://www.economist.com/news/briefing/21711902-worrying-implications-its-social-credit-project-china-invents-digital-totalitarian
[6] https://theintercept.com/2015/09/25/gchq-radio-porn-spies-track-web-users-online-identities/
[7] https://www.youtube.com/watch?v=9CO6M2HsoIA, https://www.youtube.com/watch?v=CGAk5gRD-t0
[8] https://www.nzz.ch/feuilleton/unsere-nachfahren-werden-maschinen-sein-ld.1322780
[9] https://www.nzz.ch/meinung/kommentare/die-gefaehrliche-utopie-der-selbstoptimierung-wider-den-transhumanismus-ld.1301315, http://privacysurgeon.org/blog/wp-content/uploads/2017/07/Human-manifesto_26_short-1.pdf
[10] https://venturebeat.com/2017/10/02/an-ai-god-will-emerge-by-2042-and-write-its-own-bible-will-you-worship-it/
[11] http://www.faz.net/aktuell/feuilleton/medien/google-gruendet-in-den-usa-government-innovaton-lab-13852715.html, https://www.pcworld.com/article/3031137/forget-trump-and-clinton-ibms-watson-is-running-for-president.html, https://www.theguardian.com/technology/2017/feb/17/facebook-ceo-mark-zuckerberg-rule-world-president, http://theconversation.com/if-facebook-ruled-the-world-mark-zuckerbergs-vision-of-a-digital-future-73459
[12] Hencken, Randolph (2014), in: Mikrogesellschaften. Hat die Demokratie ausgedient? Capriccio video, published May 15, 2014. Author: Joachim Gaertner. Munich: Bayerischer Rundfunk.
[13] https://www.wiltonpark.org.uk/wp-content/uploads/WP1449-Report.pdf
[14] https://www.computerworld.com/article/2901679/steve-wozniak-on-ai-will-we-be-pets-or-mere-ants-to-be-squashed-our-robot-overlords.html
[15] http://www.faz.net/aktuell/feuilleton/debatten/ueberwindung-des-menschen-durch-selbstlernende-maschinen-15309705.html
[16] https://www.theglobalist.com/author/dirk-helbing/
ETHICS FOR TIMES OF CRISIS
by Jan Nagler (1,2), Jeroen van den Hoven (3), and Dirk Helbing (1,3,4)
Affiliations:
(1) Computational Social Science, Department of Humanities, Social and Political Sciences, ETH Zurich, Clausiusstrasse 50, CH-8092 Zurich, Switzerland
(2) Computational Physics for Engineering Materials, IfB, ETH Zurich, Wolfgang-Pauli-Strasse 27, CH-8093 Zurich, Switzerland
(3) TU Delft, The Netherlands
(4) Complexity Science Hub, Vienna, Austria
What will happen in a crisis when Artificial Intelligence systems decide about increasingly many issues, including life and death?
Will a Citizen Score based on Big Data determine our chances of survival?
How should autonomous systems make decisions that are ethically aligned with what is morally required from humans?
We argue that in times of permanent crisis the dominant approach should be innovation instead of optimization.
These days, everyone is talking about artificial intelligence (AI), robots and self-driving cars. We absolutely agree that these technologies have promising applications and prospects. So why, then, are people like Elon Musk warning us that AI may pose the biggest existential threat to humanity (1)? And what does this have to do with mundane things such as autonomous cars, if anything at all?
Autonomous cars will certainly be involved in accidents in which people may die. The question then arises (2-4): when fatalities cannot be prevented, should an autonomous vehicle be programmed to run over a crowd of people on the street, swerve into a smaller group of pedestrians on the sidewalk, or sacrifice the lives of the car’s passengers by ramming into a concrete wall? Should particular kinds of people be privileged, giving them higher chances of survival? For example, should luxury cars be allowed to offer a higher degree of self-protection than cars in the lower price segment, to create an incentive to buy a more expensive car? Would this be ethically justified, given that expensive cars cause more accidents and already impose higher risks on others (5)?
These are the kinds of ethical dilemmas that are now frequently discussed. But there is a far bigger problem nobody is openly talking about: in the non-sustainable world we live in, when there are not enough resources left for everyone, will autonomous systems be used to decide about life and death? If so, how should they decide? When we talk about the ethical principles governing how autonomous vehicles deal with matters of life and death, we should therefore always keep in mind the implications of scaled-up and generic applications in times of crisis. Or, to put it in Kantian form: what if the maxims or policies of these types of machines were to become universal principles for all machines?
AI systems knowing “who is who”
With better sensor and video technologies and powerful information systems, artificial intelligence (AI) systems are increasingly capable of distinguishing between one person and many, a child and an elderly person, an average person and a famous politician, a white person and a person of colour, a person with a job and one without, a rich person and a poor one, a convicted criminal and a saint, a healthy person and one who may die soon, a person with health or life insurance and one without.
Should people with higher status or life expectancy be protected, because they may contribute more to society? Should a Citizen Score decide, i.e. a number representing the value of a person from the government’s point of view, as currently being tested in China in other areas of life (6)? Should a person who pays a higher insurance premium have a higher chance of survival, while others are sacrificed? This may sound like a profitable business model, but it would fundamentally contradict the principles of equality and human dignity on which the United Nations’ Universal Declaration of Human Rights is built.
Criteria such as health, age, or social status are not suitable for deciding who should come to harm, live or die – not even from a narrow utilitarian perspective. Recall that Kant, the father of the Enlightenment, who inspired modern democratic constitutions, wrote his masterpieces in old age. Van Gogh had a very low social status during his lifetime. Mozart died poor. Beethoven was almost deaf when he wrote his 9th symphony. Degas and Toulouse-Lautrec were handicapped, and Monet had impaired sight, yet they became three of the most important painters of Impressionism. These individuals created some of the greatest cultural achievements in the history of humankind. Whatever measure one takes to rank the value of people, there are always examples that expose its inappropriateness.
Utilitarian thinking can be inappropriate
The current rulings of many constitutional courts and ethical committees largely agree that people should not be valued differently, but share a common humanity and human dignity. This is also a lesson learned from the history of fascism and the Holocaust. Utilitarian “optimization” appears to be highly immoral, if it intentionally exposes different kinds of people to different life-threatening risks. For example, the new Hippocratic oath (7) requires doctors to swear: “I will not permit considerations of age, disease or disability, creed, ethnic origin, gender, nationality, political affiliation, race, sexual orientation, social standing or any other factor to intervene between my duty and my patient.” Perhaps, given that “code is law”, we should require a similar oath from computer scientists and software engineers to ensure design for values (8).
Also note that autonomous systems based on utilitarian principles would be easily exploitable, both by criminals and by authorities. Deterministic decisions could be instrumentalized to harm people through manipulation or hacking. For example, someone might jump onto the street to force a deterministically deciding vehicle to crash into a concrete wall, or might tamper with the vehicle’s sensors to trick it into a dangerous manoeuvre that puts the passengers at risk. Other drivers might anticipate the safety behaviour of the type of car you are driving and exploit its response repertoire for personal advantage. In such cases, a probabilistic decision rule would make it less likely that an autonomous system could be successfully instrumentalized against people.
Overall, if ethical dilemmas cannot be avoided, decisions should be randomized, giving each person the same weight. A society with ubiquitous AI requires a framework that is impartial, as proposed by the Harvard political philosopher John Rawls with his concept of “the veil of ignorance” (9). This implies that, in deciding about the basic normative principles of a society, one should ignore properties that could be tailored to serve self-interests. This, again, suggests that humans should not be treated differently in a critical situation and solutions based on utilitarian grounds should be rejected.
Killing algorithms are not science fiction
Very soon, the ethics of autonomous systems may affect all of our lives every day. In turbulent times, as we may encounter them in an unsustainable world, decisions about life and death could become commonplace. Today, robocops are being tested, drones are being used to kill dissidents, and a number of autonomous weapons are in the making (10, 11). Some experts are even thinking about AI-based euthanasia (12) and the use of palliative means. Soon, computer-controlled implants may be used to release drugs into our bodies, but such devices would be vulnerable to hacking and could cause overdoses (13, 14).
Let us assume for a moment that one applied the Citizen Score, as currently tested in China, or the United Kingdom’s KARMA POLICE program (15) to make decisions about life and death – saving those who have a higher score. Such a “digital judgment day” approach would create one of the most serious moral hazards imaginable. “The elite,” i.e. the people with the highest scores, would always face the lowest risks and enjoy the greatest opportunities. Why, then, should they make a serious effort to improve the opportunities and reduce the risks of everyone else, if this did not improve their own lives? In contrast, an unbiased probabilistic decision rule, combined with a fair veil of ignorance, would put everyone at the same level of risk, and hence everybody would have an incentive to reduce the number of ethical dilemmas as much as possible.
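To make this incentive argument concrete, consider the following minimal simulation sketch (our own illustration, not part of the published argument; the population size, the score assignment and the number of dilemma situations are purely hypothetical). A score-based rule concentrates all risk on the lowest-scoring person, while an unbiased random rule spreads the risk evenly, so that everyone shares the incentive to reduce the number of dilemmas:

import random

random.seed(42)                        # reproducible toy example
N, T = 10, 100_000                     # hypothetical: 10 people, 100,000 dilemmas
scores = list(range(N))                # person i has citizen score i (0 = lowest)

exposed_score = [0] * N                # how often each person is sacrificed (score rule)
exposed_random = [0] * N               # how often each person is sacrificed (random rule)

for _ in range(T):
    # Score-based rule: always sacrifice the person with the lowest score.
    exposed_score[min(range(N), key=lambda i: scores[i])] += 1
    # Unbiased rule: behind the "veil of ignorance", choose uniformly at random.
    exposed_random[random.randrange(N)] += 1

print("risk per person (score rule): ", [c / T for c in exposed_score])
print("risk per person (random rule):", [c / T for c in exposed_random])
# Score rule: person 0 bears 100% of the risk; the highest scorers bear none,
# so they have no self-interested reason to reduce the number of dilemmas.
# Random rule: each person bears about 1/N of the risk, aligning everyone's incentives.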
Humanity has a moral obligation to prevent the occurrence of ethical dilemmas, i.e. choice situations in which one cannot fulfil all moral norms at the same time. Furthermore, if ethical dilemmas occur nevertheless, there is an obligation to transform them, whenever possible, into situations that expand the set of obligations one can satisfy. A Citizen-Score-based system would certainly miss this goal: it provides a framework suggesting that we can fulfil our moral duties by an optimization that weighs and counts lives and deaths, and that such decisions can be automated and delegated to a machine.
To minimize the number of critical situations, we need not only the best use of human and artificial intelligence, but creativity as well. In a crisis, innovation may be more important than optimization. To successfully address the sustainability challenges of our planet, we may have to fundamentally change the monetary, financial, and economic system, or even the organization of society altogether (16, 17). Given the limitations of optimization, Citizen-Score-based systems and utility maximization, we should spend far more resources on systemic innovation, e.g. on participatory resilience and “City Olympics,” in which cities all over the world and the regions around them would regularly compete for the best environmentally friendly, energy-efficient, resource-saving, crisis-proof, and peace-promoting solutions (18). Such approaches could dramatically improve the future prospects of humanity within a short period of time.
References
1. S. Gibbs, The Guardian, 27 October 2014; available at https://www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-biggest-existential-threat
2. B. Deng, Nature 523, 24-26 (2015).
3. J.-F. Bonnefon et al., Science 352, 1573-1576 (2016).
4. D. Leben, Ethics Inf. Technol. 19, 107-115 (2017).
5. The Telegraph, 18 November 2015; available at http://www.telegraph.co.uk/finance/personalfinance/insurance/motorinsurance/11993627/Its-official-drivers-of-luxury-cars-cause-more-accidents-insurers-say.html
6. D. Storm, ACLU: Orwellian Citizen Score, China's credit score system, is a warning for Americans. Computerworld (7 October 2015); available at http://go.nature.com/3pq8b4 and http://www.independent.co.uk/news/world/asia/china-surveillance-big-data-score-censorship-a7375221.html
7. WMA Declaration of Geneva; available at https://www.wma.net/policies-post/wma-declaration-of-geneva/
8. J. van den Hoven, P. E. Vermaas, I. van de Poel, Handbook of Ethics, Values, and Technological Design: Sources, Theory, Values and Application Domains, Springer (2015).
9. K. B. Rasmussen, Philos. Stud. 159, 205-218 (2012).
10. S. Russell, S. Hauert, R. Altman, and M. Veloso, Nature 521, 415-418 (2015).
11. Open letter on autonomous weapons, IJCAI conference, 28 July 2015; available at https://futureoflife.org/open-letter-autonomous-weapons/
12. F. Hamburg, Een computermodel voor het ondersteunen van euthanasiebeslissingen (E.M. Meijers Reeks).
13. Hackers remotely kill a Jeep on the highway, Wired, 24 July 2015; available at https://www.wired.com/2015/07/hackers-remotely-kill-jeep-highway/
14. Hackers reveal nasty new car attacks, Forbes, 24 July 2013; available at https://www.forbes.com/sites/andygreenberg/2013/07/24/hackers-reveal-nasty-new-car-attacks-with-me-behind-the-wheel-video/#45c73d7b228c
15. https://www.theverge.com/2015/9/25/9397119/gchq-karma-police-web-surveillance and http://www.dailymail.co.uk/news/article-3249568/GCHQ-spooks-spied-internet-user-operation-called-Karma-Police-according-leaked-documents.html
16. D. Helbing, Nature 497, 51-59 (2013).
17. D. Helbing and E. Pournaras, Nature 527, 33-34 (2015).
18. https://www.theglobalist.com/technology-big-data-artificial-intelligence-future-peace-rooms/