Saturday 28 November 2020

Digital Democracies will be able to combine the best of all systems: Interview with Dirk Helbing

Digital Democracies will Be Able to Combine the Best of all Systems

Sofia, November 20 (Nikolay Velev of BTA)

Around the world, we are seeing the rise of different forms of technological totalitarianism, but the future digital democracies will be able to combine the best of all systems: competition (from capitalism), collective intelligence (from democracies), trial and error and the promotion of superior solutions (from evolution and culture), and intelligent design (from AI), Dirk Helbing, a professor of computational social science, said in a BTA interview. He works at the Department of Humanities, Social and Political Sciences of the Swiss Federal Institute of Technology in Zurich and was among the participants in an international online conference on The Impact of Artificial Intelligence on Our Society hosted by Sofia on Friday. Following is the full interview:

Q: Prof. Helbing, in your article Will Democracy Survive Big Data and Artificial Intelligence, you use Kant's famous answer to the question "What is Enlightenment?": "Enlightenment is man's emergence from his self-imposed immaturity. Immaturity is the inability to use one's understanding without guidance from another." How far does AI have its roots in the Enlightenment project? And would you give a short answer to the question you ask in "Google as God?"
A: The problems of the world are complex. Some think we need superintelligent systems to help us understand and solve our problems. Some would even go so far as to say that such superintelligent systems will soon be around, and that they would be so much superior to today's human beings that they would be similar to Gods, and we should just do what they say. This would mean giving up our autonomy and freedom. Control would be handed over to machines. While some consider this a utopian paradise, others see it as a dystopian future and an ultimate threat to humanity.

Q: The big concern of many theorists and critics of industrial capitalism in the late 19th century and the first half of the 20th century was that men put a lot into their work and were paid too little for it, had a low living standard and existed merely as an appendage of the machine. With so many of the present-day forms of labor involving the virtual space, do you think man-machine relations have changed?
A: Companies use Artificial Intelligence and robotics to increase the efficiency of their production and reduce costs. As long as people have to pay taxes for their work and robots do not, this is a pretty unfair competition. It could get millions of workers in trouble. It is clear that we need a new tax system and also a new social contract so that future societies would work for humans (and robots would work for them, too). Currently, there is a danger of a race between man and machine, while the goal should be to create a framework for human-machine symbiosis. We still have to work on the political, societal and economic framework for this, and have to do it quickly.

Q: What, if any, is the threat to modern man from having a digital copy of himself in Big Data? What are the worst forms of misuse of this data, and how far can misuse go?
A: Today, every one of us is being profiled and targeted. It seems that every day many megabytes, if not gigabytes, of data are being collected about us. This leads to highly detailed digital doubles, which reflect our economic situation and consumption behavior, our social network, our behavior, psychology, and health. Such data can be used to manipulate our thinking, emotions, and behaviors. It can be used to manipulate elections. It can be used to mob us and exert pressure on us. And it can be used for life-and-death decisions, as happens in ethical dilemmas such as triage decisions. Most of such data uses today seem to happen without our explicit knowledge and consent. This is why I demand a platform for informational self-determination.

Q: You say that Big Data is "the oil of the twenty-first century", but people increasingly add that, apparently, we haven't invented the motor to use it yet. What do you think could be the motor, and how would the economy change once we invent it and put it to use?
A: Some consider Artificial Intelligence (AI) to be the motor that runs on Big Data, the so-called "new oil". However, while many of us have a car of our own, most AI systems currently work in the interests of only very few people. I think that we would have to build something like a digital catalyst: a sufficiently open information ecosystem that everyone can easily contribute to and seamlessly benefit from. This would allow for combinatorial innovation.

Q: You quote a researcher claiming that AI could outperform the human brain by 2030 and all human brains by 2060. Is there anything in man that machines cannot beat?
A: Big Data and AI neglect whatever cannot be measured well. This concerns human consciousness, love, freedom, creativity, and human dignity, for example. These are characteristics that matter a lot for humans. Therefore, I think that humans are not just biological robots. We should not confuse the two. Being confronted with intelligent machines will eventually show us what it really means to be human and what is special about us.

Q: You say that 90 per cent of the present-day professions are based on skills which can soon be replaced by machine algorithms or robots. If that many people lose their jobs, should we expect a new social stratification based on the virtual space? Do you see a risk that such stratification could bring about a repetition of some events of the 20th century?

A: Such revolutions have typically brought serious societal instabilities with them, such as wars. A lot of people died on battlefields because social reforms were lagging behind. If we are not smarter this time, such a situation could indeed happen again. This time, however, autonomous systems might decide over life and death. Such deadly triage decisions would call the very foundations of our civilization into question. Triage means a war-like regime, in which human rights and human dignity are restricted so much that even the right to life is called into question.

Q: In your book Towards Digital Enlightenment: Essays on the Dark and Light Sides of the Digital Revolution, you use the term "digital fascism". Could you tell us how it differs from the traditional totalitarian ideologies in the 20th century and what new threats it brings for society?

A: Around the world, we are seeing the rise of different forms of technological totalitarianism. The Western variant is often called "surveillance capitalism", and people are treated according to scores such as the "customer lifetime value". In other countries such as China, a behavior-based "social credit score" determines the rights, opportunities, and lives of people, who are thereby turned into submissive subjects. These systems are characterized by a strange combination of a digitally enabled communism (command economy), feudalism (due to their hierarchical nature), and fascism (due to their suppressive nature). Typical elements of today's digital societies are: mass surveillance, experiments with people, behavioral manipulation and mind control, social engineering, propaganda and censorship, forced conformity, (predictive) policing, the interference with privacy and human rights, and the management of people similar to objects, ignoring human dignity.

Q: If fascism has become digital, what should be the resistance against it?
A: This is a difficult question, as nobody can really escape surveillance in the cyber-physical world of today. Avoiding digital devices and platforms does not seem to be a real option. Usually, politicians and courts should ensure a benevolent use of technology, but in view of "overpopulation" and "lack of sustainability", they seem to be losing control, as the "Corona emergency" and "climate emergency" show. Nevertheless, we need to ensure a fair use of AI that benefits everyone according to the principle of equal opportunities. Everything else is destined to fail, if you ask me. We are in the middle of a struggle with the old powers reigning over the material, consumption-oriented, carbon-based energy world. We need to break free from the shackles of that era to enter a new age of peace and prosperity.

Q: Does global communication on the Internet help or hinder democracy?
A: Both. Despite efforts to control global communication through algorithms, there is a lot of self-organization in the increasingly networked world we are living in. It is time to upgrade democracies with digital means, because empowering citizens and civil society is going to make our countries more resilient, such that they can better cope with the challenges ahead of us.

Q: Do we need an entirely new form of democracy today, or do we need to stand up for what the Western societies have achieved?
A: We need to develop further what we have achieved in the past. In fact, digital democracies will be able to combine the best of all systems: competition (capitalism), collective intelligence (democracies), trial and error and the promotion of superior solutions (evolution and culture), and intelligent design (AI). Digital democracies will make our societies resilient to challenges and surprises, disasters and crises, by a combination of redundancies, diverse solutions, decentralized organization, participatory approaches, solidarity, and digital assistance supporting self-organization and mutual help.

NV Source: Sofia


Reprinted with kind permission from the Bulgarian News Agency. Link to Post

Thursday 26 November 2020

DEATH BY ALGORITHM: WHY WE SHOULD LEARN TO LIVE, NOT LEARN TO DIE

Letting algorithms decide how to make planet Earth a better place may not always be a good idea, particularly when they are allowed to decide about life and death. Have we already arrived in a dystopian digital world?

With the pandemic gaining traction, a question absent for decades has suddenly re-entered the world stage: triage, and the question of who should die first if capacities are not enough for everyone. This is reminiscent of some of the darkest chapters in human history. In fact, people have been working on the subject of computer-based euthanasia for some time already. Such questions emerged long before the coronavirus pandemic – due to humanity's overconsumption of resources.

“Learning to Die in the Anthropocene” is the title of Roy Scranton’s bestseller published in 2015. The Anthropocene, the age in which mankind shapes the fate of planet Earth, comes with existential threats, as reflected, for example, by the UN Agenda 2030. We seem to be stuck in a dilemma between continuing our beloved everyday habits of an exploitative life – and knowing that we should change our behavior. So, what would be more obvious than asking Artificial Intelligence to fix the world?

Maybe we should think twice. The question is dealt with by a number of daredevil science fiction novels such as Frank Schätzing's "Tyranny of the Butterfly", which "solve" the sustainable development problem in a cruel way – in order to question such solutions.

But how far from reality are these fiction worlds? Terms such as “depopulation” and “eugenics” have been circulating in think tanks and workshops around the world for quite some time.

Is dystopia already here, given that AI is helping to triage coronavirus patients? Are we now confronted with the "trolley problem" and need to make tough decisions, as some people suggest?

The "trolley problem" is a so-called moral dilemma that has often been discussed in connection with autonomous cars. It has been suggested that it is about saving lives, but in fact it asks the question: "If not everyone can survive, who has to die?"

If one does nothing, several people will be run over by a trolley – or a car. If one interferes, however, fewer people will die – but some people will be actively killed. Today's law prohibits this, also because there would otherwise be circumstances in which one could murder people as collateral damage.

"Lesser evils" are still evils. Once our society starts to find them acceptable, every foundational principle of our constitution can be knocked down – including the right to life. Suddenly, shocking questions appear to be acceptable, such as: "If an autonomous car cannot brake quickly enough, should it kill a grandmother or an unemployed person?" Questions of this kind have recently been asked within the so-called "moral machine experiment". By now, however, it has been judged that such experiments are not a suitable basis for policy making. People would anyway prefer an algorithm that is fair. Potentially, this would mean taking random decisions.

Of course, we do not want to suggest that people should be randomly killed – or killed at all. This would be in grave contradiction of human dignity, even if it were a painless death. Our thought experiment, however, suggests that we should make a greater effort to change the world.

We should not accept the trolley problem as a given reality. If it produces unacceptable solutions, we should change the setting, e.g. drive more slowly or equip cars with better brakes and other safety technology. Coming back to planet Earth – the sustainability problem would not have to exist. It is our current way of doing business, our economic organization, today's mobility concept and conventional supply chain management that are the problems. Why don't we have a circular and sharing economy yet – 50 years after the "Limits to Growth" study? This is the question we should ask. Why weren't we better prepared for a global pandemic, if it was predicted to happen?

Big Data, Artificial Intelligence and digital technologies have prepared us surprisingly little for the challenges we are currently faced with, be it the "climate emergency" or the "Corona emergency", migration or terror. And there is a reason for this: while it sounds good "to optimize the world" in a data-driven way, optimization is based on a one-dimensional goal function, mapping the complexity of the world to a single index. This cannot be appropriate, and it does not work well. It largely neglects the potential of immaterial network effects and underestimates human problem-solving capacity as well as the world's carrying capacity.

Nature, in contrast, does not optimize. It co-evolves, and is doing much better, for example, in terms of sustainability and circular supply networks. Our economy and society could certainly benefit a lot from bio-inspired, eco-system kinds of solutions, particularly symbiotic ones.

In challenging times like these, it is important to organize and manage the world in a resilient way. This is the best insurance not to end up with problems like triage. We need to be able to flexibly adapt to surprises and recover from shocks such as disasters and crises. In these troubled times, instead of “learning to die”, we should “learn to live”. Resilience can in fact be increased by a number of measures, including redundancies, diverse solutions, decentralized organization, participatory approaches, solidarity, and digital assistance – solutions that should be locally sustainable for extended periods of time.


Dirk Helbing, Professor of Computational Social Science, ETH Zürich, Switzerland (dirk.helbing@gess.ethz.ch), Link to Google Scholar

Peter Seele, Professor of Business Ethics at USI Lugano, Switzerland (peter.seele@usi.ch), Link to Google Scholar

An edited version of this contribution has been published as an OpEd in Project Syndicate

Link here



Thursday 5 November 2020

HOW ANTICIPATION CAN END UP BEING UNETHICAL, IMMORAL OR IRRESPONSIBLE

Suppose we tried to anticipate the future of planet Earth: what could possibly go wrong?

Imagine that one day, models of the future of Earth predict the collapse of economy and civilization, and a dramatic drop in population, e.g. due to anticipated resource shortages. What might be the issues with such an "apocalyptic" forecast?

First of all, remember that – even though some are useful – “all models are wrong”. This may lead to wrong conclusions and actions, which could cause harm. (I expect such a situation, for example, for current “world simulation” approaches. These neglect important innovation, interaction, context and network effects, particularly symbiotic ones, which could considerably increase the carrying capacity compared to current estimates.)

Think tanks may start discussing the consequences of an “apocalyptic scenario”, and propose what to do about it. First of all, they might conclude that resources would not be enough for everyone, and hence access to them would have to be surveilled, centrally controlled, and prioritized, say, by means of some kind of citizen score. Therefore, it seems that emergencies would require and allow one to overrule human rights. One might argue that democracies would have to be overhauled and replaced by a global technocracy that manages people like things.

One might argue that one would have to act according to the principle of the "lesser evil", as exemplified by the "trolley problem". According to this, any regulation, law or constitutional principle could be torn down for some supposedly overarching principle (such as "global health"). This could even touch the right to life, which might be overruled by triage decisions.

Now, suppose such considerations would lack transparency. They would then be discussed mostly by insiders, but not by parliaments, the science community or general public at large “due to the sensitivity of the issue”.

Then, these insiders may start working on their own solutions to the problem, without democratic legitimacy, and turn problems of life and death into profitable business models.

From then on, these problems would be mainly seen from the perspective of profit maximization. The bigger the problem or the greater the emergency, the more profitable things would get…

The excess deaths would be handled by the triage principle, and saving lives would probably not be a priority anymore.

This is how anticipating an "apocalypse" can actually cause an apocalypse, like a self-fulfilling prophecy, and overrule any and all ethical principles – even though, factually, such a scenario would not have to happen at all (which is what I think).

I would not be surprised if there were people smart enough to understand that, if one wanted to replace democracies with hierarchical, neofeudalistic systems, disasters and crises would be just the perfect means to accomplish the job.

In any case, to prevent anticipation from ending in irresponsible, immoral or even criminal action or neglect, we need a suitable ethics of anticipation.

Anticipation, of course, does not have to be a bad thing. It can open up our minds for opportunities and risks. Such insights should be transparently and publicly evaluated and discussed.

Modeling complex systems, for example, has provided us with a better understanding of traffic jams, crowd disasters, and epidemic spreading. This can be used to reduce problems and risks. Such models can guide proactive measures that can avoid or mitigate potential trouble and harm.

In case of expected resource shortages, it is clear what needs to be done: reduce resource consumption, where possible, build reserves, increase capacity, improve resource efficiency, build a circular economy, share goods and services.

However, this is not all: when faced with uncertainty (which is the case when probabilistic effects are coupled with network-related cascading effects), damage may be large and hardly predictable. In such cases, a resilient organization of society is called for, as we need to be able to flexibly adapt to surprises and recover from shocks such as disasters and crises.

Resilience can in fact be increased by a number of measures, including redundancies, diverse solutions, decentralized organization, participatory approaches, solidarity, and digital assistance – solutions that should be locally sustainable for extended periods of time.

Note that these solutions are very different from global surveillance, behavioral control and triage. They are rather in support of digital democracy, City Olympics, a socio-ecological finance system, and democratic capitalism. In other words, even if the analysis “there is trouble ahead” (assuming a lack of sustainability) were correct, the conclusions and measures should have been very different.