Wednesday, 5 December 2018

WHEN CODE IS LAW, ALGORITHMS MUST BE MADE TRANSPARENT


by
Dirk Helbing (ETH Zurich/TU Delft/Complexity Science Hub Vienna) and Peter Seele (USI Lugano)
 
The rule of law, one of the key pillars of open democratic societies, is currently challenged by private companies that digitally shape societies. In times when algorithms increasingly determine what can and cannot be done in everyday life, “code is law” (1). Currently, however, code is neither passed by parliaments, nor do the people have a say in how the algorithm-driven world is going to work.

As put forward in 1998 under the headline of “lex informatica” (2), technology itself has become a regulator. Following the original argument, it is code that determines the freedoms of the individual as well as of the legal system: “The importance of our commitment to fundamental values, through a self-consciously enacted constitution, will fade. We will miss the threat that this age presents to the liberties and values that we have inherited. The law of cyberspace will be how cyberspace codes it, but we will have lost our role in setting that law” (3). 18 years later, we see that this has become true on the scale of entire societies. In particular, in today’s attention economy, nudging, neuro-marketing, scoring, social bots, personalized pricing and AI-based content filtering have undermined open democratic discourse, i.e. the very basis of deliberative democracies built on collective intelligence, participation and openness.

Besides chat bots and personalized messages steering public opinion and manipulating elections with the help of social media (see the case of Cambridge Analytica and Facebook), it is a problem of today’s surveillance capitalism (4) that a few large Internet companies extract a very detailed picture of our lives for free, give us little economic benefit in return, and leave us little choice in how this data is used. This turns people into objects, which contradicts human dignity, and it gives rise to obscure business models and misuse that discriminate against people and disrespect human rights.

“Creative destruction”, as postulated by J. Schumpeter (5) as one of the key pillars of capitalism, seems fine, but it must happen within reasonable limits. After the experience of World War II, the Third Reich, and the Holocaust, we cannot allow it to shake the very foundations of civilized life. But as lawmakers struggle to keep up with the pace of the digital revolution and its disruptive changes, how can we make sure code will work in the best interest of humanity and all of us?

Restrictive regulations would slow down innovation. The same would apply if a new authority had to approve algorithms before their deployment whenever they might interfere with the way society evolves. The only way to manage the challenges of the digital age at the pace of digital innovation is algorithmic transparency. But how can it be achieved without distorting competition in a free market society committed to deliberative democracy? Based on promises and self-declarations of companies? Certainly not. Just recently it has been argued that big tech companies will not change unless governments step in (6). Self-regulation as proposed by private actors in industry is increasingly criticized as ineffective, even as “having burglars fit your locks”, as put forward by Rob Moodie et al. in an interview (7) based on a Lancet study on the influence of company lobbying on public goods (8).

Some progress is already under way. Non-governmental organizations (NGOs) like AlgorithmWatch are concerned about algorithmic decision making (ADM), particularly its inherent dangers. AlgorithmWatch calls algorithmic decision-making procedures a “black box” and has therefore put together “the ADM Manifesto”, stating that the “creator of ADM is responsible for its results. [But] ADM is created not only by its designers” (9). The debate about creation and responsibility reveals the challenges at a time when some algorithms already create other algorithms, while the question of responsibility and liability requires the existence of a legal entity.

Given the legal, ethical and commercial difficulties in governing algorithms, we plead for algorithmic transparency with a delay, based on the legal construct of intellectual property protection. In analogy to patent protection, we propose algorithmic protection, but, given the speed of digital innovation, for a period of at most 24 months rather than decades. Within this time period, companies would typically make 95 percent of their profits, and new software versions would come out. After 24 months, the code would be unlocked and made open source. It is suggested, however, that exceptions apply, for example for code that touches national or cyber security, which would need separate quality and security control mechanisms. For all other code we suggest that companies, scientific institutions, NGOs and/or civil society would check whether the algorithms were consistent with human rights and with the values of our societies, or whether they discriminated against, manipulated, obstructed, or harmed people. In this way, violations of data protection laws, discrimination against people (e.g. by certain personalized pricing schemes), or breaches of human rights would be revealed, such that feedback loops would set in, promoting better quality standards in the future. This would support a design for values (10), as they are laid out in our constitution, the Universal Declaration of Human Rights, or the UN Sustainable Development Goals. The IEEE, the biggest organization of engineers worldwide, supports a similar approach by demanding ethically aligned design (11).

As a further benefit of algorithmic transparency with a delay, everyone could learn from each other’s code. This would promote combinatorial innovation, which could benefit everyone and might even help prevent conflict and promote peace (12). It would also be the basis of a true information and innovation ecosystem, in particular if all personal data were made accessible based on the principle of informational self-determination (13).

Many billionaires have recently decided to donate half of their fortune. It is time to extend this philanthropic principle to algorithms and data. Small and medium-sized businesses, spin-offs, scientific institutions, NGOs and civil society can only make significant contributions to a better future if they get access to sizable amounts of data and powerful ways of processing them.[1]

In accordance with one of the UN Sustainable Development Goals, following our proposal would lead to an inclusive digitization, a “digitalization 2.0” (14). Given the serious sustainability crisis of our planet, which threatens one sixth of all species (15), it is our responsibility to unlock the potential of data and algorithms for the benefit of our planet and the species living on it. In times when the Earth is headed towards global emergencies that put many lives at risk, we must promote more resilient forms of society and more cooperative forms of innovation. Opening up algorithms after 24 months and establishing full informational self-determination when it comes to our data (13) is a feasible approach, which could greatly accelerate the progress of humanity towards solving its existential problems and achieving a higher quality of life for everyone. What keeps us from doing this now?
    1. Lessig, L. Code and Other Laws of Cyberspace (Basic Books, New York, 2000).
    2. Reidenberg, J. R. Lex informatica: the formulation of information policy rules through technology. Texas law Review 76, 553-594 (1998).
    3. Lessig, L. Code is law: on liberty in cyberspace. Harvard Magazine https://harvardmagazine.com/2000/01/code-is-law-html (2000). 
    4. Zuboff, S. A digital declaration. Frankfurter Allgemeine http://www.faz.net/aktuell/feuilleton/debatten/the-digital-debate/shoshan-zuboff-on-big-data-as-surveillance-capitalism-13152525.html (2014). 
    5. Schumpeter, J. Capitalism, Socialism and Democracy. Routledge, London, (1994) [1942]. 
    6. Mahdawi, A. Google’s snooping proves big tech will not change – unless governments step in. The Guardian https://www.theguardian.com/commentisfree/2018/aug/14/googles-snooping-proves-big-tech-will-not-change-unless-governments-step-in (2018). 
    7. Oswald, K. Industry involvement in public health “like having burglars fit your locks”. MedwireNews https://www.news-medical.net/news/20130215/Industry-involvement-in-public-health-e28098like-having-burglars-fit-your-lockse28099.aspx (2013). 
    8. Moodie, R et al. Profits and pandemics: prevention of harmful effects of tobacco, alcohol, and ultra-processed food and drink industries. The Lancet 381, 670-679 (2013).
    9. AlgorithmWatch. The ADM Manifesto. https://algorithmwatch.org/en/the-adm-manifesto/ 
    10. Design for Values, http://designforvalues.tudelft.nl/ 
    11. IEEE Global Initiative. Ethically aligned design Version 1 and 2. http://standards.ieee.org/develop/indconn/ec/ead_v1.pdf (2016) and http://standards.ieee.org/develop/indconn/ec/ead_v2.pdf (2018). 
    12. Helbing, D., Seele, P. Sustainable development: turn war rooms into peace rooms. Nature 549, 458 (2017). doi:10.1038/549458c 
    13. Helbing, D. How to stop surveillance capitalism. The Globalist https://www.theglobalist.com/capitalism-democracy-technology-surveillance-privacy (2018). 
    14. Helbing, D. (ed.) Towards Digital Enlightenment (Springer International Publishing, 2018). 
    15. Urban, M. C. Accelerating extinction risk from climate change. Science, 348, 571–573 (2015).

 



[1] To avoid misuse, however, access to data, code, and functionality should be proportional to qualification and a reputation for responsible use.

Friday, 30 November 2018

IS THE “MORAL MACHINE” A TROJAN HORSE?

by Jan Nagler 1, 2 and Dirk Helbing 2,3,4

How should self-driving vehicles faced with ethical dilemmas decide? 

This question is shaking the very foundations of human rights.


In the “Moral Machine Experiment” (1), Awad et al. perform an international opinion poll on how autonomous vehicles should decide in dilemma situations. While the authors emphasize that local or majority preferences should not be followed blindly, they highlight challenges that policymakers would face if special groups of people were not given a special status. This may push politicians to follow popular votes, while car manufacturers already pay attention to opinion polls (2).

However, is a crowd-sourced ethics approach appropriate for deciding whether to prioritize children over elderly people, women over men, or athletes over overweight persons? Certainly not. The proposal overturns the equality principle on which many constitutions and the Universal Declaration of Human Rights are based.

While we acknowledge that laws have to be adapted and upgraded to account for emerging technologies, and that moral choices may be context-dependent, changing the most fundamental ethical principles underlying human dignity and human rights in order to market new technologies more successfully may result in a rapid erosion of the very basis of our societies.

Giving up the equality principle (as Citizen Scores do) could easily promote a new, digitally based feudalism. Moreover, in an unsustainable, “overpopulated” world, “moral machines” would be Trojan Horses: they would threaten more human lives than they would save. Autonomous AI systems (not necessarily cars or robots) could thereby introduce principles of hybrid warfare into our societies.

Instead of just managing moral dilemmas, we must undertake all reasonable efforts to reduce them. We therefore propose that autonomous and AI-based systems should conform with the principle of fairness, which suggests randomizing decisions, giving everyone the same weight (see the sketch below). Any deviation from impartiality would imply advantages for a select group of people, which would undermine incentives to minimize risks for everyone.
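
To make this concrete, here is a minimal illustration of our reading of the fairness principle (the options and code are hypothetical, not part of the published comment): in an unavoidable dilemma, the system selects among the available options uniformly at random, so that no personal attribute changes anyone's weight.

```python
# Minimal sketch of impartial, randomized decision-making (hypothetical options):
# every option gets the same probability, regardless of who would be affected.
import secrets

def impartial_choice(options):
    """Pick one option uniformly at random, ignoring all personal attributes."""
    return options[secrets.randbelow(len(options))]

print(impartial_choice(["swerve left", "swerve right", "brake straight"]))
```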


(1) E. Awad et al., The Moral Machine experiment, Nature 562, 59-64 (2018)
(2) A. Maxmen, Self-driving car dilemmas reveal that moral choices are not universal, Nature 562, 469-470 (2018)


Affiliations:
  1. Frankfurt School of Finance and Management, Adickesallee 32-34, Frankfurt, Germany
  2. Computational Social Science, Department of Humanities, Social and Political Sciences, ETH Zurich, Clausiusstrasse 50, CH-8092 Zurich, Switzerland
  3. TU Delft, Faculty of Technology, Policy, and Management, The Netherlands

  4. Complexity Science Hub, Vienna, Austria



    E-mail addresses: j.nagler@fs.de; dhelbing@ethz.ch

Comment on Awad et al., The Moral Machine Experiment, Nature 562, 59-64 (2018); Link: https://www.nature.com/articles/s41586-018-0637-6

Sunday, 8 July 2018

On the Use of Big Data and AI for Health

Pitfalls of Big Data Analytics

High-precision medicine requires reliable decisions about whom to treat best, in what way, when, and with what dose of which medicine, ideally even before a disease breaks out. This challenge, however, can only be met with large amounts of personal and/or group-specific data, which may be extremely sensitive, as such data may be used against the interests of patients (e.g. in the interest of profit maximization). Consequently, there are plenty of technical, scientific, ethical and political challenges.

This situation makes it particularly important to protect personal data from misuse by means of cybersecurity, to ensure a professional use of the data, and to implement suitable measures to achieve a maximum level of human dignity (including informational self-determination).

In the past, empirical and experimental analyses often suffered from a lack of data or small sample sizes. In many areas, including medical studies, this has changed or is about to change. Big Data therefore promises to overcome some common limitations of previous medical treatments, which were often not personalized, imprecise, ineffective, and associated with many side effects.

In the early days of Big Data, people expected to have found a general-purpose tool, something like a holy grail. It was believed that, if one just had enough data, data quantity would turn into data quality; the truth would basically reveal itself. This idea is probably best expressed by a quote from Chris Anderson, who, back in 2008, predicted “the end of theory” and wrote in Wired magazine: “The data deluge makes the scientific method obsolete.”

Along these lines it was claimed that it would now be possible to predict, or at least to “nowcast”, the flu from Google searches, as reflected by the platform Google Flu Trends. The company 23andMe offered to identify ethnic origin, phenotype, and likely diseases. Angelina Jolie said “knowledge is power” and had her breasts removed because her genetic test indicated a high chance that she would get breast cancer.

Later on, Google Flu Trends was shut down, doctors warned that Angelina Jolie should not be taken as an example, and 23andMe’s genetic test was temporarily taken off the market by the health authority. How could this happen? Google searches were no longer a reliable measurement instrument, as Google had started to manipulate people with suggestions (both through the autocomplete function and by means of personalized advertisements). Regarding attempts to predict diseases by means of genetic data, it was discovered that some people were doing very well even though they had been predicted to be very ill. Moreover, predictions were sometimes quite sensitive to adding or subtracting data points, to the choice of the Big Data algorithm, or (in some cases) even to the hardware used for the analysis.

Generally, it was thought that the more data one had, the more accurate the conclusions of the data analyses would be. However, the analyses often took correlations for causation, and they did not check for statistical significance; in many cases, it was not even clear what the appropriate null hypothesis was. So, in many cases, Big Data analytics was initially not compatible with established statistical and medical standards.

In fact, the more data one has, the higher the probability of finding patterns in the data just by chance. These patterns will often not be meaningful or significant. Spurious correlations are a well-known example of this problem. These are correlations that do not reflect a causal relationship, or where a third factor causes two effects to correlate although neither effect influences the other. In such cases, increasing or decreasing the measured variables would not have the expected effect. It could even be counterproductive. Careful causality analysis (using concepts such as Granger causality) is therefore absolutely required.
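
As a minimal illustration of both pitfalls, the following sketch (with simulated data, not taken from any study) shows that two independent random walks can be strongly correlated purely by chance, and how a Granger-causality test on the differenced series can serve as one sanity check; it assumes the statsmodels package is available.

```python
# Minimal sketch with simulated data: spurious correlation between two
# independent random walks, followed by a Granger-causality sanity check.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(42)
x = np.cumsum(rng.normal(size=500))   # e.g. search volume for some term
y = np.cumsum(rng.normal(size=500))   # e.g. weekly case counts

# Independent series, yet the raw correlation is often large:
print("Pearson correlation:", np.corrcoef(x, y)[0, 1])

# Does (differenced) x help to predict (differenced) y?
# By convention, the test asks whether column 2 Granger-causes column 1.
data = np.column_stack([np.diff(y), np.diff(x)])
results = grangercausalitytests(data, maxlag=3)
for lag, res in results.items():
    f_stat, p_value = res[0]["ssr_ftest"][:2]
    print(f"lag {lag}: F = {f_stat:.2f}, p = {p_value:.3f}")
```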

Another problem concerns undesirable discrimination. Suppose a health insurer wants to incentivize certain kinds of “healthy” diets, for example by reducing tariffs for people who eat more salad and less meat. As a side effect, it would then be likely that men pay different tariffs from women, and that Christians, Jews, and Muslims on average pay different tariffs as well, just because of their different religious and cultural traditions (see the illustration below). Such effects are considered discriminatory and need to be avoided. If one furthermore wants to avoid discrimination based on age, sexual orientation and other features that should not be discriminated against, Big Data analytics becomes a quite sophisticated challenge.
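
A small simulation (with invented numbers) illustrates how such indirect discrimination can arise: the tariff formula below never looks at group membership, yet average tariffs differ between groups, because the rewarded behaviour correlates with the group.

```python
# Minimal sketch (hypothetical numbers): a tariff that only looks at the
# "salad share" of a diet still produces different average tariffs for
# groups that never enter the formula, because diet correlates with group.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
group = rng.integers(0, 2, size=n)                                   # protected attribute
salad_share = np.clip(rng.normal(0.4 + 0.1 * group, 0.15, size=n), 0, 1)

base_tariff = 100.0
tariff = base_tariff * (1.2 - 0.5 * salad_share)   # discount for "healthy" diets

for g in (0, 1):
    print(f"group {g}: mean tariff = {tariff[group == g].mean():.2f}")
# The formula never uses `group`, yet the group means differ:
# the proxy variable (diet) re-introduces the protected attribute.
```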

Last but not least, even Big Data analytics will produce errors of the first and second kind, i.e. false alarms and alarms that don’t go off. This is a problem for many medical tests. Say a medical test costs x and a correct diagnosis creates a benefit of y, while a wrong one causes a damage of z. Moreover, assume that the test is correct with probability p and incorrect with probability (1-p). Then the overall expected utility of the test is u = -x + p*y - (1-p)*z, which might be neutral or even negative, depending on the impact of wrong diagnoses (a small worked example follows below). For example, false alarms are an issue for many kinds of cancer screening, and it is therefore sometimes advised not to test the entire population.
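
The following worked example, with made-up cost and benefit figures, shows how this expected utility can flip from positive to negative as the test's reliability drops.

```python
# Minimal worked example with made-up numbers: the expected utility of a test,
# u = -x + p*y - (1 - p)*z per tested person, can turn negative when wrong
# diagnoses are costly.
def expected_test_utility(cost, p_correct, benefit_correct, damage_wrong):
    """u = -x + p*y - (1-p)*z"""
    return -cost + p_correct * benefit_correct - (1 - p_correct) * damage_wrong

print(expected_test_utility(cost=50, p_correct=0.99,
                            benefit_correct=1000, damage_wrong=20000))  # 740.0  (> 0)
print(expected_test_utility(cost=50, p_correct=0.95,
                            benefit_correct=1000, damage_wrong=20000))  # -100.0 (< 0)
```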

In conclusion, the scientific method is absolutely indispensable to make sense of Big Data, i.e. to refine raw data into reliable information and useful knowledge. Hence, Big Data is not the end of theory, but rather the beginning.

A good illustration is flu prediction. When the spatio-temporal spreading of the flu is studied in geographic space, one will often find widely scattered data and low predictive power. This is related to the fact that the spreading of the flu is driven by air travel. However, it is possible to use data on air-travel passenger volumes to define an effective distance between cities, in which cities with high mutual passenger flows are located next to each other. In this effective-distance representation, the spreading pattern becomes circular and predictable. This approach makes it possible to identify the likely city in which a new disease emerged and to forecast the likely order in which cities will be hit by the flu. Hence, it is possible to take proactive measures to fight the disease more effectively.
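
A rough sketch of how such an effective distance can be computed is given below. The flow matrix is invented, and the formula d = 1 - ln(P_mn), with P_mn the fraction of the passenger flux leaving city m that goes to city n, follows the formulation used in work on network-driven contagion (e.g. by Brockmann and Helbing); shortest paths in this weighted network then give the effective distance from a presumed outbreak city.

```python
# Minimal sketch of the effective-distance idea described above
# (assumed toy flow matrix, not real passenger data).
import numpy as np
import networkx as nx

cities = ["A", "B", "C", "D"]
flux = np.array([[0, 500, 100, 10],      # hypothetical passengers per day
                 [500, 0, 300, 50],
                 [100, 300, 0, 400],
                 [10, 50, 400, 0]], dtype=float)

G = nx.DiGraph()
for m, src in enumerate(cities):
    out_flux = flux[m].sum()
    for n, dst in enumerate(cities):
        if m != n and flux[m, n] > 0:
            p_mn = flux[m, n] / out_flux
            G.add_edge(src, dst, weight=1.0 - np.log(p_mn))  # effective distance per hop

# Effective distance from the (assumed) outbreak city "A" to all other cities:
print(nx.single_source_dijkstra_path_length(G, "A"))
```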

Pitfalls of Machine Learning and Artificial Intelligence

With the rise of machine learning methods, new hopes emerged that the previously mentioned problems could be overcome with Artificial Intelligence (AI). The expectation was that AI systems would sooner or later become superintelligent and capable of performing any task better than humans, at least any specialized task.

In fact, AI systems are now capable of performing many diagnoses more reliably than doctors, e.g. diagnoses of certain kinds of cancer. Such applications can certainly be of tremendous use.

However, AI systems will make errors too, just perhaps less frequently. So, decisions or suggestions of AI systems must be critically questioned, particularly when a decision may have large-scale impact, i.e. when a single mistake can potentially cause large damage. This is necessary also because of a serious weakness of most of today’s AI systems: they do not explain how they arrive at their conclusions. For example, they do not tell us how likely it is that a suggestion is based on a spurious correlation. In fact, if AI systems turn correlations into laws (as cybernetic control systems or autonomous systems may do), this could eliminate important freedoms of decision-making.

Last but not least, it has been found that not only humans but also AI systems can be manipulated. Moreover, intelligent machines are not necessarily objective and fair: they may discriminate against people. For example, it has been shown that people of color and women are potential victims of such discrimination, in part because AI systems are typically trained with biased, historical data. So, machine bias is a frequent, undesired side effect and a serious risk of machine learning, which must be tested for and properly counteracted (a minimal audit sketch follows below).
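
One simple form of such a test is a group-wise error audit. The sketch below uses simulated predictions rather than a real model: it compares false-positive rates across two groups, and a large gap would indicate that one group is wrongly flagged much more often, so the training data or the model needs to be corrected.

```python
# Minimal sketch of a bias audit (simulated arrays): compare false-positive
# rates of a classifier's predictions across groups on a held-out test set.
import numpy as np

def false_positive_rate(y_true, y_pred):
    negatives = (y_true == 0)
    return (y_pred[negatives] == 1).mean() if negatives.any() else float("nan")

rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=10_000)               # protected attribute
y_true = rng.integers(0, 2, size=10_000)               # true labels
flip = rng.random(10_000) < (0.10 + 0.15 * group)      # group 1 misclassified more often
y_pred = np.where(flip, 1 - y_true, y_true)             # simulated model output

for g in (0, 1):
    mask = group == g
    print(f"group {g}: FPR = {false_positive_rate(y_true[mask], y_pred[mask]):.3f}")
```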

Thursday, 28 June 2018

Artificial Intelligence Can Be an Opportunity for Us All

By Dirk Helbing 
(ETH Zurich, TU Delft, Complexity Science Hub Vienna)
 

It was long a dream of Silicon Valley to build Artificial Intelligence (AI) that is more intelligent than humans and that solves the problems that have outgrown us. AI, it was thought, would not have our human flaws. It would be objective, fair and unemotional, could survey far more knowledge, decide faster and learn from data collected all over the world. Cities could be equipped with sensors and automated. At the end there would be a Smart Society that develops optimally, driven by data and steered by algorithms. We would only have to do what the smartphone tells us. Behavioural steering through personalized information and the notorious Chinese Citizen Score, a point account for the good behaviour of citizens, would take care of optimal societal control. By now, disillusionment has set in in many places. What once began as a utopia is today often seen as a nightmare.

With this, we are entering a new phase of digitization. The cards are being reshuffled. Europe has the chance to set its own course and thereby become a world market leader, with Artificial Intelligence systems that do not monitor and control people, but rather empower them and coordinate creative activities. The buzzword is now “value-sensitive design”. What is meant is this: we should build our constitutional, social, ecological and cultural values into the intelligent information platforms, so that they support us in reaching our societal goals while leaving room for creativity and innovation.

Where democratic values are concerned, aspects such as the following are important: human rights and human dignity, freedom, (informational) self-determination, pluralism, protection of minorities, the separation of powers, checks and balances, opportunities for participation, transparency, fairness, justice, legitimacy, anonymous and equal voting rights, and privacy, on the one hand in the sense of protection from exposure and misuse, on the other hand in the sense of a right to be left alone.

In our global coexistence, the following values moreover appear to be a promising basis for a successful, peaceful and networked information society: diversity, respect, opportunities for participation, self-determination, responsibility, quality, awareness, fairness, protection, resilience, sustainability and compliance.

It is not easy to build these properties into information systems, but we can learn to do it. We can build AI systems that move the world and all of us forward, provided there is broad and fair access to the potential of these systems. Imagine that the AI would not tell you what to do, but would instead help you to unfold your own talents and reach your own goals, and all the more so the more you (also) help others: a genie from the bottle, so to speak, that does good and helps us to help ourselves and others.

What today still sounds like utopia or science fiction could soon be reality. AI is an opportunity for the economy, for Europe and for all of us, if only we learn how to handle it, so that things do not end as they did for Goethe’s sorcerer’s apprentice. The Enquete-Kommission „Künstliche Intelligenz – gesellschaftliche Verantwortung und wirtschaftliche Potenziale“ (the parliamentary study commission on artificial intelligence, societal responsibility and economic potential) now has the chance to set the course for a promising, better future.

Thursday, 19 April 2018

Nudging – the Tool of Choice to Steer Consumer Behavior? Or What?


By Dirk Helbing (ETH Zurich/TU Delft/Complexity Science Hub Vienna)

Back in 2008, Thaler and Sunstein suggested that “nudging” would be a great new way to improve health, wealth and happiness. The method was euphemistically called “libertarian paternalism”: the nudger would be like a caring father, while the nudged person is claimed to retain all the freedom to decide as preferred, even though he or she would often not notice that he or she was being tricked.


People would be helped by companies or the state with subconscious nudges to correct their so-called “misbehaviour”. This earned Richard Thaler the Nobel Prize, but not Cass Sunstein, who had in the meantime written a critical book entitled “The Ethics of Influence”.


Let me say upfront that I don’t see a problem with putting the ecological energy mix at the top of a choice list or labelling it “green energy”. This is pretty harmless. People understand the trick, but they will often agree anyway.


However, nobody ever told us that we would be nudged every day, all the time, with personalized information tailored to us using personal data that was collected about us mostly without our knowledge and consent, effectively by means of mass surveillance.


This “big nudging”, which combines nudging with big (personal) data, must be criticized, as it undermines the very basis of our democracy, self-control, and human dignity.


Let us look back for a moment.


Already in the 1960s, the first climate studies by oil companies pointed out that carbon-based energy has a negative effect on the climate. But for a long time, it seems, nothing was done to change this.


Then, in the early 1970s, the Limits to Growth study warned us that, in a world with limited resources, we would sooner or later run into an economic and population collapse. No matter how the model parameters were changed, the predictions said humanity was doomed.


The Global 2000 study commissioned by then US president Jimmy Carter basically confirmed these predictions. However, it again assumed that we would not change the system of equations, i.e. the socio-economic system we live in.


Finally, the United Nations established the Agenda 2030, pressing for urgent measures towards a sustainable planet.


So, 50 years after our sustainability problem was diagnosed, is “big nudging” really the best solution to our sustainability problems? Should companies digitally steer the behaviours of the people?


This assumes, in effect, that companies are the good guys, who do the right things and should therefore have all conceivable freedoms: in particular, they should be able to develop, produce and sell products as they like. The people, in contrast, would be the bad guys, who show “misbehaviour”, as Richard Thaler would call it, and whose behaviours would therefore have to be corrected and controlled.


What would this mean? Let me give two examples:

  • The above approach foresees that producers of sweet lemonades would sell unhealthy products and advertise to increase consumption, while our health insurance would give us minus points for buying and drinking lemonades and charge us higher tariffs. 
  • The car industry would go on selling as many cars as it could, but politics or some citizen score would forbid most of us to use them most of the time. The Diesel scandal, which will bar many car owners from using their cars in the central parts of many cities, would be just a glimpse of what is to come.

Does such a model make sense? I am not convinced. Are you?


So, is the proposed solution, which comes under names such as profiling, targeting, neuromarketing, persuasive computing, big nudging, and scoring, really our saviour?


Unfortunately, as advanced as these technologies may be, they tend to be totalitarian in nature.


The Chinese Citizen Score, for example, has been heavily criticized by all major Western media.


But the situation in Western democracies is not so different. Tristan Harris, who worked in a “control room” at Google, where public discourse was shaped, recently exposed the mind control that a few tech companies exert over billions of people every day.


Moreover, if one traces back the actors and history of the underlying technologies and science, one ends up in the 1930s with their infamous behavioural experiments. This link to fascist times and thinking doesn’t make things better.


How could things come that far?


We are living in a society that thrives on the combination of two very successful systems: capitalism and democracy.


Unfortunately, this model is not good enough anymore. It hasn’t created a sustainable future, and so, as I have pointed out before, our world is heading for a doomsday scenario if we don’t change our system.


Unfortunately also, neither the public nor scientists were made sufficiently aware that, over the past 50 years, we should have done little else than re-invent society.


Furthermore, unfortunately, democracy and capitalism today do not have aligned goals. Capitalism tries to maximize profit, i.e. a one-dimensional quantity, while democracy should continuously increase human dignity, i.e. strive for multiple goals, including knowledge, health, well-being, empathy, peace, and opportunities to unfold individual talents.


Everyone should have understood that, if we did not manage to align the goals of both systems, one system would sooner or later crush the other. Recently, it often appears that it is democracy that is being crushed.


Let me shortly talk about the new kind of data-driven society that was created:


We now have a new monetary system, which is based on data. Data is the new oil. This data is mined by what we call “surveillance capitalism”, where people are the product.


We also live in a new kind of economy: the attention economy. People are flooded with information. Attention has become a scarce good that is traded among companies. This allows them to influence people’s consumption, opinions, emotions, decisions and behaviours.


We further have a new legal system: “code is law”. Algorithms decide what we can do and what we can’t. They are the new “laws of our society”. “Precrime” programs are just one example of this. The algorithmic laws, however, are usually not passed by our parliaments.


Altogether, this has also led to a new kind of political system, in which companies such as Cambridge Analytica, Facebook and Google manipulate the choices of voters and thereby undermine democracies and the free, unbiased competition of ideas.


A digital sceptre, enabled by the combination of big data and nudging, would now make it possible to steer society and correct the alleged misbehaviours of people, as is currently being tested in China.


This “brave new world” was created without asking the people. It hasn’t been passed by parliament, at least not openly. While these developments have been going on for more than 15 years now, probably for decades, the public media have not informed us well and in advance.


We have been sleep-walking – and for a long time, we have not noticed the silent coup that was going on. But now we are discussing these developments, and that’s why democracy will win.


What do we need to do?


We must build “democratic capitalism”. This means to democratically upgrade capitalism and to digitally upgrade democracy.


We need information platforms and technologies that have our constitutional, societal, cultural and ecological values built in. We call this approach “design for values”.


And it’s coming. The IEEE, the biggest international association of engineers, is already working on standards for ethically aligned design.


What does design for values mean for our society? It means that the democratic principles, i.e. the lessons that we have learned over hundreds of years in terrible wars and bloody revolutions, would have to be built into our technologies.


This includes: human rights and human dignity, freedom and self-determination, pluralism and protection of minorities, the division of power, checks and balances, participatory opportunities, transparency, fairness, justice, legitimacy, anonymous and equal votes, as well as privacy in the sense of protection from misuse and exposure, and a right to be left alone.


How can informational self-determination be enabled in a big data world? Assume every one of us had a personal data mailbox to which all the data created about us would have to be sent. The principle to be legally and technologically established would be that, in the future, we decide who is allowed to use what data, for what purpose, for what period of time, and at what price. An AI-based digital assistant would help us administer our data according to our privacy and other preferences. Uses of personal data, including statistics created for science and for politics, would have to be transparently reported to the data mailbox (a minimal sketch of such a mailbox follows below).
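
What such a data mailbox could look like is sketched below; the data structures and fields are purely illustrative assumptions, not an existing system or standard.

```python
# Minimal sketch (hypothetical data structures): a "data mailbox" that checks
# each requested use of personal data against consents specifying who may use
# which data, for what purpose, until when, and at what price, and logs every request.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Consent:
    user: str              # who may use the data
    category: str          # which kind of data (e.g. "mobility", "health")
    purpose: str           # for what purpose
    valid_until: datetime  # for what period of time
    price: float           # compensation agreed for this use

@dataclass
class DataMailbox:
    owner: str
    consents: list = field(default_factory=list)
    usage_log: list = field(default_factory=list)

    def may_use(self, user, category, purpose, when):
        ok = any(c.user == user and c.category == category
                 and c.purpose == purpose and when <= c.valid_until
                 for c in self.consents)
        self.usage_log.append(f"{when.isoformat()} {user} {category} {purpose} -> {ok}")
        return ok

box = DataMailbox(owner="alice")
box.consents.append(Consent("clinic", "health", "diagnosis", datetime(2019, 12, 31), price=0.0))
print(box.may_use("clinic", "health", "diagnosis", datetime(2018, 7, 8)))       # True
print(box.may_use("ad-network", "health", "marketing", datetime(2018, 7, 8)))   # False
```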

With this approach, all personalized products and services would be possible, but companies would have to ask in advance and gain the trust of the people. This would create a competition for trust and, eventually, a trust-based digital society in which we all want to live.


Furthermore, we would have to upgrade our financial system towards a multi-dimensional real-time feedback system, as can now be built by means of the Internet of Things and blockchain technology. Such a multi-dimensional incentive and coordination system is needed to manage complex systems more successfully and to enable self-organizing, self-regulating systems.


So, assume we measured, on separate scales, the externalities of our behaviour on the environment and on other people, for example noise, CO2 and waste produced, or knowledge, health, and the re-use of waste created. Suppose also that people gave these externalities a value or price in a subsidiary decision process. (Some people would call this a tokenization of our world.) Then we could build our value system into our future financial system. I call this system the socio-ecological finance and coordination system (or finance system 4.0+); a minimal sketch follows below.
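
The sketch below illustrates the idea with invented externality dimensions and prices; it is only a toy multi-dimensional account, not a specification of the proposed system.

```python
# Minimal sketch (invented dimensions and prices): each actor's externalities are
# tracked on separate scales; prices per dimension, set in a subsidiary decision
# process, translate them into credits or charges.
from collections import defaultdict

# Assumed prices per unit of externality (negative = charge, positive = credit):
prices = {"co2_kg": -0.05, "noise_db_h": -0.01, "waste_kg": -0.10,
          "recycled_kg": 0.08, "knowledge_shared": 0.50}

class ExternalityAccount:
    def __init__(self, owner):
        self.owner = owner
        self.dimensions = defaultdict(float)   # one separate scale per dimension

    def record(self, dimension, amount):
        self.dimensions[dimension] += amount

    def balance(self):
        """Monetary value of all recorded externalities at the current prices."""
        return sum(prices.get(d, 0.0) * v for d, v in self.dimensions.items())

acct = ExternalityAccount("household-1")
acct.record("co2_kg", 120.0)
acct.record("recycled_kg", 30.0)
print(round(acct.balance(), 2))   # 120*(-0.05) + 30*0.08 = -3.6
```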


People could then earn money by recycling. Companies could earn money for environmentally friendly or socially responsible production. In this way, new market forces would be unleashed that would let a circular and sharing economy emerge over time.


Personally, I don’t think there are too few resources for everyone in the world. We don’t have an over-population problem. Our problem is rather that the organization of our economy is outdated.


I think we are living in a time in which we have to fundamentally re-organize our society and economy in the spirit of democratic capitalism, based on the values of our society.


I am also convinced that energy won’t be the bottleneck. But we will have to take new avenues. In the past, the focus was often on big solutions that would produce energy for a lot of people. I propose that we should focus more on decentralized, local and more democratic forms of energy production.


Modern physics tells us that our universe is full of energy. In fact, it is entirely made up of energy. It would not be plausible to assume that we could not learn to use it.


I expect that a more democratic production and use of energy, goods and services will lead our society to an entirely new level. It is high time to focus on this transition, and how we can accomplish it together.


The instrument of City Olympics, i.e. competitions among cities for sustainable and resilient open-source solutions to the world’s pressing problems, could help us find the way.