Tuesday, 9 July 2013

From Technology-Driven Society to Socially Oriented Technology - The Future of the Information Society: Alternatives to Surveillance

by Dirk Helbing (ETH Zurich)


Our society is changing. Almost nothing these days works without a computer chip; computing power doubles every 18 months, and in ten years it will probably exceed the capabilities of a human brain. Computers perform approximately 70 percent of all financial transactions today, and IBM's Watson now seems to give better customer advice than some human telephone hotlines.

The forthcoming economic and social transformation might be more fundamental than the one resulting from the invention of the steam engine. Meanwhile, data storage capacity grows even faster than computational capacity. Within a few years, we will generate more data than in the entire previous history of humankind. The "Internet of Things" will soon network trillions of sensors together - fridges, coffee machines, electric toothbrushes and even our clothes. Vast amounts of data will be collected. Already, Big Data is being heralded as the oil of the 21st century.

But this situation will also make us vulnerable. Exploding cyber-crime, economic crises and social protests show that our hyper-connected world is destabilizing. However, is a surveillance society the right answer? When all our Internet queries are stored, when our purchases and social contacts are evaluated, when our emails and files are scanned for search terms, and when countless innocent citizens are classified as potential future terrorists, we must ask: Where will this lead? And where will it end?

Will surveillance lead to self-censorship and discrimination against intellectuals and minorities, even though innovation and creative thinkers are bitterly needed for our economy and society to do well in our changing world? Will free human expression eventually be curtailed by data mining machines analyzing our digital trails?

What are the consequences if, say, even the Swiss banks and the U.S. government can no longer protect their secrets, or if our health and other sensitive data are sold on? Or if politically and commercially sensitive strategies can be monitored in real time? What if insider knowledge can be used to undermine fair competition and justice?

The recent allegations that the intelligence agencies of various states secretly snoop on the activities of millions of ordinary people have alarmed citizens and companies alike. The moral outrage in response to the surveillance activity has made it clear that it is not a technology-driven society that we need, but rather a socially oriented technology, as outlined below. We must recognize that technology developed without consideration of ethical issues, and without transparency and public discussion, can lead us astray. Therefore, a new approach to personal data and its uses is required so that we can safely benefit from the many new economic and social opportunities that it can provide.

First, we need a public ethical debate on the concepts of privacy and ownership of data, even more urgently than in bioethics. Important questions that we have to ask are: How do we create the opportunities arising in the information age for all, yet still manage the downside risks and challenges - from cyber-crime to the erosion of trust and democratic rights? Do we really need so much security that we must be afraid of data mining algorithms flagging the activities of millions of ordinary people as suspicious? And what kinds of new institutions will we need in the 21st century?

In the past we have built public roads, parks and museums, schools, libraries and universities. Now, more than ever, we need strategies that protect us against the misuse of data, and that are intended to create transparency and trust. These strategies must place citizen benefits and rights of self-determination at the very core. In addition, we must develop new institutions to provide oversight and control of the new challenges brought by the data revolution. Here are some concrete institutional proposals:

Self-determined use of personal data: Some time ago, the World Economic Forum (WEF) called for a "New Deal on Data". It stated that the sustainable use of the economic opportunities of personal data requires a fair balance between economic, governmental and individual interests. A solution would be to return control over personal data to the respective individuals, i.e. give people ownership of their data: the right to possess, access, use and dispose of it. In addition, individuals should be able to share in the economic profits made with their data. This would require new data protocols and the support of legislation.

Trusted information exchange: As the vulnerability of existing systems and the proliferation of cyber-crime indicates, a new network architecture is urgently needed. The handling of sensitive data requires secure encryption, anonymisation and protected pseudonyms, decentralized storage, open software codes and transparency on the use of data, correction possibilities, mechanisms of forgetting, and a protective "digital immune system."

Credibility mechanisms: Social mechanisms such as reputation, as seen in the evaluation of information and information sources on the internet, can play a central role in reducing abuse. But remember that the wisdom of crowds only works if individual decisions are not manipulated. Therefore, to be effective, individuals must be given control over the recommendation mechanisms, data filtering and search routines they use, such that they can take decisions based on their own values and quality criteria.

Participatory platforms: All over the world, people desire increased participation, from consumption to production processes. Modern technology now allows for the direct social, economic, and political participation of engaged individuals. A direct-democracy approach as in Switzerland, where many laws are decided by the people themselves rather than only by their political representatives, would be feasible on much larger scales. We also witness an economic trend towards local production, ranging from solar panels to 3D printers. It can become a good complement to mass production.

Open Data: The innovation ecosystem needs open data and open standards to flourish. Open data enable the rapid creation of new products, which stimulates further products and services. Information is the best catalyst for innovation. Of course, data providers should be adequately compensated, and not all data would have to be open.

Innovation Accelerator: To keep pace with our changing world, we need to reinvent the innovation process itself. A participatory innovation process would allow ideas to be implemented faster and external expertise to be integrated more readily. Information is an extraordinary resource: it does not diminish when shared, and it can be infinitely reproduced. Why shouldn't we use this opportunity?

Social Capital: Information systems can support diverse types of social capital such as trust, reputation, and cooperation. Grown from social network interactions, these are the foundation of a flourishing economy and society. So, let's create new value!

Social Technologies: Finally, we must learn to build information systems that are compatible with our individual, social and cultural values. We need to design systems that respect the privacy of citizens and prevent fear and discrimination, while promoting tolerance, trust, and fairness. What solutions can we offer users to ensure that information systems are not misused for unjustified monitoring and manipulation? For a well-functioning society, socio-diversity (pluralism) must be protected as much as biodiversity. Both determine the potential for innovation.

These are just some examples of the promising ways in which we could use the Internet of the future. Among all these, a surveillance society is probably the worst of all uses of information technology. A safe and sustainable information society has to be built on reputation, transparency and trust, not mass surveillance.

If we can no longer trust our phones, computers or the Internet, we will either switch off our equipment or start to behave like agents of a secret service: revealing as little information as possible, encrypting data, creating multiple identities, laying false traces.

Such behaviour would bring ordinary citizens few benefits beyond protection, but might help criminals to hide. It would be a pity if we failed to use the opportunities afforded by the information age, just because we did not think hard or far enough about the technological and legal frameworks and institutions needed.

The information age is now at a crossroads. It may eventually lead us to a totalitarian surveillance state, or we can use it to enable a creative, participatory society. It is our decision, and we should not leave it to others.


It is also time to build the institutions for the globalized information society to come, in a world-wide collaboration, instead of starting a global war of information systems.




Related Readings – by Dirk Helbing

Google as God? Opportunities and Risks of the Information Age

Qualified Trust, not Surveillance, is the Basis for a Stable Society

Why Mass Surveillance Does Not Work

How to Ensure that the European Data Protection Legislation Will Protect the Citizens



Other Related Readings

Statement by Vice President Neelie Kroes "on the consequences of living in an age of total information" 04/07/2013

Consumer Data Privacy In A Networked World: A Framework For Protecting  Privacy And Promoting Innovation  In The Global Digital Economy

Big Data Is Opening Doors, but Maybe Too Many

Personal Data: The Emergence of a New Asset Class

The Global Information Technology Report 2008–2009 Mobility in a Networked World


Thursday, 27 June 2013

Why Mass Surveillance Does Not Work

by Dirk Helbing (ETH Zurich, dhelbing@ethz.ch)

These days, it is often claimed that we need massive surveillance to ensure a high level of security. While the idea sounds plausible, I will explain why this approach cannot work well, even if secret services have the very best intentions and their sensitive knowledge is not misused. This is a matter of statistics - no method is perfect.
  
For the sake of illustration, let us assume there are 2,000 terrorists in a country with 200 million inhabitants. Moreover, let us assume that the secret service manages to identify terrorists with an amazing 99% accuracy. Then there are 1% false negatives (type II errors), which means that 20 terrorists are not detected, while 1,980 will be caught. The actual numbers are much smaller: it has been declared that 50 terror acts were prevented in about 12 years, while a few terrorist attacks could not be stopped (although the terrorists were often listed as suspects).
  
It is also important to ask how many false positives ("false alarms") there are. If the type I error rate is just 1 in 10,000, there will be 20,000 wrong suspects; if it is 1 permille, 200,000 wrong suspects; and if it is 1 percent, 2 million false suspects. Recent figures I heard on TV spoke of 8 million suspects in the US in 1996, which would imply an error rate of about 4 percent. If these figures are correct, then for every terrorist, 4,000 innocent citizens would be wrongly categorized as (potential) terrorists.
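The base-rate arithmetic behind these figures can be verified with a short script; the population size, number of terrorists and error rates below are the illustrative values from the text, not real statistics.

```python
# Base-rate check for the illustrative surveillance example:
# 2,000 terrorists in a population of 200 million, a detector with a
# 99% true-positive rate, and three assumed false-positive rates.
population = 200_000_000
terrorists = 2_000
innocents = population - terrorists

true_positive_rate = 0.99
caught = terrorists * true_positive_rate  # 1,980 terrorists flagged
missed = terrorists - caught              # 20 false negatives

for false_positive_rate in (1 / 10_000, 1 / 1_000, 1 / 100):
    false_alarms = innocents * false_positive_rate
    # Probability that a flagged person actually is a terrorist
    p_guilty = caught / (caught + false_alarms)
    print(f"FPR {false_positive_rate:.2%}: "
          f"{false_alarms:,.0f} innocent suspects, "
          f"P(terrorist | flagged) = {p_guilty:.2%}")
```

Even at the most optimistic error rate, a flagged person is many times more likely to be innocent than guilty; at a 1 percent error rate, roughly one in a thousand flagged people is an actual terrorist.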
  
Hence, large-scale surveillance is not an effective means of fighting terrorism. It rather tends to restrict the freedom rights of millions of innocent citizens. It is not reasonable to apply surveillance to the whole population, for the same reason that it is not sensible to run a certain medical test on everybody: there would be millions of false positives, i.e. millions of people who would be wrongly treated, with negative side effects on their health. For this reason, patients are tested for diseases only if they show worrying symptoms.
  
In the very same way, it creates more harm than benefit if everybody is screened as a potential future terrorist. This will cause unjustified discrimination and harmful self-censorship at a time when unconventional new ideas are needed more than ever. It will impair the ability of our society to innovate and adapt, thereby promoting instability. Thus, it is time to pursue a different approach, namely to identify the social, economic and political factors that promote crime and terrorism, and to change these factors. Just two decades ago, most modern societies saw comparatively few security problems. Overall, people tolerated each other and coexisted peacefully, without massive surveillance and policing. We were living in a free and happy world, where people of different cultural backgrounds respected each other and did not have to live in fear. Can we have this time back, please?

Reference:

Type I and type II errors, see https://en.wikipedia.org/wiki/Type_I_and_type_II_errors

Monday, 17 June 2013

How to Ensure that the European Data Protection Legislation Will Protect the Citizens

by Dirk Helbing (ETH Zurich, dhelbing@ethz.ch)
(an almost identical version has been forwarded to some Members of the European Parliament on April 7, 2013)


Some serious, fundamental problems to be solved 

The first problem is that, when two or more anonymous data sets are combined, this may allow deanonymization, i.e. the identification of the individuals whose data have been recorded. Mobility data, in particular, can easily be deanonymized.

A second fundamental problem is that it must be assumed that the large majority of people in developed countries, including the countries of the European Union, have already been profiled in detail, given that individual devices can be identified with high accuracy through their individual configurations (including the software used and its settings). There are currently about 700 million commercial data sets about users, specifying an estimated 1,500 variables per user.

A third problem is that both the CIA and the FBI have revealed that, besides publicly or semi-publicly available data on the Web or social media, they are or will be storing or processing private data, including Gmail and Dropbox data. The same applies to many secret services around the world. It has also become public that the NSA seems to collect all data it can get hold of.

A fourth fundamental problem is that Europe currently does not have the technical means, algorithms, software, data and laws to counter foreign dominance regarding Big Data and its potential misuse.

General principles and suggested approach to address the above problems


The age of information will only be sustainable if people can trust that their data are being used in their interest. The spirit and goal of data regulations should be to ensure this.

Personal data are data characterizing individuals or data derived from them. People should be the primary owners of their personal data. Individuals, companies or government agencies that gather, produce, process, store, or buy data should be considered secondary owners. Whenever personal data are from European citizens, or are being stored, processed, or used in a European country or by a company operating in a European country, European law should apply.

Individuals should be allowed to use their own personal data in any way compatible with fundamental rights (including sharing them with others, for free or at least for a small monthly fee covering the use of ALL their personal data – like the radio and TV fee). [Note: This is important to unleash the power of personal data to the benefit of society and to close the data gap that Europe has.]

Individuals should have a right to access a full copy of all their personal data through a central service and be suitably protected from misuse of these data.

They should have the right to limit the use of their personal data at any time and to request their correction or deletion in a simple, timely way and for free.

Fines should apply to any person or company or institution having or creating financial or other advantages by the misuse of personal data.

Misuse includes in particular sensitive use that may have a certain probability of violating human rights or justified personal interests. Therefore, it must be recorded what error rate the processing (and, in particular, the classification) of personal data has, specifying what permille of users feel disadvantaged.

A central institution (which might be an open Web platform) is needed to collect user complaints. Sufficient transparency and decentralized institutions are required to take efficient, timely and affordable action to protect the interest of users.

The execution of user rights must be easy, not time consuming, and cheap (essentially for free). For example, users must not be flooded with requests regarding their personal data. They must be able to effectively ensure a self-determined use of personal data with a small individual effort.

To limit misuse, transparency is crucial. For example, it should be required that large-scale processing of personal data (i.e. at least the queries that were executed) must be made public in a machine-readable form, such that public institutions and NGOs can determine how dangerous such queries might be for individuals.

Proposed definitions

As indicated above, there are practically no data that cannot be deanonymized if combined with other data. However, the following may serve as a practical definition of anonymity:

Anonymous data are data in which a person of interest can only be identified with a probability smaller than 1/2000, i.e. there is no way to find out which one among two thousand individuals has the property of interest.
Hence, the principle is to dilute persons with a certain property of interest among 2,000 persons with significantly different properties, in order to make it unlikely to identify persons with the property of interest. This principle is guided by the way election data and other sensitive data are used by public authorities. It also makes sure that private companies do not have a data-processing advantage over public institutions (including research institutions).
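As a toy sketch of this dilution rule (the data set, field name and `releasable` helper are invented for illustration), an aggregate answer would only be released when at least 2,000 individuals share the property of interest, so that any single one of them is identified with probability below 1/2000:

```python
# Release an aggregate count only if the group sharing the property of
# interest has at least k = 2,000 members, so no individual in it can
# be pinpointed with probability 1/2000 or higher.
from collections import Counter

K = 2000  # minimum group size implied by the 1/2000 threshold

def releasable(records, property_of_interest):
    """Return the count for the property if the group is large enough
    to satisfy the anonymity threshold; otherwise suppress it."""
    counts = Counter(r["property"] for r in records)
    n = counts.get(property_of_interest, 0)
    return n if n >= K else None  # None = suppressed as too identifying

# Made-up data: 2,500 records with property "A", only 30 with "B".
records = [{"property": "A"}] * 2500 + [{"property": "B"}] * 30
print(releasable(records, "A"))  # large group: count may be released
print(releasable(records, "B"))  # small group: suppressed (None)
```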

I would propose to characterize pseudonymous data as data not suited to reveal or track the user, or properties correlated with the user that he or she has not explicitly chosen to reveal in the specific context. I would furthermore suggest characterizing pseudonymous transactions as processing and storing only the minimum amount of data required to perform a service requested by a user (which in particular implies not processing or storing technical details that would allow one to identify the device and software of the user). Essentially, pseudonymous transactions should not be suited to identify the user or variables that might identify him or her. Typically, a pseudonym is a random or user-specified variable that allows one to sell a product or perform a service for a user anonymously, typically in exchange for an anonymous money transfer.

To allow users to check pseudonymity, the data processed and stored should be fully shared with the user via an encrypted webpage (or similar) that is accessible for a limited, but sufficiently long time period through a unique and confidential decryption key made accessible only to the respective user. It should be possible for the user to easily decrypt, view, copy, download and transfer the data processed and stored by the pseudonymous transaction in a way that is not being tracked.

Further information:


  • Difficulty to anonymize data
  • Danger of a surveillance society
  • New Deal on Data: how to consider consumer interests
  • HP software allowing personalized advertisement without revealing personal data to companies; contact: Prof. Dr. Bernardo Huberman: huberman@hpl.hp.com
  • FuturICT initiative: www.futurict.eu
Information on the proposer

Dirk Helbing is Professor of Sociology, in particular of Modeling and Simulation, and member of the Computer Science Department at ETH Zurich. He is also elected member of the German Academy of Sciences. He earned a PhD in physics and was Managing Director of the Institute of Transport & Economics at Dresden University of Technology in Germany. He is internationally well-known for his work on pedestrian crowds, vehicle traffic, and agent-based models of social systems. Furthermore, he is coordinating the FuturICT Initiative (www.futurict.eu), which focuses on the understanding of techno-socio-economic systems, using Big Data. His work is documented by hundreds of well-cited scientific articles, dozens of keynote talks and hundreds of media reports in all major languages. Helbing is also chairman of the Physics of Socio-Economic Systems Division of the German Physical Society, co-founder of ETH Zurich’s Risk Center, and elected member of the World Economic Forum’s Global Agenda Council on Complex Systems.

Saturday, 8 June 2013

Qualified Trust, not Surveillance, is the Basis for a Stable Society - Dirk Helbing

Peaceful citizens and hard-working taxpayers are under government surveillance. Confidential communication of journalists is intercepted. Civilians are killed by drones, without a chance to prove their innocence.[1] How could it come to this? Since September 11, freedom rights have been restricted in most democracies, step by step. Each terrorist threat has delivered new reasons to extend the security infrastructure, which is now reaching Orwellian dimensions. Through its individual configuration, every computer has an almost unique fingerprint, allowing one to record our use of the Web. Privacy is gone. Over the past years, up to 1,500 variables about half a billion citizens in the industrial world have been recorded. Google and Facebook know us better than our friends and families do.

Nevertheless, governments have failed so far to gain control of terrorism, drug traffic, cybercrime and tax evasion. Would an omniscient state be able to change this and create a new social order?[2] It seems at least to be the dream of secret services and security agencies.   
Ira "Gus" Hunt, the CIA Chief Technology Officer, recently said:[3]

"You're already a walking sensor platform… You are aware of the fact that somebody can know where you are at all times because you carry a mobile device, even if that mobile device is turned off. You know this, I hope? Yes? Well, you should… Since you can't connect dots you don't have, it drives us into a mode of, we fundamentally try to collect everything and hang on to it forever… It is really very nearly within our grasp to be able to compute on all human generated information." 

Unfortunately, connecting the dots often does not work. As complex systems experts point out, such "linear thinking" can be totally misleading. It's the reason why we often want to do the right things, but take the wrong decisions.

I agree that our world has destabilized. However, this is not a result of external threats, but of system-immanent feedback effects. The increasing interdependencies, connectivity and complexity of our world, along with further trends, are causing this.[4] However, trying to centrally control this complexity is destined to fail. We must rather learn to embrace the potential of complexity. This requires a step towards decentralized self-regulatory approaches. Many of us believe in Adam Smith's "invisible hand", according to which the best societal and economic outcome is reached if everybody just does what is best for himself or herself. However, this principle is known to produce "market failures", "financial meltdowns", and other "tragedies of the commons" (such as environmental degradation) under certain circumstances. The classical approach is to try to "fix" these problems through top-down regulation by a powerful state.

However, self-regulation based on decentralized rules can be learned. This has been demonstrated for modern traffic control concepts, but it's equally relevant for smart grids, and will be even more important for the financial system. The latter, for example, needs built-in breaking points similar to the fuses in our electrical network at home, and it requires additional control parameters to equilibrate. 

There is an alternative to uncoordinated bottom-up organization and too much top-down regulation -- a better one: the "economy 2.0". Taking the step towards a self-regulating, participatory market society can unleash the unused potential of the complexity and diversity we are currently trying to fight.[5] This step can boost our societies and economies as much as the transition from centrally regulated societies to the market societies inspired by Adam Smith. But after 300 years, it is now time for a new paradigm. Societies based on surveillance and punishment are not sustainable in the long term. When controlled, people get angry, and the economy never thrives. Qualified trust is a better basis for resilient societies. But how to build it? Reputation systems are now spreading all over the Web. If properly designed, they could be the basis of a self-regulating societal and market architecture. Further success principles of decentralized self-regulating systems can be learned from ecological and immune systems. They can also be a basis for a trustable Web, which can successfully neutralize harmful actions and contain cybercrime.

Rather than in surveillance technology, governments should invest their money in the creation of self-regulating architectures. This will be crucial for a successful transition to a new era -- the era of information societies. If we take the right decisions, the 21st century can be an age of creativity, prosperity and participation. But if we take the wrong decisions, we will end up in economic and democratic depression. It's our choice.



[2] The subject is discussed in my essay "Google as God?", see http://arxiv.org/abs/1304.3271
[4] D. Helbing, Globally Networked Risks and How to Respond, Nature 497, 51-59 (2013), see http://www.nature.com/nature/journal/v497/n7447/full/nature12047.html
[5] D. Helbing, Economics 2.0: The Natural Step towards a Self-Regulating, Participatory Market Society (2013), see http://arxiv.org/pdf/1305.4078v2.pdf

Thursday, 2 May 2013


Global Networks Must be Re-Designed

The increasing interdependencies between the world’s technological, socio-economic, and environmental systems have the potential to create global catastrophic risks. We may have to re-design many global networks, concludes Professor Dirk Helbing at ETH Zurich’s Risk Center in this week’s issue of Nature. Otherwise they could turn into “global time bombs”.


Living in a Hyper-Connected World

Our global networks have generated many benefits and new opportunities. However, they have also established highways for failure propagation, which can ultimately result in man-made disasters. For example, today’s quick spreading of emerging epidemics is largely a result of global air traffic, with serious impacts on global health, social welfare, and economic systems.

Helbing’s publication illustrates how cascade effects and complex dynamics amplify the vulnerability of networked systems. For example, just a few long-distance connections can greatly decrease our ability to mitigate the threats posed by global pandemics. Initially beneficial trends, such as globalization, increasing network densities, higher complexity, and an acceleration of institutional decision processes, may ultimately push man-made or human-influenced systems towards systemic instability, Helbing finds. Systemic instability refers to a system that will get out of control sooner or later, even if everybody involved is well skilled, highly motivated and behaving properly. Crowd disasters are shocking examples illustrating that many deaths may occur even when everybody tries hard not to hurt anyone.

Our Intuition of Systemic Risks Is Misleading

Networking system components that are well-behaved in separation may create counter-intuitive emergent system behaviors, which are not well-behaved at all. For example, cooperative behavior might unexpectedly break down as the connectivity of interaction partners grows. “Applying this to the global network of banks, this might actually have caused the financial meltdown in 2008,” believes Helbing.

Globally networked risks are difficult to identify, map and understand, since there are often no evident, unique cause-effect relationships. Failure rates may change depending on the random path taken by the system, with the consequence of increasing risks as cascading failures progress, thereby decreasing the capacity of the system to recover. “In certain cases, cascade effects might reach any size, and the damage might be practically unbounded,” says Helbing. “This is quite disturbing and hard to imagine.” All of these features make strongly coupled, complex systems difficult to predict and control, such that our attempts to manage them go astray.
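The point that cascades may reach any size can be illustrated with a deliberately simple load-redistribution model; the ring topology, loads and capacities below are invented for illustration and are not taken from the Nature paper.

```python
# Toy cascade model: nodes sit on a ring; when a node's load exceeds
# its capacity, it fails and sheds half of its load onto each of its
# two ring neighbours (load shed towards already-failed nodes is lost).
def cascade_size(loads, capacity):
    loads = list(loads)
    failed = [False] * len(loads)
    changed = True
    while changed:
        changed = False
        for i in range(len(loads)):
            if not failed[i] and loads[i] > capacity:
                failed[i] = True
                changed = True
                share = loads[i] / 2
                loads[i] = 0.0
                for j in ((i - 1) % len(loads), (i + 1) % len(loads)):
                    if not failed[j]:
                        loads[j] += share
    return sum(failed)

# With little spare capacity, one overloaded node takes down the whole
# ring; with ample headroom, the same shock stays local.
print(cascade_size([1.2] + [0.9] * 9, capacity=1.0))
print(cascade_size([1.2] + [0.3] * 9, capacity=1.0))
```

The system-wide outcome is determined less by the size of the initial shock than by the spare capacity of the network, which mirrors the argument for building headroom and breaking points into real infrastructures.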

“Take the financial system,” says Helbing. “The financial crisis hit regulators by surprise.” But back in 2003, the legendary investor Warren Buffet warned of mega-catastrophic risks created by large-scale investments into financial derivatives. It took 5 years until the “investment time bomb” exploded, causing losses of trillions of dollars to our economy. “The financial architecture is not properly designed,” concludes Helbing. “The system lacks breaking points, as we have them in our electrical system.” This allows local problems to spread globally, thereby reaching catastrophic dimensions.

A Global Ticking Time Bomb?

Have we unintentionally created a global time bomb? If so, what kinds of global catastrophic scenarios might humans face in complex societies? A collapse of the world economy or of our information and communication systems? Global pandemics? Unsustainable growth or environmental change? A global food or energy crisis? A cultural clash or global-scale conflict? Or will we face a combination of these contagious phenomena – a scenario that the World Economic Forum calls the “perfect storm”?

“While analyzing such global risks,” says Helbing, “one must bear in mind that the propagation speed of destructive cascade effects might be slow, but nevertheless hard to stop. It is time to recognize that crowd disasters, conflicts, revolutions, wars, and financial crises are the undesired result of operating socio-economic systems in the wrong parameter range, where systems are unstable.” In the past, these social problems seemed to be puzzling, unrelated, and almost “God-given” phenomena one had to live with. Nowadays, thanks to new complexity science models and large-scale data sets (“Big Data”), one can analyze and understand the underlying mechanisms, which let complex systems get out of control.

Disasters should not be considered “bad luck”. They are a result of inappropriate interactions and institutional settings, caused by humans. Even worse, they are often the consequence of a flawed understanding of counter-intuitive system behaviors. “For example, it is surprising that we didn’t have sufficient precautions against a financial crisis and well-elaborated contingency plans,” states Helbing. “Perhaps, this is because there should not be any bubbles and crashes according to the predominant theoretical paradigm of efficient markets.” Conventional thinking can cause fateful decisions and the repetition of previous mistakes. “In other words: While we want to do the right thing, we often do wrong things,” concludes Helbing. This obviously calls for a paradigm shift in our thinking. “For example, we may sanction deviations from social norms to promote social order, but may trigger conflict instead. Or we may increase security measures, but get more terrorism. Or we may try to promote innovation, but suffer economic decline, because innovation requires diversity more than homogenization.”

Global Networks Must Be Re-Designed

Helbing’s publication explores why today’s risk analysis falls short. “Predictability and controllability are design issues,” stresses Helbing. “And uncertainty, i.e. the impossibility of determining the likelihood and expected size of damage, is often man-made.” Many systems could be better managed with real-time data, which would allow one to avoid delayed responses and to enhance the transparency, understanding, and adaptive control of systems. However, even all the data in the world cannot compensate for ill-designed systems such as the current financial system. Such systems will sooner or later get out of control, causing catastrophic man-made failure. Therefore, a re-design of such systems is urgently needed.

Helbing’s Nature paper on “Globally Networked Risks” also calls attention to strategies that make systems more resilient, i.e. able to recover from shocks. For example, setting up backup systems (e.g. a parallel financial system), limiting the system size and connectivity, building in breaking points to stop cascade effects, or reducing complexity may be used to improve resilience. In the case of financial systems, there is still much work to be done to fully incorporate these principles.
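The effect of a “breaking point” can be illustrated with a toy cascade model. This is a minimal sketch under our own assumptions (a ring network, simple load shedding, a breaker that stops redistribution after a fixed number of failures), not the paper's model:

```python
def simulate_cascade(n=100, threshold=1.0, shock=5.0, breaker_after=None):
    """Toy load-redistribution cascade on a ring network.

    Every node starts with load 0.5; a failed node sheds half of its
    load onto each of its two ring neighbours, and a node fails once
    its load exceeds `threshold`. If `breaker_after` is set, load
    redistribution stops ("breaking point") once that many nodes have
    failed, which halts the cascade.
    """
    load = [0.5] * n
    failed = [False] * n
    load[0] = shock                       # initial shock hits node 0
    queue = [0]
    n_failed = 0
    while queue:
        i = queue.pop(0)
        if failed[i] or load[i] <= threshold:
            continue
        failed[i] = True
        n_failed += 1
        if breaker_after is not None and n_failed >= breaker_after:
            continue                      # breaker trips: stop shedding load
        for j in ((i - 1) % n, (i + 1) % n):
            if not failed[j]:
                load[j] += load[i] / 2
                queue.append(j)
    return n_failed

print(simulate_cascade())                 # without a breaking point
print(simulate_cascade(breaker_after=10)) # with a breaking point
```

Without the breaker, the overload wave propagates around the entire ring; with it, only a small neighbourhood of the initial shock fails.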

Contemporary information and communication technologies (ICT) are also far from being failure-proof. They are based on principles that are 30 or more years old and not designed for today’s use. The explosion of cyber risks is a logical consequence. This includes threats to individuals (such as privacy intrusion, identity theft, or manipulation through personalized information), to companies (such as cybercrime), and to societies (such as cyberwar or totalitarian control). To counter this, Helbing recommends an entirely new ICT architecture inspired by principles of decentralized self-organization as observed in immune systems, ecology, and social systems.

Coming Era of Social Innovation

Socio-inspired technologies built on decentralized mechanisms that create reputation, trust, norms or culture will be able to generate enormous value. “Facebook, based on the simple principle of social networking, is worth more than 50 billion dollars,” Helbing reminds us. “ICT systems are now becoming artificial social systems. Computers already perform the great majority of financial transactions, which humans carried out in the past.” But if we do not understand socially interactive systems well, coordination failures, breakdowns of cooperation, conflict, cyber-crime or cyber-war may result.

Therefore, a better understanding of the success principles of societies is urgently needed. “For example, when systems become too complex, they cannot be effectively managed top-down,” explains Helbing. “Guided self-organization is a promising alternative to manage complex dynamical systems bottom-up, in a decentralized way.” The underlying idea is to exploit, rather than fight, the inherent tendency of complex systems to self-organize and thereby create a robust, ordered state. For this, it is important to have the right kinds of interactions, adaptive feedback mechanisms, and institutional settings, i.e. to establish proper “rules of the game”. The paper offers the example of an intriguing “self-control” principle, where traffic lights are controlled bottom-up by the vehicle flows rather than top-down by a traffic center.
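The flavour of such bottom-up control can be sketched with a toy single-intersection model. The queue dynamics, arrival rates, and pressure-based switching rule below are illustrative assumptions of ours, not the actual self-control algorithm of the paper:

```python
import random

def run_intersection(steps=1000, arrival_ns=0.3, arrival_ew=0.2,
                     service=1, switch_threshold=3, seed=42):
    """Toy bottom-up traffic-light control at one intersection.

    Instead of a fixed top-down schedule, the light switches whenever
    the queue in the red direction exceeds the green-direction queue
    by more than `switch_threshold` vehicles, i.e. the traffic flows
    themselves drive the control. Returns the average total number of
    waiting vehicles per time step.
    """
    rng = random.Random(seed)
    q = {"ns": 0, "ew": 0}
    green = "ns"
    total_wait = 0
    for _ in range(steps):
        # random vehicle arrivals in each direction
        if rng.random() < arrival_ns:
            q["ns"] += 1
        if rng.random() < arrival_ew:
            q["ew"] += 1
        # the green direction discharges vehicles
        q[green] = max(0, q[green] - service)
        # self-control rule: switch when the pressure difference is large
        red = "ew" if green == "ns" else "ns"
        if q[red] - q[green] > switch_threshold:
            green = red
        total_wait += q["ns"] + q["ew"]
    return total_wait / steps

print(run_intersection())
```

The switching decision uses only locally measurable quantities (the two queue lengths), which is what makes this kind of control decentralized.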

Creating and Protecting Social Capital

It is important to recognize that many 21st century challenges such as the response to global warming, energy and food problems have a social component and cannot be solved by technology alone. The key to generating solutions is a Global Systems Science (GSS) that brings together crucial knowledge from the natural, engineering and social sciences. The goal of this new science is to gain an understanding of global systems and to make “systems science” relevant to global problems. In particular, this will require the combination of the Earth Systems Sciences with the study of behavioral aspects and social factors.

“One man’s disaster is another man’s opportunity. Therefore, many problems can only be successfully addressed with transparency, accountability, awareness, and collective responsibility,” underlines Helbing. “For example, social capital is important for economic value generation, social well-being and societal resilience, but it may be damaged or exploited, like our environment,” explains Helbing. “Humans must learn how to quantify and protect social capital. A warning example is the loss of trillions of dollars in the stock markets during the financial crisis.” This crisis was largely caused by a loss of trust.

“It is important to stress that risk insurances today do not consider damage to social capital,” Helbing continues. However, it is known that large-scale disasters have a disproportionate public impact, in part because they destroy social capital. As we neglect social capital in risk assessments, we are taking excessive risks.

New Instruments for the 21st Century

Finally, to gain the urgently needed insights, the study suggests building new instruments, as proposed by the FuturICT initiative (http://www.futurict.eu). These comprise a “Planetary Nervous System” (PNS) to measure the state of our planet in real time, also capturing socio-economic trends, social capital, and the “social footprint” of human decisions and actions. These data could be fed into a “Living Earth Simulator” (LES) to study “what-if” scenarios. A “policy wind tunnel” or “socio-economic flight simulator” of this kind could provide better, evidence-based advice for decision makers, be it politicians, business leaders, or citizens. It could help us to identify opportunities and alert us to risks or unwanted side effects. Last but not least, the “Global Participatory Platform” (GPP) would open up the above-mentioned tools for everyone and support collaboration, interactive exploration, and crowdsourcing.

This bold vision can be realized, provided that we learn how to design and operate open, value-oriented ICT systems and how to promote a non-malicious and self-determined use of data. It would take a major investment: an Apollo-like project focusing on techno-socio-economic-environmental systems, life on Earth and everything it relates to. Helbing is convinced: “It would be the best investment humanity can make.”



See also


Supplementary Videos:
  • http://vimeo.com/53876434 : Spreading and erosion of cooperation in a social dilemma situation
  • http://vimeo.com/53872893 : Cascade spreading is increasingly hard to recover from as failure progresses. The simulation model mimics spatial epidemic spreading with air traffic and healing costs.





Spreading and erosion of cooperation in a prisoner’s dilemma game. The computer simulations assume the payoff parameters T = 7, R = 6, P = 2, and S = 1 and include success-driven migration. Although cooperation would be profitable to everyone, non-cooperators can achieve a higher payoff than cooperators, which may destabilize cooperation. The graph shows the fraction of cooperative agents, averaged over 100 simulations, as a function of the connection density (the actual number of network links divided by the maximum number of links when all nodes are connected to all others). Initially, an increasing link density enhances cooperation, but once it passes a certain threshold, cooperation erodes. (See http://vimeo.com/53876434 for a related movie.) The computer simulations are based on a circular network with 100 nodes, each connected with its four nearest neighbours; n links are added randomly. 50 nodes are occupied by agents. Blue circles represent cooperation, red circles non-cooperative behaviour, and black dots empty sites. Initially, all agents are non-cooperative. Their network locations and behaviours (cooperation or defection) are updated in a random sequential way in four steps: (1) The agent plays two-person prisoner’s dilemma games with its direct neighbours in the network. (2) After the interaction, the agent moves with probability 0.5 up to 4 steps along existing links to the empty node that yields the highest payoff in a fictitious play step, assuming that nobody changes their behaviour. (3) The agent imitates the behaviour of the neighbour who obtained the highest payoff in step 1, if it exceeds the agent’s own. (4) The behaviour is spontaneously changed with a mutation rate of 0.1.
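The four update steps in the caption can be sketched in strongly simplified form. This is a rough illustration under our own assumptions (a plain ring without the added random links, migration restricted to adjacent empty sites, and our own function names), not the authors' simulation code:

```python
import random

# Payoffs from the caption: T = 7 > R = 6 > P = 2 > S = 1 (prisoner's dilemma)
T, R, P, S = 7, 6, 2, 1

def payoff(me, other):
    """Payoff of strategy `me` (True = cooperate) against `other`."""
    if me and other:
        return R
    if me:
        return S
    if other:
        return T
    return P

def score(ring, i, strat):
    """Total payoff of strategy `strat` at site i against occupied neighbours."""
    n = len(ring)
    return sum(payoff(strat, ring[j])
               for j in ((i - 1) % n, (i + 1) % n) if ring[j] is not None)

def step(ring, rng, mutation=0.1):
    """One random-sequential sweep: play, migrate, imitate, mutate."""
    n = len(ring)
    order = [i for i in range(n) if ring[i] is not None]
    rng.shuffle(order)
    for i in order:
        if ring[i] is None:
            continue  # site emptied by an earlier migration this sweep
        # (2) success-driven migration to a better adjacent empty site
        if rng.random() < 0.5:
            empties = [j for j in ((i - 1) % n, (i + 1) % n) if ring[j] is None]
            if empties:
                best = max(empties, key=lambda j: score(ring, j, ring[i]))
                if score(ring, best, ring[i]) > score(ring, i, ring[i]):
                    ring[best], ring[i] = ring[i], None
                    i = best
        # (3) imitate the most successful neighbour, if better than oneself
        neigh = [j for j in ((i - 1) % n, (i + 1) % n) if ring[j] is not None]
        if neigh:
            best = max(neigh, key=lambda j: score(ring, j, ring[j]))
            if score(ring, best, ring[best]) > score(ring, i, ring[i]):
                ring[i] = ring[best]
        # (4) spontaneous strategy mutation
        if rng.random() < mutation:
            ring[i] = not ring[i]

def run(sweeps=200, n=100, n_agents=50, seed=0):
    """Start from all-defector agents (as in the caption) and return
    the number of cooperators after the given number of sweeps."""
    rng = random.Random(seed)
    ring = [False] * n_agents + [None] * (n - n_agents)
    rng.shuffle(ring)
    for _ in range(sweeps):
        step(ring, rng)
    return sum(1 for strat in ring if strat is True)

print(run())
```

Because migration here is only to adjacent sites and no shortcut links are added, the quantitative outcome will differ from the figure; the sketch only shows the structure of the update rules.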


Cascade spreading is increasingly hard to recover from as failure progresses. The simulation model mimics spatial epidemic spreading with air traffic and healing costs in a two-dimensional 50 × 50 grid with periodic boundary conditions and random shortcut links. The colourful inset depicts an early snapshot of the simulation with N = 2500 nodes. Red nodes are infected, green nodes are healthy. Shortcut links are shown in blue. The connectivity-dependent graph shows the mean value and standard deviation of the fraction i(t)/N of infected nodes over 50 simulation runs. Most nodes have four direct neighbours, while a few of them possess an additional directed random connection to a distant node. The spontaneous infection rate is s = 0.001 per time step; the infection rate by an infected neighbouring node is P = 0.08. Newly infected nodes may infect others or may recover from the next time step onwards. Recovery occurs with a rate q = 0.4, if there is enough budget b > c to bear the healing costs c = 80. The budget needed for recovery is created by the number of healthy nodes h(t). Hence, if r(t) nodes are recovering at time t, the budget changes according to b(t + 1) = b(t) + h(t) - c r(t). As soon as the budget is used up, the infection spreads explosively. (See also the movie at http://vimeo.com/53872893.)
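The budget mechanism in this caption can be sketched as a toy model. For brevity we use a one-dimensional ring instead of the 50 × 50 grid with shortcut links, so the numbers will differ from the figure; the parameter names follow the caption:

```python
import random

def epidemic_with_budget(n=400, steps=200, s=0.001, p=0.08, q=0.4,
                         c=80.0, seed=1):
    """Toy version of the caption's epidemic-with-healing-costs model.

    Nodes on a ring become infected spontaneously (rate s) or via an
    infected neighbour (rate p), and recover with rate q only while
    the budget can pay the healing cost c. Healthy nodes replenish
    the budget, i.e. b(t+1) = b(t) + h(t) - c*r(t) (here the cost is
    deducted per recovery as it happens). Returns the time series of
    the infected fraction.
    """
    rng = random.Random(seed)
    infected = [False] * n
    budget = 0.0
    history = []
    for _ in range(steps):
        healthy = n - sum(infected)
        nxt = list(infected)
        for i in range(n):
            if infected[i]:
                # recovery is possible only while the budget covers the cost
                if budget > c and rng.random() < q:
                    nxt[i] = False
                    budget -= c
            else:
                contagious = infected[(i - 1) % n] or infected[(i + 1) % n]
                if rng.random() < s or (contagious and rng.random() < p):
                    nxt[i] = True
        budget += healthy   # healthy nodes h(t) generate the healing budget
        infected = nxt
        history.append(sum(infected) / n)
    return history

series = epidemic_with_budget()
print(f"final infected fraction: {series[-1]:.3f}")
```

The qualitative point survives the simplification: as the infected fraction grows, fewer healthy nodes feed the budget, so recovery capacity shrinks exactly when it is needed most.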