Monday, 17 June 2013

How to Ensure that the European Data Protection Legislation Will Protect the Citizens

by Dirk Helbing (ETH Zurich, dhelbing@ethz.ch)
(an almost identical version was forwarded to some Members of the European Parliament on April 7, 2013)


Some serious, fundamental problems to be solved 

The first problem is that, when two or more anonymous data sets are combined, this may allow deanonymization, i.e. the identification of the individuals whose data have been recorded. Mobility data, in particular, can easily be deanonymized.

A second fundamental problem is that the large majority of people in developed countries, including the countries of the European Union, must be assumed to have already been profiled in detail, given that individual devices can be identified with high accuracy through their individual configurations (including the software used and its settings). There are currently about 700 million commercial data sets about users, specifying an estimated 1,500 variables per user.

A third problem is that both the CIA and the FBI have revealed that, besides publicly or semi-publicly available data on the Web or in social media, they are or will be storing and processing private data, including Gmail and Dropbox data. The same applies to many secret services around the world. It has also become public that the NSA seems to collect all data it can get hold of.

A fourth fundamental problem is that Europe currently does not have the technical means, algorithms, software, data and laws to counter foreign dominance regarding Big Data and its potential misuse.

General principles and suggested approach to address the above problems


The age of information will only be sustainable if people can trust that their data are being used in their interest. The spirit and goal of data regulations should be to ensure this.

Personal data are data characterizing individuals or data derived from them. People should be the primary owners of their personal data. Individuals, companies or government agencies that gather, produce, process, store, or buy data should be considered secondary owners. Whenever personal data concern European citizens, or are stored, processed, or used in a European country or by a company operating in a European country, European law should apply.

Individuals should be allowed to use their own personal data in any way compatible with fundamental rights, including sharing them with others, either for free or for a small monthly fee covering the use of ALL their personal data (similar to the radio and TV fee). [Note: This is important to unleash the power of personal data for the benefit of society and to close the data gap that Europe has.]

Individuals should have a right to access a full copy of all their personal data through a central service and be suitably protected from misuse of these data.

They should have the right to limit the use of their personal data at any time and to request their correction or deletion in a simple and timely way, free of charge.

Fines should apply to any person, company or institution gaining or creating financial or other advantages through the misuse of personal data.

Misuse includes, in particular, sensitive uses that carry a certain probability of violating human rights or justified personal interests. Therefore, the error rate of the processing (and, in particular, the classification) of personal data must be recorded, specifying what share (per mille) of users feel disadvantaged.

A central institution (which might be an open Web platform) is needed to collect user complaints. Sufficient transparency and decentralized institutions are required to take effective, timely and affordable action to protect the interests of users.

The exercise of user rights must be easy, quick, and cheap (essentially free). For example, users must not be flooded with requests regarding their personal data. They must be able to effectively ensure a self-determined use of their personal data with little individual effort.

To limit misuse, transparency is crucial. For example, it should be required that large-scale processing of personal data (i.e. at least the queries that were executed) must be made public in a machine-readable form, such that public institutions and NGOs can determine how dangerous such queries might be for individuals.
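To illustrate the kind of machine-readable transparency intended here, the following is a minimal, purely hypothetical sketch of what a published record of a large-scale query over personal data could look like; the field names and values are illustrative assumptions, not a proposed standard.

```python
import json
from datetime import datetime, timezone

# Hypothetical sketch of a machine-readable query-log entry that a data
# processor could be required to publish; all field names and values are
# illustrative assumptions, not part of any existing standard.
query_log_entry = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "data_controller": "example-analytics-ltd",   # fictitious company
    "query": "SELECT age_band, region, COUNT(*) FROM users GROUP BY age_band, region",
    "records_touched": 1250000,
    "purpose": "aggregate advertising statistics",
    "smallest_group_size": 2400,  # relates to the dilution criterion proposed below
}

print(json.dumps(query_log_entry, indent=2))
```

Public institutions and NGOs could then scan such logs automatically for queries that touch small groups or sensitive attributes.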

Proposed definitions

As indicated above, there are practically no data that cannot be deanonymized if combined with other data. However, the following may be considered a practical definition of anonymity:

Anonymous data are data in which a person of interest can only be identified with a probability smaller than 1/2000, i.e. there is no way to find out which one among two thousand individuals has the property of interest.
The principle is thus to dilute persons with a certain property of interest among 2,000 persons with significantly different properties, in order to make it unlikely that persons with the property of interest can be identified. This principle is guided by the way election data and other sensitive data are used by public authorities. It also ensures that private companies do not have a data-processing advantage over public institutions (including research institutions).
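A minimal sketch of how such a dilution criterion could be checked in practice is given below; it is essentially a k-anonymity test with k = 2000, and the attribute names and records are fabricated for illustration.

```python
from collections import Counter

# Minimal sketch of the dilution criterion above (essentially a k-anonymity
# check with k = 2000): a data set is treated as anonymous only if every
# combination of released attributes is shared by at least 2,000 individuals,
# so that a person of interest can be pinned down with probability <= 1/2000.
MIN_GROUP_SIZE = 2000

def satisfies_dilution(records, released_attributes):
    """Return True if every attribute combination occurs at least MIN_GROUP_SIZE times."""
    groups = Counter(tuple(record[a] for a in released_attributes) for record in records)
    return all(count >= MIN_GROUP_SIZE for count in groups.values())

# Toy usage with fabricated records (attribute names are illustrative):
records = ([{"age_band": "30-39", "region": "ZH"}] * 2500
           + [{"age_band": "40-49", "region": "BE"}] * 150)
print(satisfies_dilution(records, ["age_band", "region"]))  # False: the second group is too small
```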

I would propose to characterize pseudonymous data as data not suited to reveal or track the user, or properties correlated with the user that he or she has not explicitly chosen to reveal in the specific context. I would furthermore suggest characterizing pseudonymous transactions as the processing and storing of the minimum amount of data required to perform a service requested by a user (which in particular implies not processing or storing technical details that would allow one to identify the user's device and software). Essentially, pseudonymous transactions should not be suited to identify the user or variables that might identify him or her. Typically, a pseudonym is a random or user-specified variable that allows one to sell a product or perform a service for a user anonymously, typically in exchange for an anonymous money transfer.

To allow users to check pseudonymity, the data processed and stored should be fully shared with the user via an encrypted webpage (or similar) that is accessible for a limited, but sufficiently long time period through a unique and confidential decryption key made accessible only to the respective user. It should be possible for the user to easily decrypt, view, copy, download and transfer the data processed and stored by the pseudonymous transaction in a way that is not being tracked.
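A minimal sketch of such a pseudonymous data hand-over is shown below, using symmetric encryption from the Python cryptography library; the record content is fabricated, and a real deployment would of course need additional safeguards (key delivery, expiry, untracked access).

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Minimal sketch of the idea above, not a complete system: the service encrypts
# the transaction record with a fresh key, publishes only the ciphertext (e.g.
# behind an unguessable URL with a limited lifetime), and hands the key
# exclusively to the user, who can decrypt, view, copy and download the data.
transaction_record = b'{"pseudonym": "a91f...", "item": "e-book", "price_eur": 4.90}'

user_key = Fernet.generate_key()                           # confidential key, given only to the user
ciphertext = Fernet(user_key).encrypt(transaction_record)  # what the service stores/publishes

# Later, on the user's side:
plaintext = Fernet(user_key).decrypt(ciphertext)
print(plaintext.decode())
```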

Further information:

  • Difficulty to anonymize data
  • Danger of surveillance society
  • New deal on data, how to consider consumer interests
  • HP software allowing personalized advertisement without revealing personal data to companies (contact: Prof. Dr. Bernardo Huberman, huberman@hpl.hp.com)
  • FuturICT initiative: www.futurict.eu
Information on the proposer

Dirk Helbing is Professor of Sociology, in particular of Modeling and Simulation, and a member of the Computer Science Department at ETH Zurich. He is also an elected member of the German Academy of Sciences. He earned a PhD in physics and was Managing Director of the Institute of Transport & Economics at Dresden University of Technology in Germany. He is internationally well-known for his work on pedestrian crowds, vehicle traffic, and agent-based models of social systems. Furthermore, he is coordinating the FuturICT Initiative (www.futurict.eu), which focuses on the understanding of techno-socio-economic systems, using Big Data. His work is documented by hundreds of well-cited scientific articles, dozens of keynote talks and hundreds of media reports in all major languages. Helbing is also chairman of the Physics of Socio-Economic Systems Division of the German Physical Society, a co-founder of ETH Zurich's Risk Center, and an elected member of the World Economic Forum's Global Agenda Council on Complex Systems.

Saturday, 8 June 2013

Qualified Trust, not Surveillance, is the Basis for a Stable Society - Dirk Helbing

Peaceful citizens and hard-working taxpayers are under government surveillance. Confidential communication of journalists is intercepted. Civilians are killed by drones, without a chance to prove their innocence.[1] How could it come to this? Since September 11, freedom rights have been restricted step by step in most democracies. Each terrorist threat has delivered new reasons to extend the security infrastructure, which is eventually reaching Orwellian dimensions. Through its individual configuration, every computer has an almost unique fingerprint, allowing our use of the Web to be recorded. Privacy is gone. Over the past years, up to 1,500 variables about half a billion citizens in the industrial world have been recorded. Google and Facebook know us better than our friends and families.

Nevertheless, governments have so far failed to gain control of terrorism, drug trafficking, cybercrime and tax evasion. Would an omniscient state be able to change this and create a new social order?[2] It seems at least to be the dream of secret services and security agencies.
Ira "Gus" Hunt, the CIA Chief Technology Officer, recently said:[3]

"You're already a walking sensor platform… You are aware of the fact that somebody can know where you are at all times because you carry a mobile device, even if that mobile device is turned off. You know this, I hope? Yes? Well, you should… Since you can't connect dots you don't have, it drives us into a mode of, we fundamentally try to collect everything and hang on to it forever… It is really very nearly within our grasp to be able to compute on all human generated information." 

Unfortunately, connecting the dots often does not work. As complex systems experts point out, such "linear thinking" can be totally misleading. It's the reason why we often want to do the right things, but take the wrong decisions.

I agree that our world has destabilized. However, this is not a result of external threats, but of system-immanent feedback effects. The increasing interdependencies, connectivity and complexity of our world, and further trends, are causing this.[4] Trying to centrally control this complexity, however, is destined to fail. We must rather learn to embrace the potential of complexity. This requires a step towards decentralized, self-regulatory approaches. Many of us believe in Adam Smith's "invisible hand", according to which the best societal and economic outcome is reached if everybody just does what is best for himself or herself. However, this principle is known to produce "market failures", "financial meltdowns", and other "tragedies of the commons" (such as environmental degradation) under certain circumstances. The classical approach is to try to "fix" these problems by the top-down regulation of a powerful state.

However, self-regulation based on decentralized rules can be learned. This has been demonstrated for modern traffic control concepts, but it is equally relevant for smart grids and will be even more important for the financial system. The latter, for example, needs built-in breaking points similar to the fuses in our electrical network at home, and it requires additional control parameters to reach equilibrium.

There is a better alternative to uncoordinated bottom-up organization and too much top-down regulation: the "economy 2.0". Taking the step towards a self-regulating, participatory market society can unleash the unused potential of the complexity and diversity that we are currently trying to fight.[5] This step can boost our societies and economies as much as the transition from centrally regulated societies to the market societies inspired by Adam Smith. But after 300 years, it is now time for a new paradigm. Societies based on surveillance and punishment are not sustainable in the long term. When controlled, people get angry, and the economy never thrives. Qualified trust is a better basis for resilient societies. But how can it be built? Reputation systems are now spreading all over the Web. If properly designed, they could be the basis of a self-regulating societal and market architecture. Further success principles of decentralized self-regulating systems can be learned from ecological and immune systems. They can also form the basis of a trustable Web, which can successfully neutralize harmful actions and contain cybercrime.

Governments should invest their money in the creation of self-regulating architectures rather than in surveillance technology. This will be crucial for a successful transition to a new era: the era of information societies. If we take the right decisions, the 21st century can be an age of creativity, prosperity and participation. But if we take the wrong decisions, we will end up in economic and democratic depression. It's our choice.



[2] The subject is discussed in my essay "Google as God?", see http://arxiv.org/abs/1304.3271
[4] D. Helbing, Globally Networked Risks and How to Respond, Nature 497, 51-59 (2013), see http://www.nature.com/nature/journal/v497/n7447/full/nature12047.html
[5] D. Helbing, Economics 2.0: The Natural Step towards a Self-Regulating, Participatory Market Society (2013), see http://arxiv.org/pdf/1305.4078v2.pdf

Thursday, 2 May 2013


Global Networks Must be Re-Designed

The increasing interdependencies between the world’s technological, socio-economic, and environmental systems have the potential to create global catastrophic risks. We may have to re-design many global networks, concludes Professor Dirk Helbing at ETH Zurich’s Risk Center in this week’s issue of Nature. Otherwise they could turn into “global time bombs”.


Living in a Hyper-Connected World

Our global networks have generated many benefits and new opportunities. However, they have also established highways for failure propagation, which can ultimately result in man-made disasters. For example, today’s quick spreading of emerging epidemics is largely a result of global air traffic, with serious impacts on global health, social welfare, and economic systems.

Helbing’s publication illustrates how cascade effects and complex dynamics amplify the vulnerability of networked systems. For example, just a few long-distance connections can largely decrease our ability to mitigate the threats posed by global pandemics. Initially beneficial trends, such as globalization, increasing network densities, higher complexity, and an acceleration of institutional decision processes, may ultimately push man-made or human-influenced systems towards systemic instability, Helbing finds. Systemic instability refers to a system that will get out of control sooner or later, even if everybody involved is well skilled, highly motivated and behaving properly. Crowd disasters are shocking examples illustrating that many deaths may occur even when everybody tries hard not to hurt anyone.

Our Intuition of Systemic Risks Is Misleading

Networking system components that are well-behaved in separation may create counter-intuitive emergent system behaviors, which are not well-behaved at all. For example, cooperative behavior might unexpectedly break down as the connectivity of interaction partners grows. “Applying this to the global network of banks, this might actually have caused the financial meltdown in 2008,” believes Helbing.

Globally networked risks are difficult to identify, map and understand, since there are often no evident, unique cause-effect relationships. Failure rates may change depending on the random path taken by the system, with the consequence of increasing risks as cascade failures progress, thereby decreasing the capacity of the system to recover. “In certain cases, cascade effects might reach any size, and the damage might be practically unbounded,” says Helbing. “This is quite disturbing and hard to imagine.” All of these features make strongly coupled, complex systems difficult to predict and control, such that our attempts to manage them go astray.

“Take the financial system,” says Helbing. “The financial crisis hit regulators by surprise.” But back in 2003, the legendary investor Warren Buffett warned of mega-catastrophic risks created by large-scale investments in financial derivatives. It took five years until the “investment time bomb” exploded, causing losses of trillions of dollars to our economy. “The financial architecture is not properly designed,” concludes Helbing. “The system lacks breaking points, as we have them in our electrical system.” This allows local problems to spread globally, thereby reaching catastrophic dimensions.

A Global Ticking Time Bomb?

Have we unintentionally created a global time bomb? If so, what kinds of global catastrophic scenarios might humans face in complex societies? A collapse of the world economy or of our information and communication systems? Global pandemics? Unsustainable growth or environmental change? A global food or energy crisis? A cultural clash or global-scale conflict? Or will we face a combination of these contagious phenomena – a scenario that the World Economic Forum calls the “perfect storm”?

“While analyzing such global risks,” says Helbing, “one must bear in mind that the propagation speed of destructive cascade effects might be slow, but nevertheless hard to stop. It is time to recognize that crowd disasters, conflicts, revolutions, wars, and financial crises are the undesired result of operating socio-economic systems in the wrong parameter range, where systems are unstable.” In the past, these social problems seemed to be puzzling, unrelated, and almost “God-given” phenomena one had to live with. Nowadays, thanks to new complexity science models and large-scale data sets (“Big Data”), one can analyze and understand the underlying mechanisms, which let complex systems get out of control.

Disasters should not be considered “bad luck”. They are a result of inappropriate interactions and institutional settings, caused by humans. Even worse, they are often the consequence of a flawed understanding of counter-intuitive system behaviors. “For example, it is surprising that we didn’t have sufficient precautions against a financial crisis and well-elaborated contingency plans,” states Helbing. “Perhaps, this is because there should not be any bubbles and crashes according to the predominant theoretical paradigm of efficient markets.” Conventional thinking can cause fateful decisions and the repetition of previous mistakes. “In other words: While we want to do the right thing, we often do wrong things,” concludes Helbing. This obviously calls for a paradigm shift in our thinking. “For example, we may sanction deviations from social norms to promote social order, but may trigger conflict instead. Or we may increase security measures, but get more terrorism. Or we may try to promote innovation, but suffer economic decline, because innovation requires diversity more than homogenization.”

Global Networks Must Be Re-Designed

Helbing’s publication explores why today’s risk analysis falls short. “Predictability and controllability are design issues,” stresses Helbing. “And uncertainty, which means the impossibility to determine the likelihood and expected size of damage, is often man-made.” Many systems could be better managed with real-time data. These would allow one to avoid delayed response and to enhance the transparency, understanding, and adaptive control of systems. However, even all the data in the world cannot compensate for ill-designed systems such as the current financial system. Such systems will sooner or later get out of control, causing catastrophic man-made failure. Therefore, a re-design of such systems is urgently needed.

Helbing’s Nature paper on “Globally Networked Risks” also calls attention to strategies that make systems more resilient, i.e. able to recover from shocks. For example, setting up backup systems (e.g. a parallel financial system), limiting the system size and connectivity, building in breaking points to stop cascade effects, or reducing complexity may be used to improve resilience. In the case of financial systems, there is still much work to be done to fully incorporate these principles.

Contemporary information and communication technologies (ICT) are also far from being failure-proof. They are based on principles that are 30 or more years old and not designed for today’s use. The explosion of cyber risks is a logical consequence. This includes threats to individuals (such as privacy intrusion, identity theft, or manipulation through personalized information), to companies (such as cybercrime), and to societies (such as cyberwar or totalitarian control). To counter this, Helbing recommends an entirely new ICT architecture inspired by principles of decentralized self-organization as observed in immune systems, ecology, and social systems.

Coming Era of Social Innovation

Socio-inspired technologies built on decentralized mechanisms that create reputation, trust, norms or culture will be able to generate enormous value. “Facebook, based on the simple principle of social networking, is worth more than 50 billion dollars,” Helbing reminds us. “ICT systems are now becoming artificial social systems. Computers already perform the great majority of financial transactions, which humans carried out in the past.” But if we do not understand socially interactive systems well, coordination failures, breakdowns of cooperation, conflict, cyber-crime or cyber-war may result.

Therefore, a better understanding of the success principles of societies is urgently needed. “For example, when systems become too complex, they cannot be effectively managed top-down,” explains Helbing. “Guided self-organization is a promising alternative to manage complex dynamical systems bottom-up, in a decentralized way.” The underlying idea is to exploit, rather than fight, the inherent tendency of complex systems to self-organize and thereby create a robust, ordered state. For this, it is important to have the right kinds of interactions, adaptive feedback mechanisms, and institutional settings, i.e. to establish proper “rules of the game”. The paper offers the example of an intriguing “self-control” principle, where traffic lights are controlled bottom-up by the vehicle flows rather than top-down by a traffic center.
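As a heavily simplified illustration of such bottom-up “self-control” (not the actual algorithm from the paper), the sketch below lets a single intersection decide locally which approach gets green, based only on its own queue lengths; all numbers are made up.

```python
import random

# Heavily simplified sketch of decentralized traffic-light "self-control"
# (not the algorithm from the paper): each intersection looks only at its own
# local queues and gives green to the approach with the most waiting vehicles.
def choose_green_phase(queues):
    """queues: dict mapping approach name -> number of waiting vehicles."""
    return max(queues, key=queues.get)

# Toy simulation of one intersection over a few time steps:
queues = {"north": 0, "south": 0, "east": 0, "west": 0}
for step in range(5):
    for approach in queues:                 # random vehicle arrivals
        queues[approach] += random.randint(0, 3)
    green = choose_green_phase(queues)      # local, bottom-up decision
    queues[green] -= min(queues[green], 4)  # up to 4 vehicles clear per green phase
    print(f"step {step}: green={green}, queues={queues}")
```

The point is that no central traffic center is involved: the signal order emerges from local responses to the actual vehicle flows.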

Creating and Protecting Social Capital

It is important to recognize that many 21st century challenges such as the response to global warming, energy and food problems have a social component and cannot be solved by technology alone. The key to generating solutions is a Global Systems Science (GSS) that brings together crucial knowledge from the natural, engineering and social sciences. The goal of this new science is to gain an understanding of global systems and to make “systems science” relevant to global problems. In particular, this will require the combination of the Earth Systems Sciences with the study of behavioral aspects and social factors.

“One man’s disaster is another man’s opportunity. Therefore, many problems can only be successfully addressed with transparency, accountability, awareness, and collective responsibility,” underlines Helbing. “For example, social capital is important for economic value generation, social well-being and societal resilience, but it may be damaged or exploited, like our environment,” explains Helbing. “Humans must learn how to quantify and protect social capital. A warning example is the loss of trillions of dollars in the stock markets during the financial crisis.” This crisis was largely caused by a loss of trust.

“It is important to stress that risk insurances today do not consider damage to social capital,” Helbing continues. However, it is known that large-scale disasters have a disproportionate public impact, in part because they destroy social capital. As we neglect social capital in risk assessments, we are taking excessive risks.

New Instruments for the 21st Century

Finally, to gain the urgently needed insights, the study suggests building new instruments, as proposed by the FuturICT initiative (http://www.futurict.eu): This comprises a “Planetary Nervous System” (PNS) to measure the state of our planet in real time, capturing also socio-economic trends, social capital, and the “social footprint” of human decisions and actions. These data may be fed into a “Living Earth Simulator” (LES) to study “what-if” scenarios. A “policy wind tunnel” or “socio-economic flight simulator” of this kind could provide better, evidence-based advice for decision-makers, be they politicians, business leaders, or citizens. It could help us to identify opportunities and alert us to risks or unwanted side effects. Last but not least, the “Global Participatory Platform” (GPP) would open up the above-mentioned tools to everyone and support collaboration, interactive exploration, and crowdsourcing.

This bold vision can be realized, provided that we learn how to design and operate open, value-oriented ICT systems and how to promote a non-malicious and self-determined use of data. It would take a major investment: an Apollo-like project focusing on techno-socio-economic-environmental systems, life on Earth and everything it relates to. Helbing is convinced: “It would be the best investment humanity can make.”



See also


Supplementary Videos:
  • http://vimeo.com/53876434 : Spreading and erosion of cooperation in a social dilemma  situation
  • http://vimeo.com/53872893 : Cascade spreading is increasingly hard to recover from as failure progresses. The simulation model mimics spatial epidemic spreading with air traffic and healing costs.





Spreading and erosion of cooperation in a prisoner’s dilemma game. The computer simulations assume the payoff parameters T = 7, R = 6, P = 2, and S = 1 and include success-driven migration. Although cooperation would be profitable to everyone, non-cooperators can achieve a higher payoff than cooperators, which may destabilize cooperation. The graph shows the fraction of cooperative agents, averaged over 100 simulations, as a function of the connection density (the actual number of network links divided by the maximum number of links when all nodes are connected to all others). Initially, an increasing link density enhances cooperation, but once it passes a certain threshold, cooperation erodes. (See http://vimeo.com/53876434 for a related movie.) The computer simulations are based on a circular network with 100 nodes, each connected with its four nearest neighbours. n links are added randomly. 50 nodes are occupied by agents. Blue circles represent cooperation, red circles non-cooperative behaviour, and black dots empty sites. Initially, all agents are non-cooperative. Their network locations and behaviours (cooperation or defection) are updated in a random sequential way in 4 steps: (1) The agent plays two-person prisoner’s dilemma games with its direct neighbours in the network. (2) After the interaction, the agent moves with probability 0.5 up to 4 steps along existing links to the empty node that gives the highest payoff in a fictitious play step, assuming that no one changes their behaviour. (3) The agent imitates the behaviour of the neighbour who obtained the highest payoff in step 1, if it is higher than the agent’s own. (4) The behaviour is spontaneously changed with a mutation rate of 0.1.
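For readers who want to experiment, the following is a strongly simplified sketch of the update cycle described in this caption (not the original simulation code): the prisoner’s dilemma games of step (1) are evaluated on the fly inside the payoff helper, migration is restricted to directly adjacent empty nodes, and the number of extra random links is an arbitrary choice.

```python
import random

# Strongly simplified sketch of the update cycle described in the caption
# above (not the original simulation code). Strategy 1 = cooperate, 0 = defect.
T, R, P, S = 7, 6, 2, 1                  # payoff parameters as in the caption
N_NODES, N_AGENTS, EXTRA_LINKS, MUTATION = 100, 50, 20, 0.1

def payoff(mine, other):
    return {(1, 1): R, (1, 0): S, (0, 1): T, (0, 0): P}[(mine, other)]

# Circular network: links to the four nearest neighbours plus random extra links
links = {i: set() for i in range(N_NODES)}
for i in range(N_NODES):
    for d in (1, 2):
        links[i] |= {(i + d) % N_NODES, (i - d) % N_NODES}
for _ in range(EXTRA_LINKS):
    a, b = random.sample(range(N_NODES), 2)
    links[a].add(b); links[b].add(a)

def total_payoff(node, strategy, world):
    # Step (1): play the prisoner's dilemma with all occupied neighbouring nodes
    return sum(payoff(strategy, world[n]) for n in links[node] if n in world)

# Place 50 agents on random nodes; initially all of them defect
occupied = {node: 0 for node in random.sample(range(N_NODES), N_AGENTS)}

for _ in range(2000):                     # random sequential updates
    node = random.choice(list(occupied))
    strategy = occupied[node]
    # Step (2): with probability 0.5, move to the best adjacent empty node (fictitious play)
    if random.random() < 0.5:
        empty = [n for n in links[node] if n not in occupied]
        if empty:
            best = max(empty, key=lambda n: total_payoff(n, strategy, occupied))
            if total_payoff(best, strategy, occupied) > total_payoff(node, strategy, occupied):
                del occupied[node]; occupied[best] = strategy; node = best
    # Step (3): imitate the most successful neighbour, if it is better off
    neighbours = [n for n in links[node] if n in occupied]
    if neighbours:
        best_nb = max(neighbours, key=lambda n: total_payoff(n, occupied[n], occupied))
        if total_payoff(best_nb, occupied[best_nb], occupied) > total_payoff(node, occupied[node], occupied):
            occupied[node] = occupied[best_nb]
    # Step (4): spontaneous strategy change (mutation)
    if random.random() < MUTATION:
        occupied[node] = 1 - occupied[node]

print("fraction of cooperators:", sum(occupied.values()) / N_AGENTS)
```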


Cascade spreading is increasingly hard to recover from as failure progresses. The simulation model mimics spatial epidemic spreading with air traffic and healing costs in a two-dimensional 50 × 50 grid with periodic boundary conditions and random shortcut links. The colourful inset depicts an early snapshot of the simulation with N = 2500 nodes. Red nodes are infected, green nodes are healthy. Shortcut links are shown in blue. The connectivity-dependent graph shows the mean value and standard deviation of the fraction i(t)/N of infected nodes over 50 simulation runs. Most nodes have four direct neighbours, while a few of them possess an additional directed random connection to a distant node. The spontaneous infection rate is s = 0.001 per time step; the infection rate by an infected neighbouring node is P = 0.08. Newly infected nodes may infect others or may recover from the next time step onwards. Recovery occurs with a rate q = 0.4, if there is enough budget b > c to bear the healing costs c = 80. The budget needed for recovery is created by the number of healthy nodes h(t). Hence, if r(t) nodes are recovering at time t, the budget changes according to b(t+1) = b(t) + h(t) − c·r(t). As soon as the budget is used up, the infection spreads explosively. (See also the movie at http://vimeo.com/53872893.)
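The sketch below reproduces the qualitative mechanism from this caption in strongly simplified form (no random shortcut links, simplified budget bookkeeping); it is not the original code, but it shows how the infection spreads explosively once the healing budget, replenished by the healthy nodes, is exhausted.

```python
import random

# Strongly simplified sketch of the budget-constrained recovery mechanism
# described in the caption (no shortcut links, simplified bookkeeping).
L, STEPS = 50, 200                 # 50 x 50 grid with periodic boundaries
s, p, q, c = 0.001, 0.08, 0.4, 80  # spontaneous rate, infection rate, recovery rate, healing cost

infected = [[False] * L for _ in range(L)]
budget = 0.0

def neighbours(x, y):
    return [((x + 1) % L, y), ((x - 1) % L, y), (x, (y + 1) % L), (x, (y - 1) % L)]

for t in range(STEPS):
    nxt = [row[:] for row in infected]
    for x in range(L):
        for y in range(L):
            if not infected[x][y]:
                sick_nb = sum(infected[nx][ny] for nx, ny in neighbours(x, y))
                if random.random() < s or any(random.random() < p for _ in range(sick_nb)):
                    nxt[x][y] = True
            elif budget >= c and random.random() < q:
                nxt[x][y] = False          # recovery, paid for from the budget
                budget -= c
    budget += sum(not v for row in infected for v in row)   # healthy nodes replenish the budget
    infected = nxt
    if t % 40 == 0:
        frac = sum(map(sum, infected)) / (L * L)
        print(f"t={t}: infected fraction = {frac:.3f}, budget = {budget:.0f}")
```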


Monday, 8 April 2013

How And Why Our Conventional Economic Thinking Causes Global Crises - Dirk Helbing



I believe it is no wonder that our world is in trouble. We currently lack the global systems science needed to understand our world, which is now changing more quickly than we can collect the experience required to cope with upcoming problems. Nor can we trust our intuition, since the complex systems we have created often behave in surprising, counter-intuitive ways. Frequently, their properties are determined not by their components, but by their interactions. Therefore, a strongly coupled world behaves fundamentally differently from a weakly coupled world with independent decision-makers. Strong interactions tend to make the system uncontrollable – they create cascading effects and extreme events.

As a consequence of the transition to a more and more strongly coupled world, we need to revisit the underlying assumptions of the currently prevailing economic thinking. In the following, I will discuss 10 widespread assertions, which would hold in a perfect economic world with representative agents and uncorrelated decisions, where heterogeneity, decision errors, and time scales do not matter. However, they are apparently not well suited to depict the strongly interdependent, diverse, and quickly changing world we are facing, and this has important implications. Therefore, we need to 'think out of the box' and require a paradigm shift towards a new economic thinking characterized by a systemic, interaction-oriented perspective inspired by knowledge about complex, ecological, and social systems. As Albert Einstein noted, long-standing problems are rarely solved within the dominating paradigm. However, a new perspective on old problems may enable new mitigation strategies.

1. More networking is good and reduces risks
Many human-made systems and services are based on networking. While some degree of networking is apparently good, too much connectivity may also create systemic risks and pathways for cascading effects. These may cause extreme events and global crises like the current financial crisis.

Moreover, in social dilemma situations (where unfair behavior or cheating creates individual benefits), too much networking creates a breakdown of cooperation and trust, while local or regional interactions may promote cooperation. The transformation of the financial system into a global village, where any agent can interact with any other agent, may actually have been the root cause of our current financial crisis.

Countermeasures: Limit the degree of networking to a healthy amount and/or introduce adaptive decoupling strategies to stop cascading effects and enable graceful degradation (including slow-down mechanisms in crisis situations). Support the evolution and co-existence of several weakly coupled financial systems (to reduce systemic vulnerability, stimulate competition between systems, and create backup solutions). Reduce the complexity of financial products and improve the transparency of financial interdependencies and over-the-counter transactions by creating suitable information platforms.

2. The economy drives towards an equilibrium state
Current economic thinking is based on the assumption that the economic system is in equilibrium or at least tends to develop towards a state of equilibrium. However, today's world changes faster than many companies and policies can adapt. Therefore, the world economic system is unlikely to be in equilibrium at any point in time. It is rather expected to show a complex non-equilibrium dynamics.

Therefore, a new economic thinking inspired by complex dynamical systems, ecosystems, and social systems would be beneficial. Such a perspective would also have implications for the robustness of economic systems. Overall, beneficial properties seem to be: redundancy, variety, sparseness, decoupling (separated communities, niches), and mutually adjusted time scales (which are required for hierarchical structures to function well).

Countermeasures: Invest into new economic systems thinking. Combine the axiomatic, mathematical approach of economics with a 'natural science approach' based on data and experiments. Develop non-equilibrium network models capturing the self-organized dynamics of real economic systems. Pursue an interdisciplinary approach, taking on board complex, ecological and social systems thinking. Develop better concepts for systemic risk assessment, systems design, and integrated risk management.

3. Individuals and companies decide rationally
The 'homo economicus' is a widely used paradigm in economics. It is the basis of a large and beautiful body of mathematical proofs on idealized economic systems. However, the paradigm of a strictly optimizing, perfect egoist is a model, which is questioned by theoretical and empirical results.

Theoretically, the paradigm assumes unrealistic information storage and processing capacities (everyone would need to have a full 1:1 representation of the entire world in his or her own brain and to instantly process huge amounts of data, including anticipating the future decisions of others). Empirically, one finds that people behave in a more cooperative and fair way than the paradigm of the 'homo economicus' predicts. In particular, the paradigm neglects the role of errors, emotions, other-regarding preferences, etc. This implies significant deviations of real human behaviors from theoretically predicted ones.

Countermeasure: Use a combination of interactive behavioral experiments, agent-based modeling, data mining and social supercomputing to study (aspects of) real(istic) economic systems.

4. Selfish behavior optimizes the systemic performance and benefits everyone
Another pillar of conventional economic thinking is Adam Smith's principle of the 'invisible hand', according to which selfish profit maximization would automatically lead to the best systemic outcome based on self-organization. It is the basis of the ideology of 'free markets', according to which regulation would tend to reduce the performance of economic systems.

However, models in evolutionary game theory show that self-organized coordination in markets can easily fail, even when market participants have equal power, symmetrical information etc. Moreover, even if the individually optimal behavior also maximizes system performance and if everybody behaves very close to optimal, this may still create a systemic failure (e.g. when the system optimum is unstable). Therefore, it is highly questionable whether the systemic inefficiencies resulting from individual optimization efforts can always be compensated for by greedy motivations (such as trying to get more than before or more than others).

Countermeasures: Measure the system state in real-time and respond to this information adaptively in a way that promotes coordination and cooperation with the interaction partners. Create an information and communication system supporting collective (self-)awareness of the impact of human actions on our world. Increase opportunities for social, economic and political participation.

5. Financial markets are efficient
One implication of Adam Smith's principle of the 'invisible hand' is the efficiency of financial markets, according to which any opportunity to make money with a probability higher than chance would immediately be used, thereby eliminating such opportunities and any related market inefficiencies.

Efficient markets should not create bubbles and crashes, and therefore one would not need contingency plans for financial crises (they simply could not occur). Financial markets would rather be in equilibrium, as the conventional Dynamic Stochastic General Equilibrium models suggest. However, many people believe that bubbles and crashes do occur. The flash crash of May 6, 2010, is a good example of a market irregularity that has repeatedly occurred since then. Also, many financial traders do not seem to believe in efficient markets, but rather in the existence of opportunities that can be used to make over-proportional profits.

Countermeasures: Develop contingency plans for financial crises. Adjust the financial architecture and identify suitable strategies (such as breaking points) to stop cascading effects in the financial system. Introduce noise into financial markets by random trading transactions to destroy bubbles before they reach a critical size that may have a disastrous systemic impact.

6. More information and financial innovations are good
One common view is that market inefficiencies result from an unequal distribution of power, which partially results from information asymmetries (“knowledge is power”). Therefore, providing more information to everyone should remove the related inefficiencies.

However, too much information creates a cognitive information overload. As a result, people tend to orient themselves towards other people's behaviors and information sources they trust. As a consequence, people no longer make independent decisions, which can undermine the 'wisdom of crowds' and market efficiency. One example is the large and unhealthy impact that the assessments of a few rating agencies have on the global markets.

It is also believed that financial innovations will make markets more efficient by making markets more complete. However, it has been shown that complete markets are unstable. In fact, leverage effects, 'naked' short-selling (of assets one does not own), credit default swaps, high-frequency trading and other financial instruments may have a destabilizing effect on financial markets.

Countermeasures: Identify and pursue decentralized, pluralistic, participatory information platforms, which support the 'wisdom of crowds' effect. Test financial instruments (such as derivatives) for systemic impacts (e.g. by suitable experiments and computer simulations) and certify them before they are released, as this is common in other economic sectors (special safety regulations apply, for example, in the electrical, automobile, pharmacy and food sectors).

7. More liquidity is better
Another widespread measure to cure economic crises is cheap loans provided by central banks. While this is intended to keep the economy running and to promote investments in the real economy, most of this money seems to go into financial speculation, since business and investment banks are not sufficiently separated.

This can cause bubbles in the financial and real estate markets, where much of these cheap loans are invested. However, the high returns in the resulting 'bull markets' are not sustainable, since they depend on the continued availability of cheap loans. Sooner or later, the created bubbles will implode and the financial market will crash (the likelihood of which goes up when the interest rates are increased). This again forces central banks to reduce interest rates to a minimum in order to keep the economy going and promote investments and growth. In other words, too much liquidity is as much of a problem, as is too little.

Countermeasure: Separate investment from business banking and introduce suitable adaptive transaction fees.

8. All agents can be treated as if acting in the same way
The 'representative agent approach' is another important concept of conventional economic thinking. Assuming that everyone behaves optimally, as the paradigm of the 'homo economicus' predicts, everybody should behave the same in equivalent situations. This allows one to replace the interactions of an economic agent with other agents by interactions with average agents, particularly if one assumes that everyone has access to the same information and participates in perfect markets.

However, the representative agent model cannot describe cascade effects well. These are not determined by the average stability, but by the weakest link. The 'representative agent approach' also neglects effects of spatial interactions and heterogeneities in the preferences of market participants. When these are considered, the conclusions can be completely different, sometimes even opposite (e.g. there may be an 'outbreak' rather than a breakdown of cooperative behavior).

Finally, the representative agent approach does not allow one to understand particular effects of the interaction network structure, which may promote or obstruct cooperativeness, trust, public safety, etc. Neglecting such network effects can lead to a serious underestimation of the importance of 'social capital' for the creation of economic value and social well-being.

Countermeasures: Protect economic and social diversity. Allow for the existence of niche markets and for the consideration of justified local advantages. Avoid competition on one single dimension (e.g. economic value generation) and promote multi-criterion incentive systems. Develop better compasses for decision-making than GDP per capita, taking into account environmental, health, and social factors. Make social capital (such as cooperativeness, trust, public safety, …) measurable.

9. Regulation can fix the imperfections of economic systems
When the self-organization of markets does not work perfectly, one often tries to 'fix the problem' by regulation. However, complex systems cannot be steered 'like a bus', and many control attempts fail. In many cases, the information required to regulate a complex system is not available, and even if one would have a surveillance system that monitors all variables of the system, one would frequently not know what the relevant control parameters are. Besides, suitable regulatory instruments are often lacking.

A more promising way to manage complexity is to facilitate or guide favorable self-organization. This is often possible by modifying the interactions between the system components. It basically requires one to establish targeted real-time information feedbacks, suitable 'rules of the game', and sanctioning mechanisms. To stay consistent with the approach of self-organization, sanctioning should, as far as possible, be done in a decentralized, self-regulatory way (as is characteristic of social norms or immune systems).

Countermeasures: Pursue a cybernetic and synergetic approach, promoting favorable self-organization by small changes in the interactions between the system elements, i.e. by fixing suitable 'rules of the game' to avoid instabilities and suboptimal systemic states. (Symmetry, fairness, and balance may be such principles.) Introduce a global but decentralized and manipulation-resistant multi-criterion rating system, community-specific reputation system, and pluralistic recommender system encouraging rule-compatible behavior.

10. Moral behavior is always costly
Species that do not strictly optimize their benefits are often assumed to disappear eventually, due to the principles of natural selection implied by the theory of evolution. As a consequence, only the 'homo economicus' should remain, and moral decision-making, which constrains one to a subset of the available options, would vanish.

This certainly applies, if one forces everybody to interact with everybody else on equal footing, as the concept of perfect, free markets demands. In fact, evolutionary game-theoretical models show that these are conditions under which a 'tragedy of the commons' tends to occur, and where cooperation, fairness and trust tend to erode. On the other hand, social systems have found mechanisms to avoid the erosion of social capital. These mechanisms include repeated interactions, reputation effects, community interactions, group competition, sanctioning of improper behavior etc. In particular, decentralized market interactions seem to support fairness.

Countermeasures: Promote value-sensitive designs of monetary systems and information and communication systems. For example, introduce two co-existing, interacting, competitive exchange systems: one for anonymous (trans)actions (as we largely have them today) and one for accountable, traceable (trans)actions (creating 'social' money or information). Additionally, introduce suitable transaction costs to create incentives for accountable, responsible (trans)actions and to promote ethical behavior.

Summary
In conclusion, we are now living in a strongly coupled and strongly interdependent world, which poses new challenges. While it is probably unrealistic to go back beyond the level of networking and globalization we have reached, there is a great potential to develop new management approaches for our complex world based on suitable interaction rules and adaptive concepts, using real-time measurements.

It must be underlined that our current financial and economic problems cannot be solved within the current economic mainstream paradigm(s). We need to change our perspective on the financial and economic system and pursue new policies. The following recommendations are made:
  1. Adjust the perspective of our world to the fundamentally changed properties of the globalized, strongly interdependent techno-socio-economic-environmental system we have created and its resulting complex, emergent dynamic system behavior.
  2. Make large-scale investments into new economic thinking, particularly multi-disciplinary research involving knowledge from sociology, ecology, complexity science, and cybernetics.
  3. Support diversity in the system, responsible innovation, and multi-dimensional competition.
  4. Recognize the benefits of local and regional interactions for the creation of social capital such as cooperativeness, fairness, trust, etc.
  5. Require an advance testing of financial instruments and innovations for systemic impacts and restrict destabilizing instruments.
  6. Identify and establish a suitable institutional framework for interactions (suitable 'rules of the game') in order to promote a favorable self-organization.
  7. Implement better, value-sensitive incentive systems to foster more responsible action.
  8. Establish a universal, global reputation system to promote fair behavior and allow ethical behavior to survive in a competitive world.
  9. Create new compasses for political decision-making, considering environment, health, social capital, and social well-being.
  10. Develop new tools to facilitate the assessment of likely consequences of our decisions and actions (the 'social footprint'). These tools may, for example, include:

  • a 'Planetary Nervous System' to enable collective awareness of the state of our world and society in real-time,
  • a 'Living Earth Simulator' to explore side effects and opportunities of human decisions and actions,
  • a 'Global Participatory Platform' to create opportunities for social, economic and political participation,
  • exchange systems that support value-oriented interactions.

The socio-economic system envisaged here is characterized by the following features. It is:
  • based on individual decisions and self-organization,
  • using suitable incentives to support sustainability and to avoid coordination failures, tragedies of the commons, or systemic instabilities,
  • recognizing heterogeneity and diversity as factors promoting happiness, innovation, and systemic resilience.
In a previous blog post, “Networked Minds” Require A Fundamentally New Kind of Economics, we discussed how natural selection can create both self- and other-regarding preferences.


Other scientific references supporting the above arguments are provided on request.