
Saturday, 7 February 2015

NEW SECURITY APPROACHES FOR THE 21ST CENTURY: How to support crowd security and responsibility

by Dirk Helbing

-Food for thought, to trigger debate


How can we protect companies and people from violence and exploitation? How can we open up information systems for everyone without promoting an explosion of malicious activities such as cyber-crime? And how can we support compliance with the rule sets on which self-regulating systems are built? 




These challenges are addressed by Social Information Technology based on the concept of crowd security. A self-regulating system of moderators and the use of reputation systems are part of the concept. Today’s reputation systems, however, are not good enough. It is essential to allow for multiple quality criteria and diverse recommendations, which are user-controlled. This leads to the concept of “social filtering” as a basis of a self-regulating information ecosystem, which promotes diversity and innovation. 


Better awareness can help to keep us from engaging in detrimental, unfair or unsustainable interactions. However, we also need mechanisms and tools to protect us from violence, destruction and exploitation. Can we, therefore, build Social Information Technologies for protection? And what would they look like? The aim of such Social Information Technologies would be to help us avoid such negative interactions, to organize (collective) support, or to ensure fair compensation. Of course, we also need to address the issues of cyber-security and of the world's peace-keeping approach. Let us start with the latter.

The "Balance of Threat" can be unstable


Like many, I was raised during the Cold War. Military threats were serious and real, but the third world war did not happen. This is generally considered to be a success of the “Balance of Threat” (or “Balance of Terror”): if one side were to attack the other, there would still be time to launch enough intercontinental nuclear warheads to eradicate the attacker. Given the "nuclear overkill", and assuming that no side would be crazy enough to risk elimination, nobody would start such a war. 

However, what if this calculus is fundamentally flawed? There were quite a number of instances within a 60-year period when the world came dauntingly close to a third world war. The Cuban missile crisis is just the most well-known, but there were others that most of us never heard about (see World War III and Risks of nuclear accidents is rising). Perhaps we have survived the tragedy of nuclear deterrence by sheer chance?

The alarming misconception is that only shifts in relative power can destabilize a “Balance of Threat”. This falsely assumes that balanced situations, called equilibria, are inherently stable, which is often not the case. To illustrate, remember the simple experiment of a circular vehicle flow discussed earlier (see video): although it is apparently not difficult to drive a car at constant speed together with other cars, the equilibrium traffic flow will break down sooner or later. If the density on the traffic circle merely exceeds a certain value, a so-called "phantom traffic jam" will form without any particular reason – no accident, no obstacles, nothing. The lesson here is that dynamical systems starting in equilibrium can easily get out of control, even if everyone has good information, the latest technology and the best intentions.

What if this is similarly true for the balance of threat? What if this equilibrium is unstable? Then it could suddenly and unexpectedly break down. I would contend that a global-scale war may start for two fundamentally different reasons. Consider a simple analogue from physics, in which a metal plate is pushed from two opposite sides. In the first situation, if either of the two sides holding the plate becomes stronger than the other, the metal plate will move. Hence, the spheres of influence will shift. The second possibility is that both sides push equally hard, but they push so much that the metal plate suddenly bends and eventually breaks. 

Often, when an international conflict emerges, an action from one side triggers a counter-action from the opposing side. One sanction is met by a counter-sanction, and so on. In this escalating chain of events, everyone pushes harder and harder without any chance for either side to gain the upper hand. In the physics example, the metal plate may bend or break. In practical terms, the nerves of a political leader or army general, for example, may not be infinitely strong. Furthermore, not all events are under their control. Thus, under enormous pressure, things might keep escalating and may suddenly get out of control, even if nobody wants this to happen, even if everyone just wants to save face. And this is still the most optimistic scenario, one in which all actors act rationally, for which there is no guarantee, however. 

In recent years, evidence has accumulated that many wars in human history have occurred due to one of the two instabilities discussed above. The FuturICT blog on the Complexity Time Bomb described how war can result without aggressive intentions on either side. Furthermore, recent books have revealed that World War I resulted from an eventual loss of control – the outcome of a long chain of events, a domino effect that probably resulted from the second kind of instability. Moreover, the conflict in the Middle East has lasted for many decades, and it has taught us one thing: winning every battle does not necessarily win a war (as a former secret service chief is quoted in the movie “The Gatekeepers”). Similar lessons had to be learned from the wars in Afghanistan and Iraq. Therefore, a new kind of thinking about security is needed. 

Limits of the sanctioning approach


Whilst sanctioning might create social order in some cases, it can also cause instability and escalation in others. In the conflict in the Middle East, punishment is unsuccessful – the punishee does not accept the punishment, because values and culture differ. In such cases, the punishment is considered an undue assault and aggression, and a strong enough punishee will therefore strike back to maintain his or her own values and culture. In this manner, a cycle of escalation ensues, in which both sides drive the escalation further, each fuelled by their conviction that they are doing the right thing. In such a situation, deterrence is clearly not an effective solution. In other words, it is not useful to organize security alliances among countries that share the same values, as this creates precisely the cultural blocs that are unable to exercise acceptable sanctioning measures and will therefore run into escalating conflicts that can result in wars. Instead, we need a new, symmetrical security architecture suited for a multi-polar world and able to deal with cultural diversity. What we need are new strategies and a new kind of thinking. We also need a suitable approach in the face of newly emerging cyber-threats.

How to manage a multi-polar world?


In the past, we had a world with a few superpowers and blocs of countries forming alliances with them. Whenever one of these countries came under attack, it would be under the protection of the others belonging to the same bloc. After World War II, the United States of America and the Soviet Union were the only superpowers remaining. With the breakdown of the Warsaw Pact, just one superpower remained. China is now the strongest economic power in the world, and with Russia's comeback to world politics through the conflicts in Syria and Ukraine, we are now living in a multi-polar world. Such a world is no longer well controllable, as the "Three-Body Problem" suggests. This problem originally refers to the interaction of three celestial bodies, for which chaotic dynamics may result despite the simple conservation laws of mechanics. So, how much more unpredictable would a multi-polar world be? 

It becomes increasingly obvious that today no power (political or business) in the world is strong enough to play the role of a world police, and that we need a new security architecture. If this were an architecture for the entire world, it would need to have a number of features: The classical security alliances (power blocs) would have to be overcome. In view of globalization, thinking from the perspective of nation states makes decreasing sense. Furthermore, the concept of a "Balance of Threat" would have to be replaced by a "Network of Trust." The concept would have to be symmetric and not based on exclusive rights or veto power. It would have to be based on a set of shared values, and whoever violated them would feel the joint response of all the other countries in the world, independently of their classical alliances. For this approach to work well, mutual trust would have to grow, which would require more transparency and less secrecy. 

In the emerging digital society, how much secrecy is still essential? I cannot give a definitive answer, but I do believe that secrecy at the right time, place and context may have some benefits (e.g. privacy). But how much opacity should public institutions acting on behalf of their citizens be allowed to have? And for what time period? Will the concept of secrecy be feasible at all in the future? Certainly, Wikileaks and the Snowden revelations raise the question of whether secrets can still be kept in a data-rich world. Moreover, secret services have often been accused of engaging in unlawful behaviour, which they claim is necessary to get an inside view of the closed circles of terrorism and organized crime. However, it has been stressed by some that such a strategy may actually promote terrorism and crime, and undermine the legitimacy of secret services, or even of the states or powers they serve. Finally, the effectiveness of secret services has often been questioned, as has the question of whether they do more good than harm.

What alternatives might we have to create a new security architecture? In this context, it is relevant that more than 95 percent of the knowledge of secret services derives from public sources. As ever more activities in the world now leave a digital shadow and become traceable in real time, couldn't the largest part of public security be produced by public services rather than secret services? This does not necessarily mean closing down secret services, but opening up more information to wider circles. For example, why shouldn't specially qualified and authorized teams at public universities develop the algorithms and do the data mining to identify suspicious activities? Thanks to their higher transparency, they are exposed to scientific criticism and public scrutiny and would therefore be able to deliver higher-quality results. Given the many mistakes one can make when mining data, this would probably reduce the risk of wrong conclusions and other undesirable side effects. I am convinced that a step towards more transparency could largely increase the perceived legitimacy of the security apparatus and also the trust of people in the activities of their governments and states. 

Perhaps some readers of this book will find the above proposal to build public security on public efforts absurd, but it's not. In many countries, the police have already started to involve citizens in their search for criminals, for example through public webpages displaying pictures of suspects, as well as through text messages and social media. "Crowd security" is just the next logical step. In fact, we might put this into a bigger picture. As we know, the Internet started off as ARPANET, a military communication network. Opening it up for civilian use eventually enabled the creation of the World Wide Web, which then triggered entirely new kinds of business and the digital economy. With the invention and ubiquity of Social Media, a large proportion of us have become part of a world-spanning network. The volume and dynamics of the related digital economy have become so extensive that the military and secret services often cannot keep up with it anymore and, hence, are increasingly buying themselves into civilian business solutions. This clearly shows that a future concept to protect our society and its citizens must largely build on the power of civic society. 

Crowd security rather than superpowers


Let me give an example of a system, in which crowd security is surprisingly effective and efficient, and where it creates "civic resilience". In the late nineties, I spent some time as a visiting scientist at Tel Aviv University with Isaac Goldhirsch. At that time I read in the tourist guide that the average age of people in the country was 32, so I was prepared for the worst. But I found myself enjoying my stay in the Middle East immensely. Despite the daily threats, people seemed to have a positive attitude towards life. 

One of the things that impressed me much was the way security at public beaches was achieved, all based on unwritten rules. Everyone knew that any bag at the beach might contain a bomb that could kill you. Bags with nobody around were considered particularly suspicious. But at a beach, there are always some people swimming, so unattended bags are normal. In this situation, people solve the problem by forming an invisible security network. On arriving at the beach, everyone becomes part of this informal network and implicitly takes responsibility for what is going on. That is, everyone scans the neighbourhood for suspicious activities. Who has newly arrived at the beach? What kind of people are they? How do they behave? Do they know others? Where do they go when leaving their bags unattended? In this way, it is almost impossible to leave a bag containing a bomb without arousing the suspicions of other people. To the best of my knowledge, there were relatively few bomb explosions at the beaches.

I would like to term the above distributed security activity "crowd security". We have recently learned about the benefits of "crowd intelligence," "crowd sourcing," and "crowd funding," so why not "crowd security"? In fact, the way societies establish and maintain social norms is very much based on "peer punishment" of those who violate these norms. From raising eyebrows to criticizing others, or showing solidarity with someone who is being attacked, there is a lot one can do to support a fair coexistence of people. I recall that, during one of our summer schools on Lipari Island in Italy, one of our US speakers noted: "In my country, you cannot even distribute some flyers in a private mall without security stepping in, but nevertheless, there are shootings all the time. I am surprised that everything is so peaceful in the public space on this island: young people next to old ones, Italians next to all sorts of foreigners, and I have not even seen a single policeman all these days." Again, people seem to be able to sort things out in a constructive way. 

How then can we generalize this within an international context? I have sometimes wondered whether having less power might work better than having more. When you have little power, you must be sensitive to what happens in your environment, and this helps you to adapt (thereby allowing self-regulation to work). However, if you have a lot of power, you would not make a sufficient effort to find a solution that satisfies as many people as possible. You would rather prioritize your own interests and force everybody else to adapt. But this would not create a system-optimal solution. As the example of cake-cutting suggests, the outcome wouldn't be fair, and therefore not sustainable in the long run. Why? Because if you were too powerful, you would no longer get honest answers, and sooner or later you would make really big mistakes that take a long time to recover from. For good reasons, Switzerland does not have a single leader. The presidency is held for a short period and rotates. This is interesting, as it requires everyone to find a sustainable balance of interests that is supported by many and, hence, has higher legitimacy. But there are more arguments than this for a decentralized, bottom-up "crowd security" approach.

The immune system as prime example


One of the most astonishing complex systems in the world is our immune system. Even though we are bombarded every day by thousands of viruses, bacteria, and other harmful agents, our immune system is pretty good at protecting us, usually for 5 to 10 decades. This is probably more effective than any other protection system we know. And there is another, even more surprising fact: in contrast to our central nervous system, the immune system is decentrally organized. It is a well-known fact that decentralized systems tend to be more resilient. In particular, while targeted attacks or point failures can shut down a centralized system, a decentralized system will usually survive the impact of attacks and recover. This is one reason for the robustness of the Internet -- and also for the success of guerrilla defence strategies (whether we like this or not). 

Turning enemies into friends


There is actually a further surprise: a major part of our healthy immune response is based on our digestive tract, which contains on the order of a hundred trillion bacteria -- about 10 times more than our body has cells. These bacteria are not only important for making the contents of our food accessible to our body, as they split it up into ingredients while finding food for themselves. The rich zoo of about a thousand different kinds of bacteria in us even forms an entire ecosystem, which fights dangerous intruding bacteria that do not match the needs of our body. Bacteria that were once our enemies have been turned into our allies through a symbiotic relationship that emerged through an evolutionary process. My friend and colleague Dirk Brockmann recently pointed out to me the really amazing level of cooperation which is the basis of all developed life and is now studied in the field of hologenomics. In fact, humans, too, have come up with tricky mechanisms encouraging cooperation. These are often based on exchange, such as trade, and on a system of mutual incentive mechanisms, which promote coordination and cooperation. Social Information Technologies are intended to support this. 

So why don't we build our societal protection system and the future Internet in a way that is inspired by our biological immune system? It appears that societies, too, have something like a basic immune system. The peer-to-peer sanctioning of deviations from social norms, which I already mentioned before, is one example of this. We now witness internet vigilantes or lynch mobs on the web, criticizing things that people find improper or distasteful. I acknowledge that lynch mobbing can be problematic and may violate human rights; this will require us to find a suitable framework. It seems that we are seeing here the early stage of the evolution of a new, social immune system. Rather than censoring or turning off social media, as in some countries, we should develop them further to make them compatible with our laws and cultural values. Then systems like these could provide useful feedback that would help our societies and economy to provide better conditions, products and services.

The question is how we best obtain a high level of security in a self-regulating economy and society. Looking ahead, we might create a security system that is partly based on automated routines and partly on crowd intelligence. Let me illustrate this again with the example of the Internet: let's assume that servers which are part of the Internet architecture would autonomously analyze the data traffic for suspicious properties, but -- in contrast to what we are seeing today -- we would not run centralized data collection and data analytics. (Our brain certainly does not record and evaluate everything that happens in our immune system, including the digestive tract, but our body is nevertheless protected pretty well.) In case of detected suspicious activities, a number of responses are conceivable, for example: (1) the execution of the activity could be put on hold while the sender is asked for feedback, or (2) the event could trigger an alert to the sender or receiver of the data, a local administrator, or a public forum, whatever seems appropriate. The published information could be screened by a crowd-based approach to determine possible risks (particularly systemic risks) and to take proper action. While actions of type (1) would be performed automatically by computers, algorithms, or bots, actions of type (2) would correspond to the complementary crowd security approach. In fact, there would be several levels of self-regulation by the crowd, as I describe later. One may also imagine a closer meshing of computational and human-based procedures, which would mutually enhance each other.
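To make the two response types concrete, here is a toy sketch of such a local, server-side check. The scoring rule, thresholds and message fields are invented placeholders, not a proposal for a real detector:

```python
from dataclasses import dataclass
from enum import Enum, auto

# A toy sketch of the two response types described above. The scoring
# rule and the thresholds are purely hypothetical.
class Action(Enum):
    PASS = auto()
    HOLD_ASK_SENDER = auto()   # response (1): pause and request feedback
    RAISE_ALERT = auto()       # response (2): notify humans / the crowd

@dataclass
class Message:
    sender: str
    size_bytes: int
    rate_per_min: int          # how many similar messages were sent recently

def local_check(msg: Message) -> Action:
    """Each server applies this locally -- no central data collection."""
    score = 0
    if msg.rate_per_min > 100:         # bursty, bot-like traffic
        score += 2
    if msg.size_bytes > 10_000_000:    # unusually large payload
        score += 1
    if score >= 3:
        return Action.RAISE_ALERT
    if score >= 1:
        return Action.HOLD_ASK_SENDER
    return Action.PASS

print(local_check(Message("alice", 500, 3)))           # Action.PASS
print(local_check(Message("bot42", 20_000_000, 500)))  # Action.RAISE_ALERT
```

The point of the sketch is architectural: each node decides locally which of the two responses applies, and only the alerts of type (2) are handed over to human or crowd-based scrutiny.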


Managing the chat room


We have seen that information exchange and communication on the web has quickly evolved. In the beginning, there was no regulation or self-regulation in place at all. These were the times of the Wild Wild Web, and people often did not respect human dignity or the rights of companies. But police and other executive authorities were also experimenting with new and controversial Internet-based instruments, such as Internet pillories to publicly name people. 

All in all, however, one can see a gradual development of improved mechanisms and instruments. For example, public comments in news forums were initially published without moderation, but this spread a lot of low-quality content. Then, comments were increasingly assessed for their lawfulness (e.g. for respecting human dignity) before they went on the web. Then, it became possible to comment on comments. Now, comments are rated by the readers, and good ones get pushed to the top. The next logical step would be to rate commentators and raters. We can thus see the evolution of a self-regulatory system that channels free expression into increasingly constructive paths. I believe it is possible to reach a responsible use of the Internet based on principles of self-regulation. Eventually, most malicious behaviour will be managed by automated and crowd-based mechanisms such as the reporting of inappropriate content and reputation-based placements. A small fraction will have to be taken care of by a moderator, such as a chat room master, and there will be a hierarchy of complaint instances to handle the remaining, complicated cases. I expect that, in the end, only a few cases will remain to be decided in court, while most activities will be self-governed by social feedback loops in terms of sanctions and rewards by peers.
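The "rate the raters" step can be sketched in a few lines: each reader's vote on a comment counts in proportion to that reader's own reputation. The names, reputation values and votes below are, of course, invented:

```python
# Reputation-weighted comment ranking: a vote counts in proportion to
# the rater's reputation. All names and numbers are invented examples.
rater_reputation = {"ann": 0.9, "bob": 0.5, "eve": 0.1}

comments = {
    "thoughtful reply": [("ann", +1), ("bob", +1), ("eve", -1)],
    "spammy link":      [("eve", +1), ("ann", -1), ("bob", -1)],
}

def score(votes):
    """Sum of votes, each weighted by the rater's reputation."""
    return sum(rater_reputation[r] * v for r, v in votes)

ranked = sorted(comments, key=lambda c: score(comments[c]), reverse=True)
print(ranked)   # ['thoughtful reply', 'spammy link']
```

A low-reputation account (here "eve") can no longer push spam to the top on its own, which is exactly the self-regulating effect described above.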

The above mechanisms will also feed back from the virtual to the real world, and we will see an evolution of our over-regulated, inefficient, expensive and slow legal system into one that is largely self-regulating, more effective and more efficient. Here we may learn from the way interactive multi-player online games or Interactive Virtual Worlds are managed, particularly those populated by children. One of my colleagues, Seth Frey, has pointed me to one such example, Club Penguin. To keep bad influences away from children, communication and actions within the Club Penguin world are monitored by administrators. As the entire population of Club Penguin users is too large to be overseen by a single person, there are several communities run on several servers, i.e. the Club Penguin world is distributed. Moreover, as every administrator manages his or her community autonomously, these may be viewed as parallel virtual worlds. This provides us with an exceptional opportunity to compare different ways of governance. Our study is far from complete, so I just want to mention this much: it turns out that, if vandalism is automatically sanctioned by a robotic computer program, this tends to suppress creativity and results in a boring world. This is reminiscent of the many failed past attempts to create well-functioning, liveable cities managed in a top-down way.

Returning to the virtual world of Club Penguin, I certainly don't want to argue in favour of vandalism, but I want to point out the following: the most creative and innovative ideas are, by their very nature, incompatible with established rules, and it requires human judgement to determine whether they should be accepted or sanctioned. This has an interesting implication: we may actually allow different rules to be implemented in different communities, as they may find different things acceptable or not. This will eventually lead to diverse Interactive Virtual Worlds, which gives people the opportunity to personally choose the world(s) that fit them. 

Embedding in our current institutional system


Of course, we need to make sure to stay within the limits of the constitution and fundamental laws, such as human rights and respect for human dignity. Such decisions may require difficult moral judgements and particular qualifications of the "judge," i.e. the administrator of the gaming community or chat room. So it does make sense to have a hierarchy of such "judges" based on their qualification to decide difficult matters in an acceptable and respected way. These arbiters would be called "community moderators". 

How would a "hierarchy of competence" emerge among such community moderators? This would be based on previous merits, i.e. on qualifications, contributions, and performance. Decisions would be rated both from the lower and the upper level. Over sufficiently many decisions, this would determine who will be promoted -- always for a limited amount of time -- and who will not. If the punished individual accepts the sentence of the arbiter, the moderation procedure is finished, and the sentence is published. Otherwise, the procedure continues on the next higher level, which is supposed to spend more effort on finding a judgement compatible with previous traditions, to reach a reasonable level of continuity and predictability. 
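As a toy illustration of such a merit-based promotion scheme, consider moderators whose decisions are rated both from below (the community) and from above (higher-level moderators). The names, ratings and the 0.75 threshold are purely hypothetical:

```python
# Each moderator's past decisions carry a pair of ratings in [0, 1]:
# one from the community below, one from the level above. Sustained
# good ratings lead to a time-limited promotion. All numbers invented.
moderators = {
    "mod_a": [(0.9, 0.8), (0.8, 0.9), (0.85, 0.9)],
    "mod_b": [(0.4, 0.6), (0.5, 0.5), (0.6, 0.4)],
}

def merit(ratings):
    """Average of lower- and upper-level ratings over past decisions."""
    return sum((lo + hi) / 2 for lo, hi in ratings) / len(ratings)

# Promote (for a limited term) whoever clears a merit threshold:
promoted = [m for m, r in moderators.items() if merit(r) > 0.75]
print(promoted)   # ['mod_a']
```

Because both levels rate each decision, a moderator can neither please only the crowd nor only the hierarchy above, which is the point of rating "from the lower and the upper level".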

Whoever asks for a judgement process (or revision) would have to cover the costs (depending on the system, this might also be virtual money, such as credit points). Judgements on higher levels would become more expensive, and for the sake of fairness, fees and fines would not correspond to a certain absolute amount of money, but to a certain percentage of the earnings made in the past, for example, in the last 3 years. In Switzerland, for example, such a percentage-based system is successfully applied to traffic fines. 
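As a minimal sketch of such a percentage-based scheme (with an invented 2 percent rate), a fee could be computed in proportion to average annual earnings over the last three years:

```python
# Percentage-based fees: the same offence costs everyone the same
# *share* of their income. The 2% rate and the earnings figures are
# purely illustrative.
def fee(earnings_last_3_years, rate=0.02):
    """Fee proportional to average annual earnings over 3 years."""
    return rate * (earnings_last_3_years / 3)

print(round(fee(150_000)))     # 1000  -- modest earner, modest fee
print(round(fee(15_000_000)))  # 100000 -- same offence, larger fee
```

The deterrent effect is thus comparable across income levels, which a flat fine cannot achieve.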

Only when the above-described self-regulation fails to resolve a conflict of interest over all judgement instances of the Interactive Virtual World would today's central authorities need to step in. One might even think that many of today's legal cases could be handled in the above crowd-based way of conflict resolution, and that today's judges would then only form the highest hierarchy. This would fit the system of self-regulation proposed above into our current organization of society. I expect the resulting procedures to be effective and efficient. The long duration of many court cases could be dramatically cut down. In other words, new community-based institutions of self-regulation should be able to help resolve the large majority of conflicts of interest better than existing institutions. I see the role of courts, police, and military mainly to help restore a balance of interests and power, when other means have failed. In this connection, it is important to remember that control attempts in complex systems often fail and tend to damage the functionality of the system rather than fixing it in a sustainable way. Therefore, I don't think that these institutions should try to control what happens in society. 

Ending over-regulation


I believe that, over time, the principles of self-regulation will replace today's over-regulated system. A hundred years ago, only a handful of laws were made in the United Kingdom in one year. Now, a new regulation is put into practice every few hours. In this way, we have arrived at a system with literally tens of thousands of regulations. Nobody can know all of them, even though we are supposed to (and ignorance does not excuse us). Moreover, many laws are revised shortly after their first implementation. 

Even lawyers don't know all laws and regulations by heart. If you ask them whether one thing is right or the opposite, they will usually answer: "it depends." So, we are confronted with a system of partially inconsistent over-regulation, which puts most people in a situation where they effectively violate laws several times a year -- and they don't even know in advance how a court would judge the situation. This creates an awkward, arbitrary element in our legal system. While some people get prosecuted, others get away, and this creates an unfair system, not just because some can afford better lawyers than others.

However, this is not the only way an unfair situation is created, while our legal system intends just the opposite, i.e. to ensure a system that doesn't generate advantages for some individuals, companies, or groups. So what is the problem? Whenever a new law or regulation is applied, it requires some people or companies to adapt a lot, while others have to adapt just a little. This creates advantages for some and disadvantages for others. Powerful stakeholders will make sure a new law fits their needs, such that they must adapt only a little, while their competitors have to adapt much more. Hence, the new law will make them more powerful again. However, even if we had no lobbying to tailor law-making to particular interest groups, the outcome would be similar; just the stakeholders who profit most would vary more over time. The reason is simple: if N regulations are made, p is the probability that you have to adapt little, and (1-p) is the chance that you have to adapt a lot, then the probability that you are a beneficiary k times follows the binomial distribution (N choose k) p^k (1-p)^(N-k). In other words, there is automatically a very small percentage of stakeholders who benefit from regulations enormously, while the great majority is considerably disadvantaged relative to them. Putting it differently: the homogenization of the socio-economic world comes along with a serious problem: the more rules we apply to everyone, the more people will find that this world is not well fitted to their needs. And this explains a lot of the frustration among citizens and companies, not just in the European Union. 
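A quick calculation illustrates how small this group of systematic beneficiaries is. The sketch below assumes, purely for illustration, N = 100 regulations and p = 0.5:

```python
from math import comb

# With N independent regulations and probability p of being on the
# "adapt little" side each time, the number of times you benefit
# follows a binomial distribution. N and p are illustrative choices.
N, p = 100, 0.5

def prob_benefit(k):
    """P(benefiting exactly k times out of N regulations)."""
    return comb(N, k) * p**k * (1 - p)**(N - k)

# Share of stakeholders who end up on the winning side in at least
# 70 of the 100 regulations -- a vanishing minority:
winners = sum(prob_benefit(k) for k in range(70, N + 1))
print(f"P(benefit >= 70 of 100) = {winners:.2e}")
```

Fewer than one stakeholder in ten thousand benefits that consistently, while everyone else is repeatedly disadvantaged relative to this small group, just as argued above.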

Only a highly diverse system with many niches governed by their own sets of rules allows everyone to thrive. Interestingly, this is exactly how nature works. It is the existence of numerous niches that allows many species to survive, and new ones to emerge. For similar reasons, socio-economic diversity is an important precondition for innovation, which is important for economic prosperity and social well-being. Nature is much less governed by rules than today's service societies. For example, recent discoveries in "epigenetics" revealed that not even the genetic code is always read in the same way; its transcription largely depends on the biological and social environment. 

So how can we build socio-economic niches in which people can self-organize according to their own rules, within the boundaries of our constitution? Can we find mechanisms that promote social order, but allow different communities to co-exist, each governed by its own set of values and quality criteria? Yes, I believe this is possible. Social Information Technologies will help people and companies to master increasing levels of diversity in a mutually beneficial way. Furthermore, reputation systems can promote cooperation. If they are multi-dimensional, pluralistic, and community-driven, they can offer a powerful framework for social self-regulation, which provides enough space for diversity and opportunities for everyone. 

Pluralistic, community-driven reputation systems


Here I want to elaborate a bit more on another important component of the "social immune system", namely reputation systems. These days, reputation and recommender systems are spreading across the Web, which underlines their value and function. People can rate products, news, and comments, and they do! If they make the effort, there must be a reason for it. In fact, Amazon, Ebay, Tripadvisor and many other platforms offer recommendations in exchange. Such recommendations are beneficial not only for users, who tend to get a better service, but also for companies, since a higher reputation allows them to sell a product or service at a higher price. However, it is not good enough to leave it to a company to decide what recommendations we get and how we see the world. This would promote manipulation and undermine the "wisdom of the crowd", leading to bad outcomes. It is, therefore, important that recommender systems do not reduce socio-diversity. In other words, we should be able to look at the world from our own perspective, based on our own values and quality criteria. Only when these different perspectives come together can collective intelligence emerge. 

As a consequence, reputation systems would have to become much more user-controlled and pluralistic. Therefore, when users post ratings or comments on products, companies, news, pieces of information, and information sources (including people), it should be possible to assess not just the overall quality, but also different quality dimensions such as the physical, chemical, biological, environmental, economic, technological, and social qualities. Such dimensions may include popularity, durability, sustainability, social factors, or how controversial something is. It is, then, possible to identify communities based on shared tastes (and social relationships). 

We know that people care about different things. Some may love slapstick comedies, while others detest them. So it is important to consider the respective relevant reference group, and this might even change depending on the role we currently take, e.g. at work, at home, or in a circle of friends. To take this into account, each person should be able to have diverse profiles, which we may call "personas". For example, book recommendations would have to be different if we look for a book for ourselves, for our family members, or for our friends. 

Creating a trend to the better


Overall, the challenge of creating a universal, pluralistic reputation system may be imagined as having to transfer the principles, on which social order in a village is based, to the global village, i.e. to conditions of a globalized world. The underlying success principle is a merit-based matching of people making similar efforts. This can prevent the erosion of cooperation based on "indirect reciprocity," as scientists would say. For this approach to play out well, there are a number of things to consider: (1) the reputation system must be resistant to manipulation attempts; (2) people should not be terrorized by rumours; (3) to allow for more individual exploration and innovation than in a village, one would like to have the advantages of the greater freedoms of city life -- this requires sufficient options for anonymity (to an extent that cannot challenge systemic stability).

First, to respect the right of informational self-determination, a person would be able to decide what kind of personal information (social, economic, health, intimate, or other information) he or she makes accessible, for what purpose, for what period of time, and to what circle (such as everyone, non-profit organizations, commercial companies, friends, family members, or just particular individuals). These settings would then allow selected others to access and decrypt selected personal information. Of course, one might also decide not to reveal any personal information at all. However, I expect that having a reputation for something will be better for most people than having none, as it would help them find people who have similar preferences and tastes.

Second, people should be able to post their comments or ratings either in an anonymous, pseudonymous, or personally identifiable way. But pseudonymous posts would have, for example, a 10 times higher weight than anonymous ones, and personal ones a 10 times higher weight than pseudonymous ones. Moreover, everyone who posts something would have to declare the category of information: is it a fact (potentially falsifiable and linked to evidence that allows others to check it), an advertisement (if there is a personal benefit for posting it), or an opinion (any other information)? Ratings would always have the category "opinion" or "advertisement". If people use the wrong category or post false information, as identified and reported by, say, 10 others, the weight of their ratings (their "influence") would be reduced by a factor of 10 (of course, these values may be adjusted). All other ratings of the same person or pseudonym would be reduced by a factor of 2. This mechanism ensures that manipulation or cheating does not pay off. 
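A minimal sketch of how such a weighting rule might look in code. The identity weights, the report threshold, and the penalty factors are simply the illustrative numbers from the text above, not a finished design:

```python
REPORT_THRESHOLD = 10  # roughly 10 users reporting a false or miscategorized post

IDENTITY_WEIGHT = {"anonymous": 1.0, "pseudonymous": 10.0, "personal": 100.0}

def rating_weight(identity, reports_on_this_rating, author_was_flagged_elsewhere):
    """Weight ("influence") of a single rating under the proposed scheme."""
    w = IDENTITY_WEIGHT[identity]
    if reports_on_this_rating >= REPORT_THRESHOLD:
        w /= 10.0  # this rating itself was reported as false or miscategorized
    elif author_was_flagged_elsewhere:
        w /= 2.0   # penalty spills over to the rater's other ratings
    return w
```

For example, an unchallenged personal rating carries weight 100.0, while a pseudonymous rating reported by a dozen users drops to 1.0, the weight of an ordinary anonymous post.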

Third, users would be able to choose among many different reputation filters and recommender algorithms. Just imagine: we could set up the filters ourselves, share them with our friends and colleagues, modify them, and rate them. For example, we could have filters recommending the latest news, the most controversial stories, the news our friends are interested in, or a surprise filter. So, we could choose among the set of filters that we find most useful. Considering credibility and relevance, the filters would also put a stronger weight on information sources we trust (e.g. the opinions of friends or family members), and neglect information sources we do not want to rely on (e.g. anonymous ratings). For this, users would rate information sources as well, i.e. other raters. Then, spammers would quickly lose their reputation and, with it, their influence on the recommendations made.
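The simplest conceivable filter of this kind is a trust-weighted average over rating sources. The source names and trust values below are hypothetical illustrations, not a proposed design:

```python
def filtered_score(ratings, trust):
    """Trust-weighted average of (source, score) pairs; untrusted sources drop out."""
    weighted = sum(trust.get(src, 0.0) * score for src, score in ratings)
    total = sum(trust.get(src, 0.0) for src, _ in ratings)
    return weighted / total if total else None

# Hypothetical trust settings of one user: full trust in friends and family,
# no trust in anonymous ratings.
trust = {"friend": 1.0, "family": 1.0, "anonymous": 0.0}
ratings = [("friend", 4.0), ("family", 5.0), ("anonymous", 1.0)]
print(filtered_score(ratings, trust))  # 4.5 -- the anonymous rating is ignored
```

Because each user supplies their own trust table, two users can see quite different "reputations" for the same product, which is exactly the pluralism argued for above.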

Users would not only use information filters (such as the ones generating personalized recommendations), but would also be able to generate, share, and modify them. I would like to term this approach "social filtering." (A simple system of this kind has been implemented in Virtual Journal.) 

Together, the system of personal information filters would establish an "information ecosystem", in which increasingly reliable filters will evolve through modification and selection, thereby steadily enhancing our ability to find meaningful information. The pluralistic reputation values of companies and their products (e.g. insurance contracts or loan schemes) will then give a quite differentiated picture, which can also help companies to develop customized and more useful, successful products. Reputation systems are therefore advantageous for both customers and producers: customers will get better offers, and producers can charge a higher price for better quality, leading to mutual benefit.

Summary


Social Information Technologies for protection might be imagined to work like a kind of immune system, i.e. a decentralized system that responds to changes in our environment and checks out the compatibility with our own values and interests. If negative externalities are to be expected (i.e. if the value of an interaction would be negative), a protective "immune response" would be triggered. 

Part of this would be an alarm system, a kind of "radar" that alerts a user to impending dangers and makes him or her aware of them. In fact, the "Internet of Things" will make changes – both gains and losses – measurable, including psychological impacts such as stress, or social impacts, such as changes in reputation or power. Social Information Technologies for protection would help people to band together against others who attack or exploit them. A similar protection mechanism may be set up for institutions, or even countries. Such social protection ("crowd security") might often be more efficient and effective than long-lasting and complicated lawsuits. Of course, protection by legal institutions would still exist, but lawsuits would become a last resort rather than a first resort, reserved for cases where social protection fails, e.g. when there is a need to protect someone from organized crime. Note that a suitably designed reputation system alone would be expected to be quite effective in discouraging certain kinds of exploitation or aggression, as it would discourage others from interacting with such people or companies, and thereby diminish the further success of those who trouble others.

Friday, 10 October 2014

COMPLEXITY TIME BOMB: When systems get out of control


 by Dirk Helbing

Photo: Renate Wernli

This is the second in a series of blog posts that form chapters of my forthcoming book Digital Society. Last week's chapter was titled GENIE OUT OF THE BOTTLE: The digital revolution on its way.

Financial crises, terrorism, conflict, crime: it turns out, the conventional ‘medicines’ to tackle global problems are often inefficient or even counter-productive. The reason for this is surprisingly simple: we approach these problems with an outdated understanding of our world. While the world might still look similar to how it has looked for a long time, I will argue that it has, in fact, inconspicuously but fundamentally changed over time.

We are used to the idea that societies must be protected from external threats such as earthquakes, volcanic eruptions, hurricanes, and military attacks by enemies. However, we are increasingly threatened by another kind of problem: those that come from within the system, such as financial instabilities, economic crises, social and political unrest, organized crime and cybercrime, environmental change, and spreading diseases. These threats have become some of our greatest worries. According to the World Economic Forum's Risk Map, the largest risks today are of a socio-economic nature, such as inequality or governance failure. These global 21st-century problems cannot be solved with 20th-century wisdom, because they are of a different scale and result from a new level of complexity in today's socio-economic systems. We must therefore better understand what complex systems are and what their properties are. To this end, I will discuss the main reasons why things go wrong: unstable dynamics, cascading failures in networks, and systemic interdependencies. I will illustrate these problems with examples such as traffic jams, crowd disasters, blackouts, financial crises, crime, wars, and revolutions.

Phantom traffic jams


Complex systems include phenomena ranging from turbulent flows and the global weather system to decision-making, opinion formation in groups, financial and economic markets, and the evolution and spread of languages. But we must take care to distinguish complex systems from complicated ones. A car is complicated: it consists of thousands of parts, yet it is easy to control (when it works properly). Traffic flow, on the other hand, which depends on the interactions of many cars, is a complex dynamical system, which produces counter-intuitive, individually uncontrollable behaviors such as "phantom traffic jams" that seem to have no cause. While many traffic jams do occur for a specific, identifiable reason, such as an accident or a building site, everyone has also encountered situations where a vehicle queue appeared "out of nothing", with no visible cause - see visualisation.

To explore the true reasons for these phantom traffic jams, Yuki Sugiyama from Nagoya University in Japan and his colleagues carried out an experiment, in which they asked many people to drive their cars around a circular track - see visualisation. The task sounds simple, and indeed all vehicles moved smoothly for some time. But then a random perturbation in the traffic flow, an unexpected slow-down of one car, triggered the appearance of "stop-and-go" traffic – a traffic jam that travelled backwards around the track, against the driving direction.

While we often blame the poor driving skills of others to explain such "phantom traffic jams," studies in complexity science have shown that they rather emerge as a collective phenomenon that unavoidably results from the interactions between vehicles. A detailed analysis shows that, if the density of cars exceeds a certain "critical" threshold – that is, if their average separation is smaller than a certain value – then the smallest perturbation in the speed of any car will be amplified until it causes a breakdown of the entire flow. Because drivers need some time to respond to such a disturbance, the next driver in line will have to brake harder to avoid an accident. Then the following driver will have to brake even harder, and so on. This chain reaction amplifies the small initial perturbation and eventually produces the jam – which of course every individual would prefer to avoid.
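This chain reaction can be reproduced in a few lines with the optimal-velocity car-following model of Bando and co-workers, the model underlying Sugiyama's ring-road experiment. The parameter values below (30 cars on a 60-unit ring, sensitivity a = 1.0) are illustrative choices that put the uniform flow just inside the unstable regime; they are not calibrated to real traffic:

```python
import math

def optimal_velocity(h):
    # Desired speed as a function of the headway h to the car in front
    return math.tanh(h - 2.0) + math.tanh(2.0)

def simulate_ring(n=30, length=60.0, a=1.0, dt=0.05, t_end=1000.0):
    x = [i * length / n for i in range(n)]   # equally spaced cars on a ring road
    v = [optimal_velocity(length / n)] * n   # everyone at the uniform-flow speed
    v[0] += 0.1                              # one driver briefly deviates
    for _ in range(int(t_end / dt)):
        h = [(x[(i + 1) % n] - x[i]) % length for i in range(n)]
        v = [v[i] + a * (optimal_velocity(h[i]) - v[i]) * dt for i in range(n)]
        x = [(x[i] + v[i] * dt) % length for i in range(n)]
    return v

speeds = simulate_ring()
# The tiny 0.1 perturbation has grown into stop-and-go waves:
# some cars crawl while others move near free speed.
```

Run with a lower density (e.g. length=120.0), the same perturbation simply dies out, illustrating the critical threshold described above.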

Recessions - traffic jams in the world economy?


Economic supply chains might exhibit a similar kind of behavior. As known from John Sterman's "beer distribution game," supply chains are also hard to control. Even experienced managers will often end up ordering too much beer, or will run out of it. This is a situation that is as difficult to avoid as stop-and-go traffic. In fact, our scientific work suggests that economic recessions may be regarded as a kind of traffic jam in the global supply network (see figure below). This is actually somewhat heartening news, since it implies that, just as with traffic flow, engineered solutions may exist that can mitigate economic recessions, provided that we have access to real-time data on the world's supplies and materials flows. Such solutions will be discussed later in the chapter on Socio-Inspired Technologies.
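The instability behind the beer game can be sketched with a standard order-up-to inventory policy: each stage forecasts demand by exponential smoothing and keeps enough stock to cover the delivery lead time. All parameters and the demand series are made-up values for illustration; the qualitative outcome, order fluctuations growing from stage to stage, is the well-known "bullwhip effect":

```python
import random

def stage_orders(demand, alpha=0.5, lead_time=2):
    """Orders placed by one supply-chain stage facing the given demand series,
    using an order-up-to policy with an exponentially smoothed forecast."""
    forecast = demand[0]
    target_prev = (lead_time + 1) * forecast
    orders = []
    for d in demand:
        forecast += alpha * (d - forecast)    # update the demand forecast
        target = (lead_time + 1) * forecast   # stock to cover the lead time
        orders.append(max(0.0, d + target - target_prev))
        target_prev = target
    return orders

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

random.seed(1)
retail = [10 + random.gauss(0, 1) for _ in range(500)]  # customer demand
wholesale = stage_orders(retail)    # orders the retailer sends upstream
factory = stage_orders(wholesale)   # orders the wholesaler sends upstream
# variance(retail) < variance(wholesale) < variance(factory)
```

Small random fluctuations in customer demand are amplified at every stage, just as a small braking manoeuvre is amplified from car to car.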

Instability and self-organization in strongly interacting systems


A shocking example of systemic instabilities, discussed later, is the occurrence of crowd disasters. Here, even when everyone is peacefully minded and tries to avoid harming others, many people might die. What do all these examples tell us? Our experience will often not inform us well, and our intuition may fail, since complex dynamical systems tend to behave in unexpected or even counter-intuitive ways. Such systems are typically made up of many interacting components, which respond to the behavior of other system components. As a consequence of these interactions, complex dynamical systems tend to self-organize, i.e. to develop a collective behavior that is different from what the components would do in separation. Then, the components' individual properties are often no longer characteristic of the system. "Chaotic" or "turbulent" dynamics are possible outcomes, but complex systems can show many other phenomena.

When self-organization occurs, one often speaks of emergent phenomena that are characterized by new system properties, which cannot be understood from the properties of the single components. For example, the facts that water feels wet, extinguishes fires, and freezes at a particular temperature are properties, which cannot be understood from the properties of single water molecules.

As a consequence of the above, we have to shift our attention from the components of our world to their interactions. In other words, we need a change from a component-oriented to an interaction-oriented, systemic view, which is at the heart of complexity science. I claim that this change in perspective, once it becomes common wisdom, will be of similar importance as the transition from the geocentric to the heliocentric worldview. The related paradigm shift has fundamental implications for the way in which complex techno-socio-economic systems must be managed and, hence, also for politics and our economy. Focusing on the interactions in a system and the multi-level emergent dynamics resulting from them, opens up fundamentally new solutions to long-standing problems.

Instability is one possible behavior of complex dynamical systems, which results when the characteristic system parameters cross certain critical thresholds. If a system behaves unstably, i.e. if perturbations are amplified, a random, small deviation from the normal system state may trigger a domino effect that cannot be stopped, even if people have the best intentions to do so and have enough information, good technology, and proper training. In such situations of systemic instability, the system will inevitably get out of control sooner or later, no matter how hard we try to avoid this. As a consequence, we need to know the conditions under which systems will behave in an unstable way, in order to avoid such conditions. In many cases, overly strong interactions are a recipe for disaster or other undesirable outcomes.

Group dynamics and mass psychology may be seen as typical examples of collective dynamics. People have often wondered what makes a crowd turn "mad", violent, or cruel. After the London riots in the year 2011, people asked how it was possible that teachers and daughters of millionaires – people one would not expect to be criminals – were participating in the lootings. Did they become criminal minds when their demonstrations against police violence suddenly turned into riots? Possibly, but not necessarily so. In the above traffic flow example, people wanted to do one thing: drive continuously at reasonably high speed, but a phantom traffic jam occurred instead. We found that, while individual cars are well controllable, the traffic flow – a result of the interactions of many cars – is often not. The take-home message may be formulated as follows: complex systems cannot be steered like a car. Even if everyone has the latest technology, is well-informed and well-trained, and has the best intentions, an unstable complex system will sooner or later get out of control.

Therefore, while our intuition works well for weakly coupled systems, in which the system properties can be understood as the sum of the component properties, complex dynamical systems often behave in counter-intuitive, hardly predictable ways. Frequently, the collective, macro-level outcome in a complex system can neither be understood from nor controlled by the system components. (Such system components might also be individuals or companies, for example.)

Beware of strongly coupled systems


Thus, what tends to be different in strongly coupled systems as compared to weakly interacting ones? First, the dynamics of strongly connected systems with positive feedbacks is often faster. Second, self-organization and strong correlations tend to dominate the dynamics of the system. Third, the system behavior is often counter-intuitive – unwanted feedback or side effects are common. Conventional wisdom tends to fail. In particular, extreme events occur more often than expected, and they may impact the entire system. Furthermore, the system behavior can be hard to predict, and planning for the future may not be useful. Opportunities for external control are also typically quite limited, as the system-immanent interactions tend to dominate. Finally, the loss of predictability and control may lead to an erosion of trust in private and public institutions, which in turn can create social, political, or economic instabilities.

In spite of all this, many people still have a component-oriented and individual-centric view, which can be quite misleading. We praise heroes when things run well and search for scapegoats when something goes wrong. But the discussion above has shown how difficult it is for individuals to control the outcome of a complex dynamical system, if the interactions between its components are strong. This fact may be illustrated by the example of politics. Why do politicians, besides managers, have among the worst reputations of all professions? This is probably because we elect them to make policy according to the positions they publicly voice, but then often find them doing something else. This, again, is a consequence of the fact that politicians are exposed to many strong interactions through lobbyists and pressure groups with various points of view. Each one is trying to push the politician in a different direction. In many cases, this will force the politician to take a decision that is not compatible with his or her own point of view, which is hard for voters to accept. Managers of companies find themselves in similar situations. And not only they: think of the decision dynamics in many families. If they were easy to control, we would not see so many divorces...

Crime is another good example of unwanted outcomes of complex dynamics, even though a controversial one. We must ask ourselves: Are we interested in sustaining social order, or are we interested in filling prisons? If we decide for the first option, we must confront the question: Should we really see all crime as the deeds of criminal minds, as we often do? Or should we pay more attention to the circumstances that happen to cause crime? In cases where individuals plan crimes such as the theft of a famous diamond, the conventional picture of crime is certainly appropriate. But do these cases give a representative picture?

Classically, it is assumed that crimes are committed if the expected gain is larger than the punishment multiplied by the probability of being convicted. Therefore, raising punishments and discovery rates should theoretically eliminate all crime, as such punishment would make crime a losing proposition and, therefore, "unattractive". However, empirical evidence questions this simple picture. On the one hand, people usually don't pick pockets, even though they could often get away without punishment. On the other hand, deterrence strategies are surprisingly ineffective in most countries, and high crime rates are often recurrent. For example, even though the USA has 10 times more prisoners than most European countries, rates of various crimes, including homicides, are still much higher. So, what is wrong with our common understanding of crime?
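The classical deterrence calculus fits in one line; the numbers below are invented purely to illustrate the logic and its empirical failure:

```python
def crime_pays(gain, punishment, p_conviction):
    """Classical rational-choice model: a crime is 'worth it' if the expected
    gain exceeds the expected punishment."""
    return gain > punishment * p_conviction

# A pickpocket expecting a 50-unit gain, facing a 1000-unit punishment
# that is enforced in only 2% of cases:
print(crime_pays(50, 1000, 0.02))  # True -- yet most people still don't pick pockets
```

The model predicts that petty theft should be rampant and that harsher punishment (or better detection) should wipe crime out; as the paragraph above notes, neither prediction matches the evidence.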

Surprisingly, many crimes, including murders, are committed by average people, not by people with criminal careers. A closer inspection shows that many crimes result from situations, over which the involved individuals lose their control. Frequently, group dynamics plays an important role, and many scientific studies indicate that the socio-economic context is a strong determining factor of crime. Therefore, in order to counter crime, it might be more effective to change these socio-economic conditions rather than sending more people to jail. I am saying this also with an eye on the price we have to pay for this: A single prisoner costs more than the salary of a postdoctoral researcher with a PhD degree, some even more than a professor!

Cascade effects in complex networks


Making things worse, complex systems may show further problems besides dynamic instabilities based on amplification effects. Thanks to globalization and technological progress, we now have a global exchange of people, goods, money, and information. Worldwide trade, air traffic, the Internet, mobile phones, and social media have made everything much more comfortable – and connected. This has created many new opportunities, but everything now depends on a lot more things. What are the implications of this increased interdependency? Today, a single tweet can send stock markets to hell. A YouTube video can trigger a riot that kills dozens of people. Our decisions can have impacts on the other side of the globe more easily than ever – and sometimes unintentionally so. For example, today's quick spreading of emerging epidemics is largely a result of global air traffic, and can seriously affect global health, social welfare, and economic systems.

By networking our world, have we inadvertently built highways for disaster spreading? In recent years, three major cascading failures occurred, which are changing the face of the world and the global balance of power: the financial crisis, the Arab spring, and the combined earthquake, tsunami and nuclear disaster in Japan in 2011. In the following, I will discuss some examples of cascade effects in more detail.

Large-scale power blackouts


On November 4, 2006, a power line crossing the river Ems in Germany was temporarily switched off to let a Norwegian ship pass. Within minutes, this caused blackouts in many regions all over Europe – from Germany to Portugal! Nobody expected this. Before the line was switched off, of course, a computer simulation was performed to verify that the power grid would still operate well. But the scenario analysis did not check for the coincidence of a spontaneous failure of another line. In the end, a local overload of the grid caused emergency switch-offs in the neighborhood, creating a cascade effect with pretty astonishing outcomes: some blackouts occurred in regions thousands of kilometers away, while other areas in the neighborhood were not affected at all. Is it possible to understand this strange behavior?

Indeed, a computer-based simulation study of the European power grid recently managed to reproduce such effects. It demonstrated that the failure of a few network nodes in Spain could create a surprising blackout in Eastern Europe, several thousand kilometers away, while the electricity network in Spain would still work - see visualisation

Furthermore, increasing the capacities of certain parts of the power grid would unexpectedly make things worse. The cascading failure would be even bigger! Therefore, weak elements in the system have an important function: they act as circuit breakers, thereby interrupting the failure cascade. This is an important fact to remember.
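A toy load-redistribution model (not a real power-flow calculation; all numbers are illustrative) shows how the size of the capacity margin decides between a local incident and a system-wide blackout:

```python
def cascade(loads, capacities):
    """Overloaded nodes fail; a failed node's load is shared equally
    among the surviving nodes, possibly overloading them in turn."""
    loads = list(loads)
    failed = set()
    changed = True
    while changed:
        changed = False
        for i, (load, cap) in enumerate(zip(loads, capacities)):
            if i not in failed and load > cap:
                failed.add(i)
                survivors = [j for j in range(len(loads)) if j not in failed]
                for j in survivors:
                    loads[j] += load / len(survivors)
                changed = True
    return failed

trigger = [2.0] + [1.0] * 9               # one initially overloaded element
print(len(cascade(trigger, [1.3] * 10)))  # 1  -- enough margin: failure stays local
print(len(cascade(trigger, [1.2] * 10)))  # 10 -- slightly less margin: total blackout
```

A difference of less than ten percent in spare capacity separates a single failed element from the loss of the entire grid, which is why such systems can surprise even careful operators.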

Bankruptcy cascades


The sudden financial meltdown in 2008 is another example, which hit many companies and people by surprise. In a presidential address to the American Economic Association in 2003, Robert Lucas said:
"[The] central problem of depression-prevention has been solved."
Similarly, Ben Bernanke, as chairman of the Federal Reserve Board, long believed that the economy was well understood and doing well. In September 2007, Frederic Mishkin, a professor at Columbia Business School and then a member of the Board of Governors of the US Federal Reserve System, made a statement reflecting widespread beliefs at the time:
"Fortunately, the overall financial system appears to be in good health, and the U.S. banking system is well positioned to withstand stressful market conditions."

As we all know, things turned out very differently. A banking crisis occurred only shortly afterwards. It started locally, with the bursting of a real estate bubble in the West of the USA. Because of this locality, most people thought the problem would be easy to contain. But the mortgage crisis had spill-over effects on the stock markets, where certain financial derivatives could not be sold anymore (now called "toxic assets"). Eventually, more than 400 banks all over the United States went bankrupt. How could this happen? The video presents an impressive visualisation of the bankruptcies of banks in the USA after Lehman Brothers collapsed. Apparently, one bank's default triggered further ones, and these triggered even more. In the end, hundreds of billions of dollars were lost.
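The mechanism can be captured with a minimal default-contagion sketch. The banks, equity buffers, and interbank exposures below are all made-up numbers chosen to show one default rippling through a chain:

```python
def default_cascade(equity, exposure, initially_failed):
    """exposure[i][j]: loss bank i suffers if bank j defaults.
    A bank defaults once its accumulated losses exceed its equity buffer."""
    n = len(equity)
    failed = set(initially_failed)
    losses = [0.0] * n
    frontier = list(failed)
    while frontier:
        j = frontier.pop()
        for i in range(n):
            if i not in failed:
                losses[i] += exposure[i][j]
                if losses[i] > equity[i]:
                    failed.add(i)
                    frontier.append(i)
    return failed

equity = [1.0, 1.0, 1.0, 1.0]
exposure = [[0, 0, 0, 0],
            [1.5, 0, 0, 0],   # bank 1 is heavily exposed to bank 0
            [0, 1.5, 0, 0],   # bank 2 to bank 1
            [0, 0, 0.5, 0]]   # bank 3 only modestly exposed to bank 2
print(sorted(default_cascade(equity, exposure, {0})))  # [0, 1, 2] -- bank 3 survives
```

Banks 1 and 2 are individually sound, yet both are wiped out by a default two steps away, while bank 3's smaller exposure acts like a circuit breaker.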

The above video reminds me of another video, which I often use to illustrate cascade effects: it shows an experiment with many table tennis balls placed on top of mousetraps. The experiment demonstrates impressively that a single local perturbation can mess up the entire system. It illustrates chain reactions, which are the basis of atomic bombs and of nuclear fission reactors. As we know, such cascade effects are technologically controllable in principle, if we stay below the critical interaction strength (sometimes called the "critical mass"). Nevertheless, these processes can sometimes get out of control, mostly in unexpected ways. The nuclear disasters in Chernobyl and Fukushima are well-known examples of this. So, we must be extremely careful with systems showing cascade effects.

The financial crisis


As we know, the above-mentioned cascading failure of banks was just the beginning of an even bigger crisis. It subsequently caused an economic crisis and a public spending crisis in major areas of the world. Eventually, the events even threatened the stability of the Euro currency and the European Union. The crisis brought several countries (including Greece, Ireland, Portugal, Spain, Italy and the US) to the verge of bankruptcy. As a consequence, many countries have seen historic highs in unemployment rates. In some countries, more than 50 percent of young people do not have a job. In many regions, this has caused social unrest, political extremism, and increased crime and violence. Unfortunately, it seems that the cascade effect has not been stopped yet. There is a long way to go until we fully recover from the financial crisis and from the public and private debts accumulated in the past years. If we can't overcome this problem soon, it even has the potential to endanger peace, democratic principles and cultural values, as I pointed out in a letter to George Soros in 2010. Looking at the situation in Ukraine, we are perhaps seeing this scenario already.

While all of this is now plausible in hindsight, the lack of advance understanding by conventional wisdom becomes clear from the following quote from November 2010, going back to the former president of the European Central Bank, Jean-Claude Trichet:

"When the crisis came, the serious limitations of existing economic and financial models immediately became apparent. Arbitrage broke down in many market segments, as markets froze and market participants were gripped by panic. Macro models failed to predict the crisis and seemed incapable of explaining what was happening to the economy in a convincing manner. As a policy-maker during the crisis, I found the available models of limited help. In fact, I would go further: in the face of the crisis, we felt abandoned by conventional tools." Similarly, Ben Bernanke summarized in May 2010: “The brief market plunge was just an example of how complex and chaotic, in a formal sense, these systems have become… What happened in the stock market is just a little example of how things can cascade, or how technology can interact with market panic.”

Leading scientists, too, had problems making sense of the crisis. In a letter dated 22 July 2009 to the Queen of England, the British Academy came to the conclusion:

"When Your Majesty visited the London School of Economics last November, you quite rightly asked: why had nobody noticed that the credit crunch was on its way? ... So where was the problem? Everyone seemed to be doing their own job properly on its own merit. And according to standard measures of success, they were often doing it well. The failure was to see how collectively this added up to a series of interconnected imbalances over which no single authority had jurisdiction. ... Individual risks may rightly have been viewed as small, but the risk to the system as a whole was vast. ... So in summary ... the failure to foresee the timing, extent and severity of the crisis … was principally the failure of the collective imagination of many bright people to understand the risks to the systems as a whole."

Thus, nobody was responsible for the financial mess? I don't want to judge, but we should remember that it's often not possible to point the finger at the exact person who caused a phantom traffic jam. Therefore, given that these are collectively produced outcomes, do we have to accept collective responsibility for them? And how can we determine everyone's share of responsibility? This is certainly an important question worth thinking about.

It is also interesting to ask whether complexity science could have forecast the financial crisis. In fact, before the crash, I followed the stock markets pretty closely, as I noticed strong price fluctuations, which I interpreted as "critical fluctuations," i.e. an advance warning signal of an impending financial crash. Therefore, I sold my stocks in the business lounge of an airport in 2007, while waiting for the departure of my airplane. In spring 2008, about half a year before the collapse of Lehman Brothers, I wrote an article together with Markus Christen and James Breiding, taking a complexity science perspective on the financial system. We came to the conclusion that the financial system was in a process of destabilization. Much as Andrew Haldane, Chief Economist and Executive Director at the Bank of England, formulated it later, we believed that the increased level of complexity in the financial system was a major problem. It made the financial system more vulnerable to cascade effects than most experts thought. In spring 2008, we were so worried about this that we felt we had to alert the public, but none of the newspapers we contacted was willing to publish our essay at that time. "Too complicated for our readers" was the response, to which we replied: "if you cannot make this understandable to your readers, then there is nothing that can prevent the financial crisis." And so the financial crisis came! Six months after the crisis, a manager of McKinsey in the United Kingdom commented on our analysis that it was the best he had ever seen.

But there were much more prominent people who saw the financial crisis coming. For example, the legendary investor Warren Buffett warned of mega-catastrophic risks created by large-scale investments in financial derivatives. Back in 2002, he wrote:

"Many people argue that derivatives reduce systemic problems, in that participants who can't bear certain risks are able to transfer them to stronger hands. These people believe that derivatives act to stabilize the economy, facilitate trade, and eliminate bumps for individual participants. On a micro level, what they say is often true. I believe, however, that the macro picture is dangerous and getting more so. ... The derivatives genie is now well out of the bottle, and these instruments will almost certainly multiply in variety and number until some event makes their toxicity clear. Central banks and governments have so far found no effective way to control, or even monitor, the risks posed by these contracts. In my view, derivatives are financial weapons of mass destruction, carrying dangers that, while now latent, are potentially lethal."
As we know, it still took five years until the "investment time bomb" exploded, causing losses of trillions of dollars to our economy.

Fundamental uncertainty


In liquid financial markets and in many other hard-to-predict systems such as the weather, we can still determine the probability of events, at least approximately. Thus, we can make a probabilistic forecast of the kind: "there is a 5 percent chance of losing more than half of my money when selling my stocks in 6 months, but a 70 percent chance that I will make a good profit." It is then possible to determine the expected loss (or gain) implied by the likely actions and events. For this purpose, the damage or gain of each possible event is multiplied by its probability, and the numbers are added up to give the expected damage or gain. In principle, one could do this for all actions we might take, in order to determine the one that minimizes the damage or maximizes the gain. The only problem in this exercise seems to be the practical determination of the probabilities and of the likely damages or gains involved. With the increasing availability of data, this problem might, in fact, be attacked, but it will remain difficult or impossible to determine the probabilities of "extreme events," as the empirical basis for rare events is too small.
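As an illustration of this bookkeeping, here is a minimal sketch in Python; the outcomes and probabilities are invented numbers for illustration, not market data:

```python
# Expected loss/gain as probability-weighted damage. The outcomes and
# probabilities below are invented for illustration, not market data.
outcomes = [
    (-50_000, 0.05),  # lose more than half of the invested money
    (-10_000, 0.15),  # moderate loss
    (+20_000, 0.70),  # good profit
    (0,       0.10),  # roughly break even
]

# Sanity check: the probabilities of all possible outcomes sum to 1.
assert abs(sum(p for _, p in outcomes) - 1.0) < 1e-9

expected_value = sum(damage * prob for damage, prob in outcomes)
print(f"Expected gain/loss: {expected_value:+,.0f}")
# prints: Expected gain/loss: +10,000
```

The same loop, run over every action under consideration, would identify the action with the best expected outcome.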

It turns out, however, that there are problems where the expected damage in large (global) systems cannot be determined at all, for reasons of principle. Such "fundamental" or "radical" uncertainty can occur in the case of cascade effects, where one failure is likely to trigger further failures, and where the damage of subsequent events, multiplied by their likelihood, keeps increasing. In such cases, the sum of expected losses may be unbounded, i.e. it may no longer be possible to quantify the expected loss. In practice, this means that the actual damage can be small, big, or practically unbounded, where the latter might lead to the collapse of the entire system.
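A toy calculation with made-up parameters of my own shows how such a cascade defeats the bookkeeping above: if each further stage of a cascade is reached with probability 0.6 but doubles the damage, the probability-weighted loss per stage grows like (0.6 × 2)^k = 1.2^k, so the expected loss diverges with the number of stages considered:

```python
# Toy cascade with invented parameters: each further stage is reached
# with probability p_next and multiplies the damage by `growth`.
p_next, growth, base_damage = 0.6, 2.0, 1.0

def expected_loss(stages):
    # Stage k occurs with probability p_next**k and causes damage
    # base_damage * growth**k; each term grows like (p_next*growth)**k = 1.2**k,
    # so the sum is unbounded as the number of stages increases.
    return sum(base_damage * growth**k * p_next**k for k in range(stages))

for n in (10, 20, 40):
    print(n, round(expected_loss(n), 1))
```

Whenever the probability-weighted damage per stage does not shrink, no finite expected loss exists, however many stages one sums over.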

Explosive pandemic outbreaks


The threat posed by cascade effects may be even worse if the damage occurring in an early phase of the cascade reduces the probability of resisting failures that are triggered later. A health system in which financial or medical resources are limited is an example of this. How will such a system deal with emergent diseases? A computer-based study that I performed together with Lucas Böttcher, Nuno Araujo, Olivia Woolley Meza and Hans J. Herrmann shows that the outcome very much depends on the connectivity between people who may infect each other. A few additional airline connections can make the difference between a case in which the disease is contained and one in which it turns into a devastating global pandemic. The problem is that crossing a certain connectivity threshold changes the system dynamics dramatically and unexpectedly. Thus, have we built global networks that behave in unpredictable and uncontrollable ways?
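The threshold effect can be illustrated with a simple toy contagion model on a random graph (my own sketch, not the simulation of the study mentioned above): when the average degree times the transmission probability is below one, outbreaks stay small; above that threshold, a modest increase in connectivity produces system-wide outbreaks:

```python
import random

def outbreak_fraction(n, avg_degree, p_transmit, trials=20, seed=1):
    """Mean final outbreak size (as a fraction of the population) of a
    simple contagion on an Erdos-Renyi random graph. A toy model of my
    own, not the simulation from the study mentioned in the text."""
    rng = random.Random(seed)
    p_edge = avg_degree / (n - 1)
    sizes = []
    for _ in range(trials):
        # Build a random contact network.
        adj = [[] for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                if rng.random() < p_edge:
                    adj[i].append(j)
                    adj[j].append(i)
        # Spread from a single random seed node; each contact of an
        # infected node gets infected with probability p_transmit.
        infected = {rng.randrange(n)}
        frontier = list(infected)
        while frontier:
            new = []
            for i in frontier:
                for j in adj[i]:
                    if j not in infected and rng.random() < p_transmit:
                        infected.add(j)
                        new.append(j)
            frontier = new
        sizes.append(len(infected) / n)
    return sum(sizes) / trials

# Average degree times transmission probability below vs above 1:
print(outbreak_fraction(200, 2.0, 0.3))  # subcritical: outbreaks stay tiny
print(outbreak_fraction(200, 8.0, 0.3))  # supercritical: large outbreaks
```

The qualitative jump between the two printed values is the point: a system parameter crosses a critical value, and the global outcome changes from containment to pandemic.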

Systemic interdependencies


Recently, Shlomo Havlin and others made a further important discovery: they revealed that networks of networks can be particularly vulnerable to failures. A typical example is the interdependency between electrical and communication networks. Another example, which illustrates the global interdependencies between natural, energy, climate, financial, and political systems, is the following: In 2011, the Tohoku earthquake in Japan caused a tsunami that triggered chain reactions and nuclear disasters in several reactors at Fukushima. Soon after this, Germany and Switzerland decided to exit nuclear power generation over the next decade(s). However, alternative energy scenarios turn out to be problematic as well. European gas deliveries depend on a few regions, which we cannot fully rely on. Likewise, Europe's DESERTEC project, a planned 1000 billion Euro investment in infrastructure to supply solar energy to Europe, has an uncertain future due to another unexpected event, the Arab Spring. This was triggered by high food prices, which were no longer affordable to many people. These high food prices, in turn, resulted partly from biofuel production, which was intended to improve the global CO2 balance, but competed with food production. The increasing food prices were further amplified by financial speculation. Hence, the energy system, the political system, the social system, the food system, the financial system: they have all become closely interdependent systems, which makes our world ever more vulnerable to perturbations.

Have humans unintentionally created a "complexity time bomb"?


We have seen that, when systems are too strongly connected, they may get out of control sooner or later, despite advanced knowledge and technology, and the best intentions to keep things under control. Therefore, as we have created more and more links and interdependencies in the world, we must ask ourselves: have humans inadvertently produced a "complexity time bomb", a system that will ultimately get out of control?

For a long time, problems such as crowd disasters and financial crashes have been seen as puzzling, 'God-given' phenomena or "black swans" one had to live with. However, problems like these should not be considered "bad luck." They are often the consequence of a flawed understanding of counter-intuitive system behaviors. While conventional thinking can cause fateful decisions and the repetition of previous mistakes, complexity science allows us to understand the mechanisms that cause complex systems to get out of control. Amplification effects can arise and promote failure cascades when the interactions of system components become stronger than the frictional effects, or when the damaging impact of impaired system components on other components occurs faster than their recovery to the normal state. That is, the time scales of processes largely determine the controllability of a system. Delayed adaptation processes are often responsible for systemic instabilities and losses of control (see the related Information Box at the end).

For certain kinds of networks, the similarity between such cascade effects and chain reactions in nuclear fission is quite disturbing. Such processes are difficult to control. Catastrophic damage is a realistic scenario. Given the similarity of the cascading mechanisms, is it possible that our worldwide anthropogenic system will get out of control sooner or later? When analyzing this possibility, one must bear in mind that the speed of destructive cascade effects may be slow, so that the process does not look like an explosion. Nevertheless, the process may be hard to stop and may lead to an ultimate systemic failure. For example, the dynamics underlying crowd disasters is slow, but deadly. So, what kinds of global catastrophic scenarios might we face in complex societies? A collapse of the global information and communication systems or of the world economy? Global pandemics? Unsustainable growth, demographic or environmental change? A global food or energy crisis? A cultural clash? Another global-scale war? A societal shift, driven by technological innovations? Or, more likely, a combination of several of these contagious phenomena? The World Economic Forum calls this the "perfect storm," and the OECD has formulated similar concerns.

Unintended wars and revolutions


Last but not least, it is important to realize that large-scale conflicts, revolutions, and wars can also be unintended results of systemic instabilities and interdependencies. Interpreting them as the deeds of historical figures personalizes these phenomena in a way that distracts from their true, systemic nature. It is important to recognize that complex systems such as our economy or our societies usually resist attempts to change them at large, particularly when they are close to a stable equilibrium. This is also known as Goodhart's law (1975), Le Chatelier's principle (1850-1936), or the "illusion of control." Individual factors and randomness can only have a large impact on the path taken by a complex system when the system is driven to a tipping point (also known as a "critical point"). In other words, instability is a precondition for individuals to have a historical impact. For example, the historical sciences increasingly recognize that World War I was pretty much an unintended, emergent outcome of a chain reaction of events. Moreover, World War II was preceded by a financial crisis and recession, which destabilized the German economic, social, and political system. This finally made it possible for an individual to become influential enough to drive the world to the edge.

Unfortunately, civilization is vulnerable, and a large-scale war may happen again; I would even say it is likely. A typical unintended path towards war looks as follows: The resource situation deteriorates, for example because of a serious economic crisis. The resulting fierce competition for limited resources causes violence, crime, and corruption to rise, while solidarity and tolerance decline, so that society fragments into groups. This causes conflict, further dissatisfaction, and social turmoil. People get frustrated with the system, calling for leadership and order. Political extremism emerges, scapegoats are sought, and minorities are put under pressure. As a consequence, socio-economic diversity is lost, which further reduces the economic success of the system. Eventually, the well-balanced "socio-economic ecosystem" collapses, such that the resource situation (the apparent "carrying capacity") deteriorates further. This destabilizes the system even more, such that an external enemy is "needed" to re-stabilize the country. Finally, nationalism rises, and war may seem to be the only "solution" to keep the country together.

Note that a revolution, too, can be the result of systemic instability. Hence, it does not need to be initiated by an individual, "revolutionary" leader who challenges an established political system. The breakdown of the former German Democratic Republic (GDR) and some Arab Spring revolutions (for example, in Libya) have shown that revolutions may start even without a clearly identifiable political opponent leading them. On the one hand, this is the reason why such revolutions cannot be stopped by targeting a few individuals and sending them to jail. On the other hand, the absence of revolutionary leaders has puzzled secret services around the world; the Arab Spring took them by surprise. It was also irritating for sympathetic countries, which could not easily provide support for democratic civil movements. Whom should they have talked to or given money to?

It paints a better picture to imagine such revolutions as the result of situations in which the interests of government representatives and the people (or the interests of different societal groups) have drifted apart. Similar to the tensions created by the drift of the Earth's tectonic plates, this sooner or later leads to an unstable situation and an "earthquake-like" stress release (the "revolution"), resulting in a re-balancing of forces. Again, it is a systemic instability that eventually allows individuals or small groups to become influential, while the conventional picture suggests that the instability of a political regime is caused by a revolutionary leader. Put differently, a revolution usually isn't the result of the new political leaders, but of the policies made before, which destabilized the system. So we should ask ourselves: how well are our societies doing at balancing different interests, and at adapting to a world changing quickly due to demographic, environmental, and technological change?

Conclusion


It is obvious that there are many problems ahead of us. Most of them result from the complexity of the systems humans have created. But how can we master all these problems? Is it a lost battle against complexity? Or do we have to pursue a new, entirely different strategy? Do we perhaps even need to change our way of thinking? And how can we generate the innovations needed, before it's too late? The next chapters will let you know...


Information Box: How harmless behavior can turn critical

In the traffic flow example and for the case of crowd disasters, we have seen that a system can get out of control when the interaction strength (e.g. the density) is too large.

How a change in density can turn harmless behavior of system components uncontrollable is illustrated by the following example: Together with Roman Mani, Lucas Böttcher, and Hans J. Herrmann, I studied collisions in a system of equally sized particles moving in one dimension, similar to Newton's Cradle (see video). We assumed that the particles tended to oscillate elastically around equally spaced equilibrium points, while being exposed to random forces generated by the environment. If the distance between the equilibrium points of neighboring particles was large enough, each particle oscillated around its equilibrium point with normally distributed speeds, and all particles had the same small variance in speeds.

However, as the separation of equilibrium points approached the particle diameter, we found a cascade-like transmission of momentum between particles (see video). Surprisingly, the variance of speeds increased rapidly towards the boundary particles. In energy-conserving systems, the speed variance of the outer particles would even tend towards infinity with increasing system size. Due to cascading particle interactions, this makes their speeds unpredictable and uncontrollable, even though every particle follows a simple and harmless dynamics.
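The setup can be sketched in code. The following is a strongly simplified toy re-implementation of my own (with added damping, not the original study's code); it illustrates the ingredients (springs around equilibrium points, random kicks, velocity-swapping elastic collisions) rather than reproducing the published results:

```python
import math
import random

def speed_variances(n=20, spacing=2.0, diameter=1.0, steps=10000,
                    dt=0.01, seed=0):
    """Toy sketch of the setup described above (a simplified
    re-implementation of my own, not the original study's code):
    unit-mass particles oscillate around equally spaced equilibrium
    points, feel small random kicks and weak damping, and collide
    elastically (equal masses simply swap velocities on contact).
    Returns the mean squared speed of each particle."""
    rng = random.Random(seed)
    k, gamma, noise = 1.0, 0.1, 0.2
    x = [i * spacing for i in range(n)]   # positions
    x0 = list(x)                          # equilibrium points
    v = [0.0] * n
    sq = [0.0] * n                        # accumulated squared speeds
    for _ in range(steps):
        for i in range(n):
            force = (-k * (x[i] - x0[i]) - gamma * v[i]
                     + noise * rng.gauss(0.0, 1.0) / math.sqrt(dt))
            v[i] += force * dt
            x[i] += v[i] * dt
        # Elastic collision of approaching, touching neighbours:
        # equal masses exchange velocities.
        for i in range(n - 1):
            if x[i + 1] - x[i] < diameter and v[i] > v[i + 1]:
                v[i], v[i + 1] = v[i + 1], v[i]
        for i in range(n):
            sq[i] += v[i] * v[i]
    return [s / steps for s in sq]

# Wide spacing: particles rarely touch. Spacing near the diameter:
# momentum cascades through the chain via collisions.
print(speed_variances(spacing=2.0)[:3])
print(speed_variances(spacing=1.02)[:3])
```

Whether this toy reproduces the boundary effect quantitatively depends on the chosen parameters; it is meant only to make the mechanism of cascading momentum transfer concrete.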

Information Box: Loss of Synchronization

There is another puzzling kind of systemic instability that is highly relevant for our societies, as many socio-economic processes accelerate. It occurs when the separation of time scales gets lost. For example, hierarchical systems in physics and biology are characterized by the fact that adjustment processes on higher hierarchical levels are typically much slower than on lower hierarchical levels. Therefore, lower-level variables adjust quickly to the constraints set by the higher-level ones, and that is why the higher levels basically control the lower ones. For example, groups tend to take decisions more slowly than the individuals forming them, and the organizations and states made up of them change even more slowly (at least this has been the case in the past).
Time scale separation implies that the system dynamics is determined by only a few variables, which are typically related to the higher hierarchy levels. Monarchies and oligarchies are good examples of this. In current socio-political and economic systems, however, we observe the trend that higher hierarchical levels show accelerating speeds of adjustment, such that the lower levels can no longer adjust more quickly than the higher levels. This may eventually destroy time scale separation, such that many more variables start to influence the system dynamics. The result of such mutual adjustment attempts on different hierarchical levels could be turbulence, "chaos," or a breakdown of synchronization. In fact, systems often get out of control if the adjustment processes are not quick enough and responses to changed conditions are delayed.
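The destabilizing effect of delayed adjustment can be illustrated with a classic toy model from control theory (my own illustration, not from the text above): a single variable steered towards a target, but reacting to its own state only with a time delay. For dx/dt = -k·x(t-τ), the system is stable when k·τ < π/2 and develops growing oscillations beyond that threshold:

```python
def delayed_response(k, delay, dt=0.01, t_max=60.0):
    """Euler integration of the delay-differential equation
    dx/dt = -k * x(t - delay), starting from x = 1.
    Returns the peak |x| over the final delay window."""
    steps = int(t_max / dt)
    lag = int(delay / dt)
    x = [1.0] * (lag + 1)              # constant initial history
    for _ in range(steps):
        x.append(x[-1] + dt * (-k * x[-1 - lag]))
    return max(abs(v) for v in x[-(lag + 1):])

# Fast feedback (k*delay = 0.5 < pi/2): the deviation dies out.
print(delayed_response(k=1.0, delay=0.5))
# Slow, delayed feedback (k*delay = 3.0 > pi/2): overshooting
# corrections arrive too late and the oscillations grow.
print(delayed_response(k=1.0, delay=3.0))
```

The control effort k is identical in both runs; only the reaction delay differs. This mirrors the point above: when responses to changed conditions are delayed relative to the system's own dynamics, control is lost.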