Friday, 12 December 2014

NETWORKED MINDS: Where human evolution is heading

by Dirk Helbing  [1]
Having studied the technological and social forces shaping our societies, we are now turning to the evolutionary forces. Among the millions of species on earth, humans are truly unique. 
What is the recipe of our success? What makes us special? How do we decide? How will we further evolve? What will our role be, when algorithms, computers, machines, and robots are getting ever more powerful? How will our societies change?

In fact, humans are curious by nature – we are a social, information-driven species. And that is why the explosion of data volumes and processing capacities will transform our societies more fundamentally than any other technology has done in the past.

We continue FuturICT’s essays and discussion on Big Data, the ongoing Digital Revolution and the emergent Participatory Market Society, written since 2008 in response to the financial and other crises. If we want to master the challenges, we must analyze the underlying problems and change the way we manage our techno-socio-economic systems. Last week we discussed: SOCIAL FORCES: Revealing the causes of success or disaster.


Philosophers and technology gurus are becoming increasingly worried about our future. What will happen if computer power and artificial intelligence (AI) progress so far that humans can no longer keep up? While a century ago some companies maintained departments of hundreds of people to perform calculations for business applications, for decades a simple calculator has been able to do mathematical operations quicker and more accurately than humans. Computers now beat the best chess players, the best backgammon players, the best scrabble players, and players in many other strategic games. Computer algorithms already perform about 70% of all financial trades, and they will soon drive cars better than humans. 

Will we have artificial super-intelligences or super-humans?


Elon Musk, the CEO of Tesla Motors, recently surprised his followers with a tweet saying that artificial intelligence could "potentially be more dangerous than nukes." In a comment on "The Myth of AI," he wrote:[2]

"The pace of progress in artificial intelligence (I'm not referring to narrow AI) is incredibly fast – it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most. Please note that I am normally super pro technology, and have never raised this issue until recent months. This is not a case of crying wolf about something I don't understand.

I am not alone in thinking we should be worried. The leading AI companies have taken great steps to ensure safety. They recognize the danger, but believe that they can shape and control the digital super-intelligences and prevent bad ones from escaping into the Internet. That remains to be seen..." 
So, what will be the future of humans? Will we be enslaved by super-intelligent robots, or will we have to upgrade ourselves to become super-humans? Will we be technologically enhanced humans, so-called cyborgs? While all of this sounds like science fiction, given the current stage of technological development such scenarios can't be fully excluded. In general, it's pretty safe to say that everything that can happen is actually likely to happen sooner or later.[3] However, in the following, I would like to point out another scenario, which I believe is of much greater importance: a scenario of collective intelligence, enabled by the emergence of shared information flows.
It's certainly true that digital devices and information systems are increasingly changing human behaviour and interactions. Just observe how many people are staring at their smartphones while walking in town or even when hanging out with their friends. So, if we want to understand better how the digital revolution might change our society, we must identify the various factors that influence our decision-making. In particular, we need to find out how growing amounts of information and the increased interconnectedness of people may change our behaviour. 

One of the best-known models of human decision-making so far is that of the “homo economicus.” It is based on the assumption of perfect egoists, i.e. selfish, rational, utility-maximizing individuals and firms, where the "utility function" is imagined to represent payoffs (i.e. earnings) or stable individual preferences. Related to this, any behaviour deviating from such selfishness is believed to create disadvantages. It is straightforward to conclude that humans or companies who aren't selfish ultimately lose the evolutionary race with selfish ones. So, natural selection should eliminate other-regarding behaviour as a consequence of the principle of the "survival of the fittest." So we should all act selfishly and optimize our payoff.

The hidden drivers of our behaviour


Surprisingly, empirical evidence is not well compatible with this perspective (see Information Box 1). Therefore, I am offering here a novel, multi-dimensional perspective on human decision-making: I claim that self-regarding rational choice is just one of several modes of decision-making people are capable of, and that human decisions are often driven by other factors. Specifically, I argue that people are driven by a number of different incentive systems, and that this number has increased over the course of human evolution. 

The so-called neocortex is typically considered to be responsible for rational decision-making, and it is the most recently developed major brain area. Before it evolved, other brain areas (such as the cerebellum) were in control – and they may still take over from time to time... So, I claim that there are many other drivers that govern people's behaviours, too.

It is clear that, first of all, our body has to make sure that we take care of our survival, i.e. we look for water and food. For this, our body comes up with the feelings of hunger and thirst. If one hasn't had water or food for a long time, it will be pretty difficult to focus on mathematical calculations, strategic thinking, or maximizing a payoff function. 

A similar thing applies to sexual desires. There is obviously a natural incentive to promote reproduction, and for many people long-term abstinence can lead to sexual fantasies occupying their thinking. Trying to find sexual satisfaction can be a very strong driver of human behaviour. This explains some pretty irritating behaviours of sexually deprived people, which are often dismissed as "irrational." 

Sex, drugs and rock 'n roll


Similar things can be said about the human desire to possess. Our distant ancestors were hunters and gatherers. Accumulating food and other belongings was important to survive difficult times, to enable trade, and to gain power. This desire to possess can, in some sense, be seen as the basis of capitalism. 

But besides the desire to possess things, some of us also like to experience adrenaline kicks. These were important to prepare our bodies for fights or for fleeing from predators and other dangers. Today, people watch crime series on TV or play shooter games to get the thrill. Like sexual satisfaction, the desire to possess and adrenaline kicks come along with emotions: greed and fear. Financial traders know this very well.

Hunger for information


Intellectual curiosity is a further driver of our behaviour that comes into play primarily when the previously mentioned needs are sufficiently satisfied. Curiosity serves to explore our environment and to reveal its success principles. By understanding how our world works, we can manipulate it better to our advantage. A trade-off between exploration and exploitation is part of all long-term reward-maximizing algorithms. Individuals who rely only on known sources of rewards are quickly outcompeted by those who explore and find richer sources to exploit. To make sure that we make sufficient efforts to study our environment, our brain rewards insights with bursts of hormones, for example dopamine. The effect of these hormones is excitement. In fact, as intellectuals and other people know, thinking can create great pleasure. 
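The exploration-exploitation trade-off mentioned above can be illustrated with a tiny simulation. The following sketch uses an epsilon-greedy strategy on a so-called multi-armed bandit; the reward probabilities and the exploration rate are illustrative assumptions, not taken from any study:

```python
import random

def epsilon_greedy(reward_probs, epsilon=0.1, rounds=10000, seed=42):
    """Epsilon-greedy bandit: explore a random arm with probability epsilon,
    otherwise exploit the arm with the best average reward so far."""
    rng = random.Random(seed)
    counts = [0] * len(reward_probs)
    totals = [0.0] * len(reward_probs)
    earned = 0.0
    for _ in range(rounds):
        if rng.random() < epsilon:
            arm = rng.randrange(len(reward_probs))        # explore
        else:
            averages = [t / c if c else 0.0 for t, c in zip(totals, counts)]
            arm = averages.index(max(averages))           # exploit
        reward = 1.0 if rng.random() < reward_probs[arm] else 0.0
        counts[arm] += 1
        totals[arm] += reward
        earned += reward
    return earned / rounds

# An agent that never explores (epsilon=0) tends to get stuck on the first
# rewarding option, while a little exploration discovers the richer source.
with_exploration = epsilon_greedy([0.2, 0.5, 0.8], epsilon=0.1)
without_exploration = epsilon_greedy([0.2, 0.5, 0.8], epsilon=0.0)
assert with_exploration > without_exploration
```

In this toy setting, the purely exploiting agent locks onto a mediocre reward source, mirroring the evolutionary argument above that pure exploiters are outcompeted by those who also explore.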

Lessons learned


In summary, our body has several different incentive and reward systems. Many of them are related to intrinsic hormonal, emotional, and nervous processes (the latter including the amygdala and the solar plexus). When these factors are neglected, I claim, human behaviour cannot be well understood. Hence, a realistic description of human decision-making must take into consideration knowledge from the sciences studying brain and body.

For example, why do many people spend so much time and energy on sports, to an extent that has little material or reproductive benefit? Why do people buy fast and expensive cars that do not match their stated preferences? Why do people race or fight, ride rollercoasters or go bungee jumping? It's the adrenaline kicks that can explain it! This is also the reason why the principle of "bread and games" is so effective in satisfying people. 

The above observations have important implications: humans cannot simply be understood as payoff maximizers, but rather as individuals who have evolved to maximize their success in many dimensions, which are often incompatible. They are driven by a number of different incentive and reward systems. In the evolutionary game of survival, reproduction, the spreading of ideas, and other things that matter, different strategies can co-exist. Thus the influence of each of these reward systems is likely to differ from one person to the next. This implies different preferences and personalities ("characters"). While some people are driven to possess as much as they can, others prefer to explore their intellectual cosmos, and again others prefer bodily activities such as sex or sports. If nothing grants satisfaction for a long time, the consequence might be to use drugs, get sick, or even die.

Suddenly, "irrational behaviour" makes sense


In other words, when going beyond the concept of self-regarding rational choice, it suddenly becomes clear why there are intellectuals, sportsmen, vamps, divas and other extremely specialized people. In such cases, one reward system dominates the others. For most people, however, all drives are important. But they just don't sum up to define a personal utility function that is stable in time. Instead, each drive is given priority for some time, while the others have to stand back. Once the prioritized drive has been satisfied, another desire is given priority, and so on. We may compare this a bit with the way different traffic flows are served at an intersection – one after another. Once a vehicle queue has been cleared, another one is prioritized by giving it a green light. Similarly, when one of our drives has been satisfied, we give priority to another one, until the first drive becomes strong again and demands our attention. 

We can also understand what happens if people are deprived, i.e. cannot satisfy one of their drives for one reason or another. In such cases, it makes sense that they try to get satisfaction from other kinds of activities, which is called compensation. Such a situation applies, for example, to people in poor economic conditions. 

If people are unable to experience intellectual pleasures (due to lack of education), to satisfy the desire to possess (through consumption), or to gain social recognition, adrenaline kicks will become relatively more important. Therefore, these people might engage more in violence, crime, or drug consumption, as they lack alternatives to find satisfaction. Such deprivation may also explain crime statistics or hooliganism in sports. Therefore, understanding human nature will enable entirely new cures for long-standing social problems, and it allows us all to benefit, too! 

Multi-billion dollar industries for each desire


It turns out that our societies have organized our whole lives around the various incentives driving human behaviour. In the morning, we have breakfast to satisfy our hunger and thirst. Then, we go to work to earn the money we want to spend on shopping, thereby satisfying our desire to possess. Afterwards, we may do sports to get our adrenaline kicks. To satisfy our social desires, we may meet friends or watch a soap opera. At the end of the day, we may read a book to stimulate our intellect and have sex to satisfy this desire, too. In conclusion, I dare to say that, most of the time, people's behaviours are not well described by strategic optimization of one utility function that is stable in time.[4] Therefore, the basis of our currently established decision theory is flawed. Nevertheless, our economy is surprisingly well fitted to human nature!

Interestingly, we have created multi-billion-dollar industries around each of our drives, but so far, most scientists haven't seen it this way. We have built a food industry, supermarkets, restaurants and bars to satisfy our hunger and thirst, shopping malls to satisfy our desire to possess, and stadia to get adrenaline kicks by watching our favourite sports team or by doing sports ourselves. We have a porn industry and perhaps prostitution to help satisfy sexual desires. And we read books, solve riddles, travel to cultural sites, or participate in interactive online games to stimulate our intellect and satisfy our curiosity. This is what our media and tourism industries are for.

Note, however, that there is a natural hierarchy of desires, and this explains the order in which these industries emerged. Each newly emerging industry also changes the character of our society: it gives more weight to desires that were previously in the background. So, what are the drives that will determine our future society? 

The currently fastest-growing economic sector is Information and Communication Technology. So, after all our other needs have been taken care of, we are now building a new industry to satisfy the desires of the "information-driven species" that we are. This trend will give everything related to information a much higher weight. In other words, the digital society to come will be much more determined by ideas, curiosity and creativity. But not only this...


Being social is rewarding, too


Humans are not only driven by the above-mentioned reward systems. We are also social beings, driven by social desires. In fact, most people have empathy (compassion) – they feel with others. Empathy is reflected in our emotions and expressed to others through facial expressions. It even seems that humans all over the world share a number of such expressions (anger, disgust, fear, happiness, sadness, and surprise). According to Paul Ekman (*1934), these expressions are surprisingly universal, i.e. independent of language and culture. However, our social desires go further than that. For example, we seek social recognition. 

I argue that the increasing networking of people, supported by social media such as Facebook, Twitter and WhatsApp, has the potential to fundamentally change our society and economy. Such social networking through information and communication systems can potentially stimulate our curiosity, strengthen our social desires, and enable collective intelligence, if the information systems are well designed. The main reason for this is that nature created us as social beings and "networked minds.” 

The evolution of "networked minds"


It is interesting to ask why we are social beings at all. Why do we have social desires? And how is this compatible with the previously mentioned principles of selfishness and survival of the fittest? To study this, we developed a computer simulation describing interactions of utility-maximizing individuals exposed to the merciless forces of evolution. Specifically, we simulated interactions of individuals facing a so-called "Prisoner's Dilemma" – a particular social dilemma situation in which it would be favourable for everyone to cooperate, but where non-cooperative behaviour is tempting and cooperative behaviour is risky. In Prisoner's Dilemma interactions, the selfish "homo economicus" would never cooperate, as non-cooperative behaviour creates more payoff. This, however, destabilizes cooperation and produces an outcome that is bad for everyone. Although nobody wants this, the desirable state of cooperation breaks down, much as free traffic flow breaks down on busy roads – each agent seeks small advantages that collectively make everyone worse off. The result is a "tragedy of the commons." In other words, the favourable outcome of cooperation does not occur by itself; instead, an undesirable outcome results. 
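The dilemma structure described above can be made concrete in a few lines of code. The payoff numbers below are the textbook values often used for the Prisoner's Dilemma; they are illustrative assumptions, not values from our simulations:

```python
# Standard Prisoner's Dilemma payoffs with T > R > P > S:
# T = temptation (defect against a co-operator), R = reward (mutual cooperation),
# P = punishment (mutual defection), S = sucker's payoff (cooperate, get exploited).
T, R, P, S = 5, 3, 1, 0

def payoff(my_move, other_move):
    """Return my payoff for one interaction: 'C' = cooperate, 'D' = defect."""
    table = {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}
    return table[(my_move, other_move)]

# Defection is the best response whatever the partner does...
assert payoff('D', 'C') > payoff('C', 'C')
assert payoff('D', 'D') > payoff('C', 'D')
# ...yet mutual defection leaves both players worse off than mutual cooperation.
assert payoff('D', 'D') < payoff('C', 'C')
```

The two inequalities in the middle are what makes defection "tempting" and cooperation "risky"; the last one is why the resulting outcome is bad for everyone.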

In our computer simulations of the Prisoner's Dilemma interactions, we distinguished the actual behaviour – cooperation or not – from the preferred behaviour. We assumed that the preferred behaviour results from a trait determining the degree of other-regarding preferences, which we called the "friendliness." Our computer agents, which represented the individuals, were assumed to decide according to a best-response rule, i.e. to choose the behaviour that maximized their utility function, given the behaviours of their interaction partners (their neighbours). This assumption was mainly made to be acceptable to mainstream economics. The utility function was specified such that agents could take into account not only their own payoffs, but also give some weight to the payoffs of their interaction partners. This weight represented the "friendliness" and was set to zero for everyone at the beginning of the simulation. So, initially the payoff of others was given no weight, and everyone was unfriendly.

Furthermore, the friendliness trait was assumed to be passed on to offspring (either genetically or through education). In our computer simulations, the likelihood of having offspring increased exclusively with an agent's own payoff, not its utility. The payoff was set to zero when a co-operating agent was exploited by all neighbours (i.e. if none of them cooperated). Therefore, such agents never had any offspring. 

Finally, if agents earned payoffs and had offspring, the inherited friendliness value tended to be that of the parent, but there was also a certain natural mutation rate, which was specified such that it did not promote friendliness. 
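To make the model description above more tangible, here is a heavily simplified sketch of such a utility function and best-response rule. The payoff values, the form of the utility function, and the friendliness levels are all illustrative assumptions chosen so that the effect is easy to see; this is not the actual simulation code:

```python
# Assumed Prisoner's Dilemma payoffs (T > R > P > S), chosen for illustration.
T, R, P, S = 5, 4, 2, 0

def pd_payoff(me, other):
    """Row player's payoff in one Prisoner's Dilemma interaction."""
    return {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}[(me, other)]

def best_response(friendliness, neighbor_moves):
    """Choose the move maximizing a simple other-regarding utility:
    own payoff plus friendliness times the neighbours' payoffs."""
    def utility(move):
        own = sum(pd_payoff(move, n) for n in neighbor_moves)
        others = sum(pd_payoff(n, move) for n in neighbor_moves)
        return own + friendliness * others
    return max(('C', 'D'), key=utility)

# Zero friendliness reproduces the "homo economicus": defect no matter what.
assert best_response(0.0, ['C', 'C', 'C', 'C']) == 'D'
# A moderately friendly agent is a conditional co-operator: it cooperates
# when enough neighbours cooperate, but not in a hostile neighbourhood.
assert best_response(0.5, ['C', 'C', 'C', 'C']) == 'C'
assert best_response(0.5, ['C', 'C', 'D', 'D']) == 'C'
assert best_response(0.5, ['D', 'D', 'D', 'D']) == 'D'
```

Even in this stripped-down form, a positive friendliness weight turns the unconditional defector into a conditional co-operator – the behavioural ingredient that, combined with local reproduction and mutation, drives the dynamics described below.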



So, what results did our computer simulations produce? The prevailing outcome of the evolutionary game-theoretical computer simulations was indeed a self-regarding, payoff-maximizing "homo economicus," as expected. However, this applied to most parameter combinations of our simulation model, but not to all of them (see figure above). When offspring tended to live close to their parents (i.e. when intergenerational migration was low), a friendly "homo socialis" with other-regarding preferences resulted instead! Interestingly, this fits the conditions under which humans actually raise their children. 

This evolution of other-regarding preferences (not just other-regarding behaviour, i.e. cooperation) is quite surprising. Even though none of the above model assumptions promotes cooperative behaviour or other-regarding preferences in isolation, in combination they nevertheless create socially favourable behaviour. This can only be explained as a result of interaction effects between the above rules. Another interesting finding is the evolution of "cooperation between strangers," i.e. the occurrence of cooperation between genetically non-related individuals. A video illustrating this is available (see also the related figure below).


Making mistakes is crucial 


How can we understand the surprising evolution of other-regarding preferences? We need to recognize that random mutations generate a low level of friendliness by chance. This slight other-regarding preference creates conditionally cooperative behaviour. That is, if enough neighbours cooperate, a "conditional co-operator" will be cooperative as well, but not so if too many neighbours are uncooperative. 

Unconditionally cooperative agents with a high level of friendliness are born very rarely, and only by chance. These "idealistic" individuals will usually be exploited, have very poor payoffs, and leave no offspring. However, if born into a neighbourhood with enough agents who are sufficiently friendly to be conditionally cooperative, an unconditionally cooperative "idealist" can trigger cooperative behaviour among neighbours in a cascade-like manner.[5]



In the resulting cooperative neighbourhood, high levels of friendliness are passed on to many offspring, such that other-regarding preferences spread. This holds because greater friendliness now tends to be profitable, in contrast to the initial stage of the evolutionary process, when friendly people were rare outliers and lonely outsiders. In the end, co-operators earn higher payoffs on average than non-cooperative agents: if everyone in the neighbourhood is friendly, everyone has a better life. Therefore, while the "homo economicus" earns more initially, the resulting "homo socialis" eventually beats the "homo economicus" (see figure above). In the end, the friendliness levels are broadly distributed (see figure below). This explains the heterogeneous individual preferences that are actually observed: in reality, everything from selfish to altruistic preferences exists.



Note that in the situation studied above, where everyone starts as a non-cooperative "homo economicus," no single individual can establish profitable cooperation, not even by optimizing decisions over an infinitely long time horizon. It takes several "friendly" deviations in the same neighbourhood to trigger a cascade effect that eventually changes the societal outcome. One can show that a critical number of interacting individuals must happen, by coincidence, to be friendly and cooperative at the same time. Therefore, the "homo socialis" can only evolve thanks to the occurrence of random "mistakes" (here: the birth of "idealists" who are initially exploited by everyone). However, given suitable feedback effects, such "errors" enable better outcomes. Here, they eventually produce an "upward spiral" towards cooperation with high payoffs. Thereby, idealists make it possible to overcome the "tragedy of the commons." 

"Networked minds" require a new economic thinking


The most important implication of the evolution of other-regarding preferences is that, by considering the payoff and success of others, decisions become interdependent. Therefore, while methods from statistics for independent, uncorrelated events may sometimes suffice to characterize decisions of the "homo economicus," we need complexity science to understand the interdependent decision-making of the "homo socialis." In fact, the "homo socialis" may be best characterized by the term "networked minds."

In agreement with the findings of social psychology, the "homo socialis" is capable of empathy and often puts himself or herself into the shoes of others. By taking into account the perspective, interests, and success of others, "networked minds" consider externalities of their decisions. That is, the "homo socialis" decides differently from the "homo economicus." While the latter would never cooperate in a social dilemma situation, the "homo socialis" is conditionally cooperative, i.e. tends to cooperate if enough neighbours do so as well. Therefore, the "homo socialis" is able to align competitive individual interests and to make the individual and system optimum better compatible with each other. 

This makes the "homo socialis" superior to the "homo economicus," even if we measure success in terms of individual payoffs. While the Invisible Hand often doesn't work for the "homo economicus" in social dilemma situations, as we have seen, the "homo socialis" manages to make the Invisible Hand work by considering externalities. Therefore, while increasing the individual utility, the "homo socialis" manages to create systemic benefits, too, in contrast to the "homo economicus." Interestingly, the successful cooperative outcome emerging for the "homo socialis" is not the result of an optimization process, but rather of an evolutionary process. 

All of the above calls for a new economic thinking ("economics 2.0"), and even enables a better organization of the economy, as I will discuss in the next chapter (see also Information Box 2). I strongly believe that we are heading towards a new kind of economy, not just because the current economy will no longer provide enough jobs in many areas of the world, but also because information systems and social media are opening up entirely new opportunities. Moreover, to cope with the increasing level of complexity of our world, we need to enable collective intelligence, fostering not just the brightest minds and best ideas, but also learning how to leverage the hugely diverse range of experiences and expertise of people in parallel. And this again needs "networked minds."

The wisdom of crowds


Since the "wisdom of the crowd" was first discovered and demonstrated, people have been amazed by the power of collective intelligence. The "wisdom of the crowd" reflects that the average of many independent judgments is often superior to expert judgments. A frequently cited example, first reported by Sir Francis Galton (1822-1911), is the estimation of the weight of an ox. Galton observed villagers trying to estimate the weight of an ox at a country fair, and noted that, although no one villager guessed correctly, the average of everyone’s guesses was extremely close to the true weight. Importantly, today the wisdom of crowds is considered to be the underlying success principle of democracies and financial markets. Of course, an argument can also be made for the "madness of crowds." In fact, when people influence each other, the resulting group dynamics can create very bad outcomes. When individuals copy each other, misjudgements can easily spread. For the wisdom of crowds to work, independent information gathering and decision-making are crucial. The design of the decision mechanism determines whether the result of many decisions will be a success or a failure (see Information Box 3).
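The statistical core of the "wisdom of the crowd" – independent errors cancelling out in the average – can be demonstrated with a small simulation. The true value, the number of guessers, and the noise level below are illustrative assumptions, not Galton's actual data:

```python
import random

def crowd_vs_individuals(true_value=1200.0, n_guessers=800, noise=200.0, seed=7):
    """Simulate independent, noisy guesses of a quantity (e.g. an ox's weight)
    and compare the crowd average's error with the typical individual error."""
    rng = random.Random(seed)
    guesses = [rng.gauss(true_value, noise) for _ in range(n_guessers)]
    crowd_error = abs(sum(guesses) / len(guesses) - true_value)
    mean_individual_error = sum(abs(g - true_value) for g in guesses) / len(guesses)
    return crowd_error, mean_individual_error

crowd_error, individual_error = crowd_vs_individuals()
# With independent errors, averaging cancels much of the noise,
# so the crowd's error is far smaller than the typical individual's.
assert crowd_error < individual_error
```

Crucially, this only works as long as the guesses are independent; if guessers copy each other, their errors become correlated and no longer cancel out – which is exactly the "madness of crowds" caveat above.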

The Netflix challenge


One of the most stunning examples of collective intelligence is the outcome of the Netflix challenge. Based on its customers' movie ratings, Netflix was trying to predict which movies they would love to see. But the predictions were frustratingly bad. So, back in 2006, Netflix offered a prize of 1 million US dollars to the first team able to improve Netflix's own predictions of user-specific movie ratings by more than 10 percent. About 2,000 teams participated in the challenge and sent in 13,000 predictions. The training data contained more than 100 million ratings of almost 20,000 movies on a five-star scale by approximately 500,000 users. Netflix's own algorithm produced an average error of about 1 star, and it took three years to improve on it by more than 10 percent. 

In the end, the "BellKor's Pragmatic Chaos" team won the prize, and a number of really remarkable lessons were learned: First, given that it was very difficult and time-consuming to achieve even a 10 percent improvement over the standard method, Big Data analytics isn't that powerful in predicting people's preferences and behaviours. Second, even a minor improvement of the algorithm by only 1 percent created a significant difference in the top-10 movies predicted for the users. In other words, the results were very sensitive to the method used (rather than stable). Third, no single team was able to achieve a 10 percent improvement alone. 

A step change in performance was only made when the best-performing predictions were averaged with predictions of teams that weren't as good. That is, the best solution is actually not the best – averaging over diverse and independently gained solutions beats the best solution. This is really counter-intuitive: nobody is right, but together with others one can do a better job! The mechanism for this is subtle and extremely important for collective wisdom. Although each of the top teams had made almost a 10% improvement over the original algorithm, each used different methods that were able to find different patterns in the data. No single algorithm could find them all. By averaging the predictions, each algorithm contributed the knowledge it was specialized to find, and the errors of each algorithm were suppressed by the others. Thus, when complex tasks must be solved, specialization and diversity are key!
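The effect described above – a plain average of diverse, independently built predictors beating the best single one – can be reproduced in a toy setting. All numbers below (number of teams, sizes of biases and noise) are illustrative assumptions, not the actual Netflix data:

```python
import random, math

def rmse(pred, truth):
    """Root-mean-square error of a list of predictions."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(truth))

def ensemble_demo(n_items=2000, n_teams=5, seed=3):
    """Each 'team' predicts the same quantities with its own systematic bias
    plus independent noise; compare the best team with the plain average."""
    rng = random.Random(seed)
    truth = [rng.uniform(1, 5) for _ in range(n_items)]  # e.g. true star ratings
    teams = []
    for _ in range(n_teams):
        bias = rng.uniform(-0.3, 0.3)                    # each team errs differently
        teams.append([t + bias + rng.gauss(0, 0.9) for t in truth])
    ensemble = [sum(preds) / len(preds) for preds in zip(*teams)]
    best_single = min(rmse(team, truth) for team in teams)
    combined = rmse(ensemble, truth)
    return best_single, combined

best_single, combined = ensemble_demo()
# The errors of the teams are largely independent, so averaging suppresses them:
# the combined prediction beats every individual team.
assert combined < best_single
```

Because each team's noise is independent, the average retains what the teams agree on and suppresses their individual errors – the same mechanism that let the merged Netflix predictions outperform every single entry.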

Actually, things were even more surprising than that: giving better predictions a higher weight typically didn't improve the combined prediction. Researchers have argued that this is because weighting more successful algorithms more highly only works if at least one algorithm is correct. But in this case no single algorithm was perfect, and an equal combination was better than any solution alone, and better than a weighted average that considered the relative ranks of the algorithms. This is probably the best argument for equal votes – but equal votes for different solution approaches, not for people! In other words, one should not simply favour majority solutions. Compared to our way of decision-making today, minority votes would need to have a higher weight, such that they enter the decision-making process. That would correspond to a democracy of ideas rather than a democracy of people. In other words, to take the best possible decisions, we would have to say good-bye to two approaches that are common today: first, the principle that the best expert takes the decision in a top-down way; second, the principle of majority voting. Therefore, if we want to take better decisions, we must question both the concept of the "wise king" (or "benevolent dictator") and the concept of democracies based on majority opinions. This is shocking!

How to create collective intelligence


So, how could we create a better system? How can we unleash the power of "collective intelligence"? First, we have to abandon the idea that our reality can be well described by a single model – the best one that exists. In many cases, such as traffic flow modeling, there are several similarly performing models. This speaks for a pluralistic modeling approach. In fact, when the path of a hurricane is predicted or the impact of a car accident is simulated in a computer, an average of several competing models often provides the best prediction. 

In other words, the complexity of our world cannot be grasped by a single model, mind, computer, or computer cluster. Therefore, it's good if several groups try, independently of each other, to find the best possible solution. Each of these, however, will always give an over-simplified picture of our complex world. Only if we put the different perspectives together can we get a result that approximates the full picture well. We may compare this with visiting an artfully decorated cathedral. Every photograph taken can only reflect part of its complexity and beauty. One photographer alone, no matter how talented or how well equipped, cannot capture the full 3D structure of the cathedral in a single photograph. A full 3D picture of the cathedral can only be gained by combining many photographs representing different perspectives and projections.

Let's discuss another complex problem, namely the challenge of finding the right insurance for you. It is certainly impossible for any consumer to read the small print and detailed regulations of all available insurance policies. Instead, you would probably ask your colleagues and friends what experiences they have had with their insurers, and then evaluate the most recommended ones in detail to find the right insurance for you. Insurance companies that provide bad coverage or service create bad word-of-mouth reviews, making them less likely to be chosen by others. In other words, we evaluate insurance offers collectively, thereby mastering a job that nobody could do alone. In the Internet age, this word-of-mouth system is increasingly replaced by online reviews and price-comparison websites, which widen the circle of people contributing additional information and improve the chances for each individual to take better decisions.

While this approach is able to create additive knowledge, science has found ways to create knowledge that is more than just the sum of its parts. In fact, when experts discuss with each other or engage in an exchange of ideas, this often creates new knowledge. The above examples illustrate how collective intelligence works: one needs a number of independent teams, which tackle a problem in isolation, and afterwards the independently gained knowledge needs to be combined. When there is too much communication in the beginning, each team is tempted to follow the successes of the others, reducing the number of explored solutions. But when there is too little communication at the end, it's not possible to fully exploit all the solutions that have been found.

At this point, it is also interesting to discuss how "cognitive computing" works in IBM's Watson computer. The computer scans hundreds of thousands of sources of information, for example scientific publications, and extracts potentially relevant statements. But it can also formulate hypotheses and seek evidence for or against them. It then comes up with a list of possible answers and ranks them according to their likelihood. All of this is done using algorithms based on the laws of probability: how probable is this hypothesis given the observed data? These laws codify precisely and mathematically the type of reasoning humans informally perform when making decisions. However, Watson loses less information to cognitive biases, or to the limited time and attention span humans have. For example, when used in a medical context, Watson would come up with a ranked list of diseases that are compatible with certain symptoms. A doctor will probably have thought of the most common diseases already, but Watson will also draw attention to rare diseases, which might otherwise be overlooked.

Importantly, to work well, Watson should not be fed with pre-filtered, mutually consistent information. It must get unbiased information reflecting different perspectives and potentially even contradictory pieces of evidence. Watson is then trained by experts to weight evidence and sources of information in ways that are increasingly consistent with current wisdom. In the end, Watson may do better than humans. The power of Watson lies in the sheer number of different sources of information it can scan, and the number of hypotheses it can generate and evaluate, both orders of magnitude above any single human. Humans tend to seek and attend to information that confirms their existing beliefs, and to weight evidence more highly when it does; Watson is largely immune to this bias, because it evaluates every piece of information algorithmically, according to the laws of probability.
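The probabilistic core of such hypothesis ranking can be illustrated with a toy Bayesian calculation. This is not IBM's actual algorithm; the diseases, prior probabilities, and likelihoods below are invented purely for illustration:

```python
# Rank hypotheses by Bayes' rule:
#   P(disease | symptoms) is proportional to P(symptoms | disease) * P(disease)
priors = {"flu": 0.10, "cold": 0.30, "rare_disease": 0.001}   # hypothetical base rates
likelihood = {                                                 # hypothetical P(symptoms | disease)
    "flu": 0.80,
    "cold": 0.40,
    "rare_disease": 0.95,
}

unnormalized = {d: priors[d] * likelihood[d] for d in priors}
total = sum(unnormalized.values())
posterior = {d: p / total for d, p in unnormalized.items()}

# Ranked list of candidate diagnoses, most probable first
for disease, p in sorted(posterior.items(), key=lambda item: -item[1]):
    print(f"{disease}: {p:.3f}")
```

Note that the rare disease stays on the list with a small but non-zero probability: it is never silently dropped, which is exactly the property that helps a doctor who would not have thought of it.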

Let us finally address a question that thrills many people these days: using the future Internet, could we create something like a globe-spanning super-intelligence? In fact, the Google Brain project may aim to establish such a super-intelligence, based on Google's massive data about our world. However, what we have discussed above suggests that it is important to have different independent perspectives – not just one. So, having many brains is probably superior to having one super-brain. Remember, the "wisdom of crowds" often outperforms experts.[6] This implies a great potential for citizen science. Collective intelligence can beat super-intelligence, and a diversity of perspectives is key to success. Therefore, to master the complex challenges of the future, we need a participatory approach, as we will discuss in the next chapter.
There is more to come: New dimensions of life

To conclude, diversity is a major driving force of evolution, and has always been. Over millions of years, diversity has greatly increased, creating a growing number of different species. Diversity drives differentiation and innovation, such that new dimensions of life are created. Eventually, humans became social and intelligent beings, and cultural evolution set in. The slow evolution of genetic fitness was then complemented by an extremely fast evolution of ideas. One might therefore even say that, to a considerable extent, humans have emancipated themselves from the limitations of matter and nature. The spreading of ideas, of so-called "memes," has become more important than the spreading of genes. Now, besides the real world, digital virtual worlds exist, such as massive multi-player online games. So, humans have learned to create new worlds out of nothing but information. The online worlds Second Life, World of Warcraft, Farmville, and Minecraft are just a few examples of this.

It is equally fascinating that, with these digital worlds, new incentive systems have evolved, too. We are perhaps not so surprised that some people care about their position in the Forbes list of the world's richest persons, because it reflects their financial power in the real world. But people are not competitive only about money. Tennis players and soccer teams strongly care about their rankings. Actors live on the applause they get, and scientists care about citations, i.e. the number of references to their work.

So, people do not only respond to material payoffs such as money, and to the various other drives we have discussed before. It turns out that many people also care about the scores they reach in gaming worlds. Even though some of these ranking scales don't imply any immediate material or other real-world value, they can motivate people to make an effort. It's quite surprising how much time people may spend on increasing their number of Facebook friends or Twitter followers, or their Klout score. Obviously, social media offer new opportunities to create multi-dimensional reward systems, which we need to enable self-organizing socio-economic systems.

There is little doubt: we are now living in a cyber-social world, and the evolution of global information systems drives the next phase of human social evolution. Information systems support "networked minds" and enable "collective intelligence." Humans, computers, algorithms and robots will increasingly weave a network that may be characterized as an "information ecosystem," and therefore one question becomes absolutely crucial: "How will this change our socio-economic system?"



INFORMATION BOX 1: How selfish are people really?

Our daily experience tells us that many people do unpaid jobs for the benefit of others. A lot of volunteers work for free, and some organize themselves in non-profit organizations. We also often leave tips on the restaurant table, even if nobody is watching and even if we'll never return to the same place (and this is true even in countries where tips are not more or less obligatory, as they are in the USA). Furthermore, billionaires, millionaires and ordinary people make donations to promote science, education, and medical help, often on other continents. Some of them even do it anonymously, i.e. they will never get anything in return – not even recognition.

This has, indeed, puzzled economists for quite some time. To fix the classical paradigm of rational choice based on selfish decision-making, they eventually assumed that everyone has an individual utility function, which reflects personal preferences. However, as long as there is no theory to predict personal preferences, the concept of utility maximization does not explain much. Taken seriously, rational choice theory claims that people who help others must enjoy doing so, otherwise they wouldn't do it. But this is a rather circular conclusion.


Ultimatum and Dictator Games



In order to test economic theories and understand personal preferences better, scientists have performed ever more decision experiments with people in laboratories. Their findings were quite surprising and overturned previously established economic theories. In 1982, Werner Güth developed the "Ultimatum Game" to study stylized negotiations. In a typical experiment, 50 dollars are given to one person (the "proposer"), who is asked to decide how much of this money he or she would offer to a second person (the "responder"). If the responder accepts the amount offered by the proposer, both get the respective share. However, if the responder rejects the offer, both walk home with nothing.
According to the concept of the self-regarding "homo economicus," the proposer should offer no more than 1 dollar, and the responder should accept any amount – better to get a little money than nothing! However, it turns out that responders tend to reject small amounts, and proposers tend to offer about 40 percent of the money on average. A further surprise is that proposers tend to share in all countries of the world. Similar experimental outcomes are found when playing for a monthly salary. To reflect these findings, Ernst Fehr (*1956) and his colleagues proposed inborn fairness preferences and inequality aversion. Others, such as Herbert Gintis, assumed a genetic basis of cooperation ("strong reciprocity").
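The strategic structure of the Ultimatum Game is simple enough to write down directly. A minimal sketch, where the specific offers and the responder's acceptance threshold are hypothetical illustrations:

```python
# Toy Ultimatum Game: the proposer splits a pot, the responder accepts
# or rejects; a rejection leaves both players with nothing.
POT = 50  # dollars

def play(offer, accept_threshold):
    """Return (proposer_payoff, responder_payoff) for one round."""
    if offer >= accept_threshold:
        return POT - offer, offer
    return 0, 0

# "Homo economicus" prediction: a minimal offer, accepted by a responder
# for whom a little money always beats nothing.
print(play(offer=1, accept_threshold=0))    # prints (49, 1)

# What experiments actually find: offers around 40 percent of the pot,
# and low offers rejected even at a cost to the responder.
print(play(offer=20, accept_threshold=15))  # prints (30, 20)
print(play(offer=5, accept_threshold=15))   # prints (0, 0) -- costly rejection
```

The last line is the interesting one: a responder who rejects 5 dollars pays 5 dollars to punish unfairness, which is exactly the behaviour the self-regarding model cannot explain.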
There is also a simpler game, known as the "Dictator Game," which is in some sense even more stunning. In this game, one person is asked to decide how much of an amount of money received from the experimenter he or she wants to give to another person – which can also be nothing! The potential recipient does not have any influence on the outcome. Nevertheless, many people tend to share – on average about 20 percent of the money they receive from the experimenter. Of course, there are always exceptions in both directions – and some people actually don't share at all.

The surprising overall tendency to share could, of course, result from the feeling of being observed, which might trigger behaviours complying with social norms. So, would such sharing behaviour disappear when decisions are taken anonymously? To test this, we ran a Web experiment with strangers who never met in person. Both the proposer and the responder got a fixed amount of money for participating in the experiment. However, rather than sharing money, the proposer and responder had to decide how to share a certain workload: together, they had to do several hundred calculations! In the worst case, one of them would have to do all the calculations, while the other would get the money without working! But to our great surprise, the participants of the experiment tended to share the workload in a rather fair way. Thus, there is no doubt: many people decide in other-regarding ways, even in anonymous settings. They have a preference for fair behaviour.




INFORMATION BOX 2: A smarter way of interacting, not socialism 


In contrast to today's redistribution approach based on social benefit systems, the "homo socialis" should not be considered a tamed "homo economicus" who shares some payoff with others. As we have discussed before, the "homo economicus" tends to run into "tragedies of the commons," while the "homo socialis" can overcome them by considering the externalities of decisions. So, the "homo socialis" can create more desirable outcomes and higher profits on average. Therefore, when taking decisions like a "homo socialis," we will often be doing well.


In social dilemma situations, the "homo economicus," in contrast, tends to produce high profits for a few agents who exploit others, but poor outcomes for the great majority. Therefore, redistributing the money of the rich can neither overcome "tragedies of the commons" nor reach average profit levels comparable to those of the "homo socialis."[7] In conclusion, an economy in which the "homo socialis" can thrive is much better than one in which the "homo economicus" dominates and social policies try to fix the damage afterwards. Therefore, the concept of the "homo socialis" has nothing to do with a redistribution of wealth from the rich to the poor.


Let me finally address the question of whether friendly, other-regarding behaviour is more likely when people have a lot of resources and can "afford" to consider the interests of others, or whether it occurs under particularly bad conditions. In fact, in the desert and other high-risk environments, people can only survive by means of other-regarding behaviour. However, such behaviour can create benefits in low-risk environments too, where people can survive by themselves. This is because considering the externalities of one's own behaviour brings the system optimum and the individual user optimum into harmony. In other words, when externalities are considered, as the "homo socialis" does, the socio-economic system can perform better, creating on average higher advantages for everyone. Even in a world with large cultural differences across cities, countries and regions, it seems that the countries and cities with a particularly high quality of life are those that manage to establish other-regarding behaviours and take externalities into account. As I said before, the emergence of friendly, other-regarding behaviour is to one's own benefit if just enough interaction partners behave in this way. It is, therefore, desirable to have institutions that protect the "homo socialis" from exploitation by the "homo economicus." Reputation systems are one such institution. They can promote desirable outcomes in a globalized world.


INFORMATION BOX 3: Crowds and swarm intelligence


In recent years, the concept of crowd and swarm intelligence has increasingly fascinated the public and the media. At the time of Gustave Le Bon (1841-1931), the idea came up that people had something like a shared mind. However, the attention was focused on "the madness of crowds," on mass psychology that can create, for example, a rioting mob. This was seen as a result of dangerous emotional contagion, and it is still the reason why governments tend to feel uneasy about crowds. But crowds can have good sides, too.
Today, we have a more differentiated picture of crowd and swarm intelligence. We know much better when crowds perform well and when they cause trouble. Simply put, if people gather information and decide independently of each other, and the information is suitably aggregated afterwards, this often creates better results than even the best experts can produce. This is also more or less the way prediction markets work. These have been surprisingly successful in anticipating, for example, election outcomes or the success of new movies. Interestingly, prediction markets have been inspired by the principles that ants and bees use to find the most promising food sources. In fact, such social insects have always amazed people with their complex self-organizing societies, which, as we know today, are based on surprisingly simple interaction rules.
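The "independent estimates plus aggregation" mechanism is easy to demonstrate in a small simulation. A minimal sketch with synthetic data – the true value, the noise level, and the crowd size are arbitrary assumptions:

```python
import random
import statistics

random.seed(42)
TRUE_VALUE = 800  # e.g. the number of beans in a jar

# Hypothetical crowd: independent, noisy individual estimates
guesses = [random.gauss(TRUE_VALUE, 200) for _ in range(1000)]

# Aggregate the independent estimates (the median is robust to outliers)
crowd_estimate = statistics.median(guesses)
crowd_error = abs(crowd_estimate - TRUE_VALUE)

# Fraction of individuals whose own guess is worse than the aggregate
beaten = sum(abs(g - TRUE_VALUE) > crowd_error for g in guesses) / len(guesses)
print(f"crowd error: {crowd_error:.1f}, better than {beaten:.0%} of individuals")
```

If the guesses are correlated instead of independent – as in the Asch setting discussed next – the aggregate inherits the shared bias and this advantage largely disappears.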
In contrast to the above, the "wisdom of crowds" is often undermined when people are influenced while searching for information or making up their minds. This is best illustrated by the Asch conformity experiments, in which an experimental subject had to publicly state which of three lines was the same length as a reference line. However, before answering, other subjects gave wrong answers. As a consequence, the experimental subject typically gave a wrong answer, too. Moreover, recent experiments I performed together with Jan Lorenz, Heiko Rauhut, and Frank Schweitzer show that people are influenced by the opinions of others even when no group pressure is put on them.
What conclusions can we draw? First, one shouldn't try to influence others in their information search and decision-making if we want the "wisdom of crowds" to work. Second, good education is probably the best immunization against emotional contagion, and can therefore reduce the negative effects of crowd interactions. And third, we must further explore which decision-making procedures and institutions can maximize collective intelligence. This will be of major importance for mastering the increasingly difficult challenges posed by our complex globalized world.

[1] I would like to thank Richard Mann for his useful comments on this chapter. 

[2] The comment on "The Myth of AI" was originally posted at http://edge.org/conversation/the-myth-of-ai but apparently deleted in the meantime. 

[3] The smaller the probability of the event, however, the longer it will usually take until it happens. 

[4] But note that nonetheless, people make informed trade-offs, such as avoiding to spend too much money on parties, if they have the goal to possess something. 

[5] One might think that this is what happened, for example, in the case of Jesus Christ. He preached to "love your neighbour as yourself," i.e. other-regarding behaviour weighting the preferences of others with a weight of 0.5. His idealistic behaviour was painful for himself. He ended on the cross like a criminal and without any offspring. But his behaviour caused other-regarding behaviour to spread in a cascade-like way, establishing a world religion. 

[6] While Google can easily implement many different algorithms, the very nature of a large corporation with its self-regarding goals, uniform standards, hiring practices and communications means that the teams developing these will be more prone to observe and follow each other’s successes, think in similar ways and thus produce less diverse opinions. 

[7] At least when the latter interact among each other.
