Wednesday, 10 September 2014
HAVE WE OPENED PANDORA’S BOX? We must move beyond 9/11
Labels:
Big data,
Dirk Helbing,
Economics 2.0,
Elinor Ostrom,
Innovation Accelerator,
Internet of Things,
Planetary Nervous System,
Reputation System,
self regulating systems,
socio economic systems
Tuesday, 20 May 2014
THE WORLD AFTER BIG DATA: What the digital revolution means for us
By Dirk Helbing (ETH Zurich)
Never before
were politicians, business leaders, and scientists more urgently needed to
master the challenges ahead of us. We are in the middle of a third industrial
revolution. While we see the symptoms, such as the financial and economic
crisis, cybercrime and cyberwar, we haven't understood the implications well.
But at the end of this socio-economic transformation, we will live in a digital
society. This comes with breathtaking opportunities and challenges of a kind that
occur only every 100 years.
Big Data: a
magic wand. But do we know how to use it?
Let me start
with Big Data. When the social messaging portal WhatsApp, with its 450 million users, was recently sold
for 19 billion dollars, this amounted to almost half a billion dollars per staff member. Big Data
is fundamentally changing our world. It is becoming the new oil of the 21st
century, and we need to learn how to drill and refine it, i.e. how to produce
data and turn them into information, knowledge and wisdom.
The
potential of Big Data spans all areas of society. It ranges from
natural language processing and financial asset management to smartly
managing our cities and better balancing energy consumption and production,
thereby saving energy. It allows for better protection of our environment, risk
detection and reduction, and the discovery of opportunities, which would
otherwise be missed. It will be possible to tailor medicine to patients, thereby
increasing drug effectiveness while reducing side effects. Preventing diseases
may become even more important than curing them.
Big Data
applications are now spreading like wildfire. They enable personalized offers,
services and products. Big Data open up entirely new possibilities for process
optimization and allow one to identify unexpected interdependencies. They also
offer great potential for evidence-based decision-making, but science will be
crucial to ensure transparency, quality, and trust. Science will also be
important to drive ethical ICT innovations and to avoid the pitfalls of Big
Data applications. Therefore, science must become a fifth pillar of
democracies, besides the legislative, the executive, the judiciary, and the public
media.
What's the next big thing after Big Data?
But we need
to think a step ahead and realize that we are just at the beginning of a
transformation process, which is about to change human history. The invention
of the steam engine turned agricultural society ("economy 1.0") into
industrial society ("economy 2.0"), and widespread education turned
it into service society ("economy 3.0"). Now, the invention of
computers, the Internet, the World Wide Web, and Social Media are transforming
service societies into digital societies ("economy 4.0").
With
computers reaching the level of human brainpower in about 10 years, with
intelligent service robots, and the Big Data tsunami, 50 percent of jobs in the
industrial and service sectors will probably be lost within the next 20 years.
And most of our current ways of doing things will fundamentally change: the way
we educate (MOOCs – Massive Open Online Courses – and personalized education), the way
we do research (Big Data analytics), the way we move (self-driving Google cars) or transport goods
(drones), the way we go shopping (take Amazon
and eBay), the way we produce (3D
printers), but also our health system (personalized medicine), and most likely
politics (participation of citizens) and the entire economy as well (with the
makers community, the emerging sharing economy, and prosumers, i.e.
co-producing consumers). Financial business, which used to be the domain of
banks, is increasingly taken over by algorithmic trading, PayPal, Bitcoin, Google Wallet, and the like. Moreover, the
biggest share of the insurance business is now in financial products such as
credit default swaps. Even wars may increasingly change from conventional wars
to cyberwars.
Thus,
how will the digital revolution transform our societies?
First of
all, the transition will be challenging. Today's world is struggling with
financial instabilities, and in many areas of the world, we are faced with
social and political unrest -- sometimes framed as "Twitter revolutions". Thus, how can we handle this? Do we need
more state power, based on armed police and mass surveillance? Could a giant
supercomputer (or network or cloud of supercomputers), fueled with massive
amounts of data about human activities and almost everything, simulate our
globe? Could a supercomputing infrastructure like this optimize and plan our
world? Could it avoid the traps of particular interests, irrationality, and
emotional decision-making? Could it find ways to overcome coordination and
market failures, breakdowns of cooperation, and conflict? Could it make better
decisions than we could? And should it determine our actions through
personalized recommendations and selective information that smartphones or
other gadgets deliver to us?
To some or
even many of us, this seems plausible, but this concept, known as
"benevolent dictator" or "big government", cannot work.
While processing power doubles only every 1.8 years, the amount of data doubles
every 1.2 years. Unfortunately, the complexity of networked systems is
growing even faster. In other words, attempts to optimize
systems in a top-down way will be less and less effective – and cannot be done
in real time. Paradoxically, as economic diversification and cultural evolution
progress, a big government approach would increasingly fail to lead to good
decisions. However, neither is simplifying our world by homogenization and
standardization a solution – since it reduces innovation, societal resilience,
and the happiness of people in general. Today, everyone already complains about
over-regulation, and we can no longer pay for the expensive institutions needed
for it. Most industrialized countries have reached historic highs in public
debt, on the order of 100 to 200 percent of their annual economic output.
Nobody knows how we will ever be able to pay for this – and for even more
regulation.
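A back-of-the-envelope calculation makes the scaling argument above concrete; the doubling times are the ones quoted in the text, while the 20-year horizon is an illustrative assumption of this sketch:

```python
# Back-of-the-envelope comparison of the growth rates quoted above:
# data volume doubling every 1.2 years versus processing power doubling
# every 1.8 years. The 20-year horizon is an illustrative assumption.

def growth_factor(doubling_time_years: float, horizon_years: float) -> float:
    """Multiplicative growth over the horizon for a given doubling time."""
    return 2.0 ** (horizon_years / doubling_time_years)

HORIZON = 20.0  # years

data = growth_factor(1.2, HORIZON)      # growth of the data volume
compute = growth_factor(1.8, HORIZON)   # growth of processing power

print(f"data volume grows by a factor of about {data:,.0f}")
print(f"processing power grows by a factor of about {compute:,.0f}")
print(f"data per unit of processing power: roughly {data / compute:,.0f}x more than today")
```

Under these assumptions there is, after two decades, roughly fifty times more data per unit of processing power than today, and the number of possible interactions in ever more connected systems grows faster still; this is the quantitative core of the argument against purely top-down, real-time optimization.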
But what
alternatives do we have?
The logical
answer is: distributed (self-)control, i.e. bottom-up self-regulation, as
envisioned by Adam Smith's paradigm of the invisible hand. While this vision
often did not work well in the past due to coordination and market failures,
cybernetics (i.e. control theory) and complexity theory tell us that it is
actually feasible to create resilient social and economic order by means of
self-organization, self-regulation, and self-governance. The work of Nobel
prize winner Elinor Ostrom and others has demonstrated this. By "guided
self-organization" we can let things happen in a way that produces
desirable outcomes in a flexible and efficient way. One should imagine this
embedded in the framework of today's institutions and stakeholders which,
however, will learn to interfere in minimally invasive ways.
How will such
self-regulation work?
In a rapidly
changing world, which is hard to predict and plan, we must create feedback
loops that enable systems to flexibly adapt in real time to local conditions
and needs. Now, 300 years after Adam Smith's historical vision, we can make it
happen, fueled by real-time data. For example, my research team has invented
self-regulating traffic lights, which are driven by the traffic flows and can
outperform the classical top-down control by a conventional traffic center. Can
we transfer and extend this principle to socio-economic systems? Indeed, we are
now developing mechanisms to overcome coordination and cooperation failures,
conflicts, and other age-old problems. This can be done with suitably designed
social media and sensor networks for real-time measurements, which will
eventually weave a Planetary Nervous System. Hence, we can finally realize the
dream of self-regulating systems, and there is now a quickly growing number
of examples of them: Bitcoin, peer
to peer lending, Google's
self-driving car, Uber's limousine
service, collaborative robot swarms, and social communities on the Web.
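As a rough illustration of how such measurement-driven feedback differs from a fixed, centrally planned schedule, here is a minimal sketch of a queue-driven switching rule for a single intersection. It is not the controller developed by the author's team; the arrival rates, service rate and thresholds are invented for illustration.

```python
import random

# Minimal sketch of a self-regulating traffic light: the switching decision is
# driven by locally measured queues instead of a fixed, centrally planned cycle.
# Arrival rates, service rate and thresholds are illustrative assumptions.

ARRIVAL = {"north-south": 0.30, "east-west": 0.20}  # vehicles per second
SERVICE_RATE = 0.5       # vehicles discharged per second during green
MAX_GREEN = 60           # seconds before the other direction is served anyway
MIN_GREEN = 5            # seconds, to avoid rapid flickering

def simulate(seconds: int = 3600, seed: int = 1) -> dict:
    random.seed(seed)
    queue = {d: 0.0 for d in ARRIVAL}
    green, green_time = "north-south", 0
    waited = 0.0
    for _ in range(seconds):
        # vehicles arrive at both approaches
        for d, rate in ARRIVAL.items():
            if random.random() < rate:
                queue[d] += 1
        # the approach with green discharges vehicles
        queue[green] = max(0.0, queue[green] - SERVICE_RATE)
        green_time += 1
        waited += sum(queue.values())
        # local feedback rule: switch when the served queue has cleared
        # (and the other queue is waiting), or when the maximum green is exceeded
        other = "east-west" if green == "north-south" else "north-south"
        if green_time >= MIN_GREEN and (
            (queue[green] == 0 and queue[other] > 0) or green_time >= MAX_GREEN
        ):
            green, green_time = other, 0
    return {"average number of waiting vehicles": waited / seconds}

print(simulate())
```

Running the same arrival streams through a rigid fixed cycle and comparing the average queues gives a feeling for why flow-driven switching tends to outperform top-down schedules, especially when demand fluctuates.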
A new kind of
economy is born
A largely
self-regulating society isn't utopia. In fact, a new kind of economy is already
on its way. Social media are networking people and, thereby, enable
"collective intelligence." This paradigm is superior to the
self-regarding optimization by the "homo economicus", the egoistic
decision-maker assumed in mainstream economics ("economics 1.0").
While the bottom-up self-organization of the "homo economicus" can
outperform top-down decision making in complex environments, highly competitive
conditions can lead to coordination failures and poor outcomes (for example,
"tragedies of the commons" such as environmental degradation). It has
been theoretically and empirically shown, however, that a considerable fraction
of people has other-regarding preferences -- I will call this type "homo
socialis." Understanding the decisions of this type requires a new kind of economic
thinking ("economics 2.0"), going beyond the purely selfish
"homo economicus" that underlies current mainstream
economics ("economics 1.0"). Considering the impact of one's own decisions on
others enables self-regulation, which can overcome the above-mentioned
coordination failures and "tragedies of the commons." Interestingly,
suitable institutions such as certain social media -- combined with suitable
reputation systems -- can promote other-regarding decision-making. The quick
spreading of social media and reputation systems, in fact, indicates the
emergence of a superior organizational principle, which creates collective
intelligence by harvesting the value of diversity. Properly designed social
media allow diverse knowledge and skills to come together, thereby unleashing
creativity, social capital and productive value.
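To see how reputation systems can stabilize other-regarding behaviour, consider a toy model of indirect reciprocity (in the spirit of image scoring): "discriminators" help only partners in good standing, while "defectors" never help. The population sizes, payoffs and reputation rule below are illustrative assumptions of this sketch, not results from the post.

```python
import random

# Toy model of indirect reciprocity: "discriminators" help only partners with a
# good reputation, "defectors" never help. Helping costs C to the donor, yields
# benefit B to the recipient, and is recorded in a public reputation score.
# All parameters are illustrative.

B, C = 5.0, 1.0          # benefit to recipient, cost to donor
N_DISC, N_DEF = 80, 20   # population composition
ROUNDS = 20_000

def run(seed: int = 0) -> None:
    random.seed(seed)
    strategies = ["discriminator"] * N_DISC + ["defector"] * N_DEF
    reputation = [1] * len(strategies)   # 1 = good standing, 0 = bad standing
    payoff = [0.0] * len(strategies)
    for _ in range(ROUNDS):
        donor, recipient = random.sample(range(len(strategies)), 2)
        helps = strategies[donor] == "discriminator" and reputation[recipient] == 1
        if helps:
            payoff[donor] -= C
            payoff[recipient] += B
        # refusing to help a partner in good standing damages one's own reputation;
        # refusing a partner in bad standing does not
        reputation[donor] = 1 if helps or reputation[recipient] == 0 else 0
    avg = lambda s: sum(p for p, st in zip(payoff, strategies) if st == s) / strategies.count(s)
    print(f"average payoff, discriminators: {avg('discriminator'):7.2f}")
    print(f"average payoff, defectors:      {avg('defector'):7.2f}")

run()
```

In runs like this, discriminators typically end up with a clearly higher average payoff than defectors, because refusing to help is quickly reflected in a bad reputation and help then stops flowing towards the defectors; without the visible reputation score, defection would pay.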
Hence, in
accordance with the paradigm of distributed control and self-regulation, a participatory market society is on the
rise. While the 20th century was an era of democratization of consumption, the
21st century can become an era of democratization of production. Next to today’s
companies, we see the emergence of an innovation ecosystem characterized by
flexible, participatory forms of production, which I term "projects".
Here, creative minds come together to realize joint project ideas. After
completing a project, everyone looks for another one, and so on. Social media
platforms such as Amazon Mechanical Turk
make it possible to bring ideas and skilled workers together. As a consequence,
this leads to a more direct participation of people in production processes
("prosumers"). Over time, there will be a much greater diversity of
products, tailored to individual needs. Thus, while computers will increasingly
replace our current types of routine and executive work, we will have an
opportunity to replace these jobs by more creative activities. Production by
large corporations will then be complemented by an innovation ecosystem made up
of millions of projects. The huge range of smartphone apps that low-cost
downloads from app stores have enabled gives just a first idea of the unlimited
possibilities for new projects. Open access data and Web 2.0,
Web 3.0, etc. will further accelerate
this development.
The new
algebra of prosperity and leadership
The 21st
century will be governed by fundamentally different principles than the 20th
century, and that's why we need to change our way of thinking about the world.
To understand this, it is important to recognize the following facts and
trends: information is ubiquitous and everywhere instantly available, such that
borders dissolve. The "second machine age" comes with extreme speed.
Most of our knowledge is outdated, and we can't learn quickly enough to fully
understand the changing world without the help of smart devices such as "social information technologies."
Many systems become more variable, less predictable, and less controllable.
Their increased connectivity implies a higher complexity. The increase in data
volumes means we are overloaded by data that ultimately needs to be converted
into information and then into actionable knowledge. Furthermore, the more data
we produce, the less likely we are to keep secrets and the cheaper data will
become. This means that we will make less profit on data, but more on
algorithms that turn data sets into useful information and knowledge. In such a
world, ideas will become more powerful, and ethics more important. Digitally
literate people will be better informed than experts used to be; therefore,
classical hierarchies will dissolve. Moreover, data can be replicated as often
as we like. It's a virtually unlimited resource, which may help to overcome
conflicts that scarce resources used to imply. However, services and products
will be more individualized, personalized, and user-centric. Finally, what used
to be science fiction may become reality. The countries that first recognize these
new principles and turn them to their advantage will take the lead. Those
failing to adapt to these trends in a timely manner will be in trouble. We may
just have 20 years for this -- a very short time considering that planning and
building a road often takes 30 years or more.
What does it
take to master our future?
So far,
no country in the world seems to be well prepared for the digital era.
Therefore, we urgently need an Apollo-like program, and the equivalent of a
Space Agency for ICT: an Innovation
Alliance with the mission to develop the institutions and information
infrastructures for the emerging digital society. This is crucial to master the
challenges of the 21st century in a smart way and to unleash the full potential
of information for our society. For illustration, it is helpful to recall the
factors that enabled the success of the automobile age: the invention of cars
and of systems of mass production; the construction of public roads, gas
stations, and parking lots; the creation of driving schools and driver
licenses; and last but not least, the establishment of traffic rules, traffic
signs, speed controls, and traffic police. All of this required many billions
each year. We invest a lot into the agricultural sector, the industrial sector,
and also the service sector. But are we investing enough into the emerging
digital sector?
What are
the technological infrastructures and the legal, economic and societal
institutions needed to make the digital age a big success? This question would
set the agenda of the Innovation Alliance. A partial answer is already clear:
we need trustworthy, transparent, open,
and participatory ICT systems, which are compatible with our values. For
example, it would make sense to establish the emergent "Internet of Things" as a Citizen Web. This would enable self-regulating systems through
real-time measurements of the state of the world, which would be possible with
a public information platform called the "Planetary Nervous System". It would also facilitate a
real-time measurement and search engine: an open and participatory "Google 2.0."
To
protect privacy, all data collected about individuals should be stored in a Personal Data Purse and, given informed
consent, processed in a decentralized way by third-party Trustable Information Brokers, allowing everyone to control the
use of their sensitive data. A Micro-Payment
System would allow data providers, intellectual property right holders, and
innovators to get rewards for their services. It would also encourage the
exploration of new and timely intellectual property right paradigms ("Innovation Accelerator"). A
pluralistic, User-centric Reputation
System would promote responsible behavior in the virtual (and real) world.
It would even enable the establishment of a new value exchange system called
"Qualified Money," which
would overcome weaknesses of the current financial system by providing
additional adaptability.
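To make the idea of a Personal Data Purse with informed consent more tangible, here is a minimal sketch; the class, the method names and the consent model are invented for illustration and are not a specification of the proposed infrastructure.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Illustrative sketch of a "Personal Data Purse": the individual stores their
# own data and grants (or revokes) purpose-bound consent before any broker may
# read it. Names and the consent model are invented for illustration.

@dataclass
class PersonalDataPurse:
    owner: str
    _records: Dict[str, object] = field(default_factory=dict)
    _consent: Dict[str, set] = field(default_factory=dict)  # broker -> purposes

    def store(self, key: str, value: object) -> None:
        self._records[key] = value

    def grant(self, broker: str, purposes: List[str]) -> None:
        self._consent.setdefault(broker, set()).update(purposes)

    def revoke(self, broker: str) -> None:
        self._consent.pop(broker, None)

    def read(self, broker: str, key: str, purpose: str) -> object:
        if purpose not in self._consent.get(broker, set()):
            raise PermissionError(f"{broker} has no consent for purpose '{purpose}'")
        return self._records[key]

purse = PersonalDataPurse(owner="alice")
purse.store("mobility_trace", ["home", "office", "gym"])
purse.grant("traffic-research-broker", ["congestion_modelling"])

# the broker may read only for the purpose the owner agreed to
print(purse.read("traffic-research-broker", "mobility_trace", "congestion_modelling"))
purse.revoke("traffic-research-broker")   # consent can be withdrawn at any time
```

A real Trustable Information Broker would add encryption, audit trails and micro-payments on top, but the essential point is already visible here: access is granted per purpose by the data subject and can be withdrawn at any time.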
A Global Participatory Platform would
empower everyone to contribute data, computer algorithms and related ratings,
and to benefit from the contributions of others (either free of charge or for a
fee). It would also enable the generation of Social Capital such as trust and cooperativeness, using
next-generation User-controlled Social
Media. A Job and Project Platform
would support crowdsourcing, collaboration, and socio-economic co-creation.
Altogether, this would build a quickly growing Information and Innovation Ecosystem, unleashing the potential of
data for everyone: business, politics, science, and citizens alike.
We could
also create a Digital Mirror World
to explore the likely risks and opportunities of prospective decisions.
Finally, Interactive Virtual Worlds
would realize the full creative potential within different socio-economic
settings and intellectual property right approaches. Social Information Technologies would help us to cope with the
diversity resulting from this and to benefit from it. Digital literacy and good education
will be more important than ever. But with the emerging "Internet of
Things" and participatory information platforms, we can unleash the power
of information and turn the digital society into an opportunity for everyone.
It just takes our will to establish the institutions required to make the
digital age a great success.
Are we ready
for this?
Labels:
Big data,
Dirk Helbing,
Economics 2.0,
Elinor Ostrom,
Innovation Accelerator,
Internet of Things,
Planetary Nervous System,
Reputation System,
self regulating systems,
socio economic systems
Monday, 28 April 2014
RISK OF WAR: WHAT IF THE "BALANCE OF THREAT" IS UNSTABLE?
by Dirk Helbing (ETH Zurich)
Like many of us, I was raised during the Cold War. Military threats were serious
and real, but a third world war has not happened – so far. This is generally
considered to be a success of the “balance of threat” (or “balance of terror”):
if one side were to attack the other, there would still be time to launch enough
intercontinental nuclear warheads to eradicate the attacker. With no side crazy
enough to risk eliminating itself, nobody would start such a war.
However,
what if this calculus is fundamentally flawed? There were at least three
instances within a 60-year period in which the world came dauntingly close to a
third world war. The Cuban missile crisis is just the best known, but there
were others that most of us did not hear about (see http://en.wikipedia.org/wiki/World_War_III).
Perhaps we survived the strategy of nuclear deterrence just
by chance?
The
worrisome misconception is that only shifts in relative power can destabilize a
“balance of threat”. This falsely assumes that balanced situations, called
equilibria, are inherently stable, which is actually often not the case. For
illustration, consider the simple experiment of a circular vehicle flow (see http://www.youtube.com/watch?v=Suugn-p5C1M):
although it is apparently not difficult to drive a car at constant
speed together with other cars, the equilibrium traffic flow
will break down sooner or later. Whenever the density on the
traffic circle exceeds a certain value, a so-called "phantom
traffic jam" will form without any particular reason – no accident, no obstacles,
nothing. The lesson here is that dynamical systems can easily get out of control
even if everyone has good information, the latest technology and best
intentions.
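The circular-flow experiment can be reproduced numerically with a minimal car-following simulation. The sketch below uses the classic optimal-velocity model of Bando et al., a standard textbook choice rather than necessarily the model behind the linked video, and the parameters are illustrative.

```python
import math

# Minimal optimal-velocity car-following model on a ring road: above a critical
# density, a tiny perturbation grows into a "phantom" stop-and-go wave even
# though every driver follows the same sensible rule. Parameters are illustrative.

N = 40            # number of cars
L = 1000.0        # circumference of the ring (m), i.e. a mean gap of 25 m
A = 2.0           # driver sensitivity (1/s)
DT = 0.1          # time step (s)
STEPS = 20_000    # 2000 simulated seconds

def optimal_velocity(gap: float) -> float:
    """Desired speed (m/s) as a function of the gap to the car ahead."""
    return 16.8 * (math.tanh(0.086 * (gap - 25.0)) + 0.913)

# start from the uniform (equilibrium) configuration plus a tiny perturbation
x = [i * L / N for i in range(N)]
x[0] += 0.5
v = [optimal_velocity(L / N)] * N

for _ in range(STEPS):
    gaps = [(x[(i + 1) % N] - x[i]) % L for i in range(N)]
    acc = [A * (optimal_velocity(g) - v[i]) for i, g in enumerate(gaps)]
    v = [max(0.0, v[i] + acc[i] * DT) for i in range(N)]
    x = [(x[i] + v[i] * DT) % L for i in range(N)]

stopped = sum(1 for speed in v if speed < 1.0)
print(f"uniform-flow speed: {optimal_velocity(L / N):.1f} m/s")
print(f"after {STEPS * DT:.0f} s: speeds range from {min(v):.1f} to {max(v):.1f} m/s, "
      f"{stopped} of {N} cars almost standing still")
```

Even though every car starts at the equilibrium speed and every driver follows the same rule, the small initial displacement grows into a stop-and-go wave once the density lies in the unstable range, which is exactly the point of the experiment.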
What if this
is similarly true for the balance of threat? What if this equilibrium is unstable?
Then, it could suddenly and unexpectedly break down. I would contend that, in
fact, a global-scale war may start for two fundamentally different reasons.
Consider, as a simple analogue from physics, a metal plate that is pushed from
two opposite sides. In the first situation, if either of the two sides holding
the plate becomes stronger than the other, the metal plate will move. Hence,
the spheres of influence will shift. The second possibility is that both sides
are pushing equally hard, but so hard that the metal plate
suddenly bends and eventually breaks.
The current news on the Ukrainian crisis does not make me confident that we are
faced with a stable equilibrium. Rather, we see the metal plate creaking.
A push from
one side triggers a counter-push from the other side. One sanction is answered
by something else and vice versa. In this escalating chain of events, everyone
is pushing harder and harder without any chance for either side to gain
the upper hand. In the end, the metal plate may bend or break. In practical
terms, the nerves of a political leader or army general, for example, may not be
infinitely strong. Furthermore, not all events are under their control. Thus, under
enormous pressure, things might keep escalating and suddenly get out of
control, even if nobody wants this to happen and everyone just wants to save face.
And this is still the most optimistic scenario, one in which all actors act
rationally, for which there is no guarantee.
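The dynamics of mutual pushing can be made precise with Richardson's classical arms-race equations, which are not taken from the post but capture exactly the difference between a stable and an unstable balance; the coefficients below are illustrative.

```python
# Richardson's arms-race model: each side's armament level x, y grows in
# response to the other's (reaction coefficients k, l), is damped by the cost
# of its own build-up (fatigue a, b) and is pushed by standing grievances
# (g, h). The balance is stable only if a * b > k * l. Coefficients illustrative.

def richardson(k, l, a, b, g=1.0, h=1.0, x=10.0, y=10.0, dt=0.01, years=30):
    for _ in range(int(years / dt)):
        x, y = (x + (k * y - a * x + g) * dt,
                y + (l * x - b * y + h) * dt)
    return round(x, 1), round(y, 1)

print("stable balance   (a*b > k*l):", richardson(k=0.5, l=0.5, a=1.0, b=1.0))
print("unstable balance (a*b < k*l):", richardson(k=1.5, l=1.5, a=1.0, b=1.0))
```

In the first regime both sides settle at a finite armament level; in the second, the very same symmetric rules produce open-ended escalation, the mathematical counterpart of the metal plate that bends until it breaks.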
In recent
years, evidence has accumulated that, in human history, many wars happened due
to either of the instabilities discussed above. Recent books about World
War I have revealed that it resulted from an eventual loss of control, which
was the outcome of a long chain of events – a domino effect that probably
resulted from the second kind of instability. Let us not make the same
mistake again[1] (see
Information Box below). Conflict in the Middle East has lasted for many decades, and
it has taught us one thing: winning every battle does not necessarily win a war. Similar
lessons had to be learned from the wars in Afghanistan and Iraq. My question is,
when do we finally start to change our thinking?
While
sanctioning may create social order in some cases, it may cause instability
and escalation in others. Punishing someone is only successful if the punished
party accepts the punishment, which often requires sharing the same values and
culture. If the punished party does not accept the punishment and is strong enough, it
will strike back. Hence, a cycle of escalation will ensue, in which each side
drives the escalation further while feeling in the right. In such a situation, deterrence
is clearly not an effective solution. In the Ukrainian crisis, we have
seen that sanctions did not have the desired effect, and there is actually
no reason to believe that ever more sanctions would. In the case of Iran, for
example, sanctions took years to show a substantial effect. In fact, trying to
weaken a strong adversary may not be wise at all. It may even lead to
desperate efforts to overcome a threatening situation, and this by itself
may quickly lead to further escalation and, possibly, to war.
Therefore, in
a situation where we are faced with a potentially unstable “balance of threat”,
we would be well advised to consider other strategies. It is not worth risking
a World War III just for the sake of maintaining a “balance of threat”. Given
the finite probability that such a balance may become unstable, we must find
ways for both opponents to get out of the current situation without losing
face. In this context, it is good to remember that there are always bigger
challenges than any side can solve by itself. A jointly faced threat, for
example, might unify opponents and give both sides a reason to put their weapons
down. It really does not matter whether this threat is called “global warming”,
“global pandemics”, “global economic crisis”, “global energy crisis,” or
"global war" – we are faced with enough such challenges, which can
only be successfully addressed by a united global effort.
If we need a
war, then we need a war on wars. It might be true that, in history, war accelerated
cultural exchange and progress, but we must recognize that cultural diversity was
always the true driver of innovation and cultural evolution, not war. In
times of a multi-polar world with global conflicts, cyberthreats and nuclear weapon
arsenals on the one hand, but global exchange of people, goods and ideas on the
other hand, it is dangerous to consider war to be the mother of civilization –
it could rather be the end of it.
In the past
decades, we have made much progress in developing collaborative structures that
allow for diversity. Falling back to a thinking that stresses a “balance of
threat” rather than constructive, cooperative interactions is a very dangerous
step in the wrong direction. In fact, creating a new security architecture to
master (global) diversity without deadly conflict is a worthwhile challenge for
us all.
INFORMATION BOX: War as a result
of systemic instability
We must
realize that many large-scale conflicts, revolutions, and wars must be
interpreted as the result of systemic instabilities. Interpreting them as the deeds of
historical figures personalizes these phenomena in a way that distracts from
their true, systemic nature. It is important to recognize that complex systems
such as our society or economy usually resist attempts to change them, namely
when they are close to a stable equilibrium. This is also known as Goodhart's
law, the principle of Le Chatelier, or the "illusion of control."
Only when a complex system is driven to a tipping point can individual factors
and randomness have a large impact on the path it takes. In other
words, instability is a precondition for individuals to have a historical
impact. For example, World War II was preceded by a financial crisis and
recession, which had destabilized the economic, social, and political system.
This eventually made it possible that an individual could become influential
enough to drive the world to the edge.
Unfortunately,
civilization is vulnerable, and large-scale wars may happen again. A typical
evolutionary path towards war looks as follows: The resource situation
deteriorates (e.g. because of a serious economic crisis). The fierce
competition for insufficient resources lets violence, crime, and corruption
rise, while solidarity and tolerance go down, so that society is fragmented
into groups. This causes further dissatisfaction and social turmoil. People get
frustrated with the system, calling for leadership and order. Political extremism
emerges, scapegoats are sought, and minorities are suppressed.
Socio-diversity gets lost, and the well-balanced social ecosystem collapses,
such that the resource situation (the apparent "carrying capacity")
deteriorates further. This destabilizes the situation further, such that an
external enemy is needed to stabilize the country. As a consequence,
nationalism rises, and war may seem to be the only 'solution'.
[1] At least since 2010, I have been worried that the
final outcome of the global financial and economic crisis might be political
instabilities, the rise of nationalism, and war. Let us stop this domino effect
before it is too late.
Labels:
Cascading,
Cold War,
Cyberthreats,
Dirk Helbing,
Domino Effect,
Global Crisis,
Networks,
Ukraine,
War,
World War 3
Tuesday, 11 March 2014
OVERCOMING "TRAGEDIES OF THE COMMONS" WITH A SELF-REGULATING, PARTICIPATORY MARKET SOCIETY
by Dirk Helbing
Our society is fundamentally changing. These days, almost nothing works without a computer chip. Processing power doubles every 18 months and will exceed the capabilities of human brains in about ten years from now. Some time ago, IBM's Deep Blue computer already beat the best chess player. Meanwhile, computers perform about 70 percent of all financial transactions, and IBM's Watson advises customers better than human telephone hotlines. Will computers and robots soon replace skilled labour? In many European countries, unemployment is reaching historic highs. The forthcoming economic and social impact of future information and communication technologies (ICT) will be huge - probably more significant than that caused by the steam engine, or by nano- or biotechnology.
The storage capacity for data is growing even faster than computational capacity. Soon we will generate more data in a single year than in the entire previous history of humankind. The "Internet of Things" will network trillions of sensors. Unimaginable amounts of data will be collected. Big Data is already being praised as the “oil of the 21st century”. What opportunities and risks does this create for our society, economy, and environment?
From "homo economicus" to "homo socialis", the networked decision-maker
Let's start by analysing the situation today. Probably the most widespread economic paradigm is that of "homo economicus", who merely tries to maximize personal benefits. It is often believed that such behaviour balances and coordinates the interests of individuals, as if controlled by an "invisible hand", and automatically maximizes social welfare.
If one believes in this neoclassical credo, then, economic problems arise mainly from the fact that there are too many regulations, or that some people do not adhere to the principle of self-regarding optimization. But why are there so many regulations, and why do many people have fairness preferences?
It was long believed that the merciless forces of evolution and natural selection could not have created man other than as a selfish being. However, recent scientific insights teach us something else. It has been demonstrated that the very same evolutionary forces that create "homo economicus" may also produce a different kind of people under very realistic circumstances: "homo socialis". "Homo socialis" tries to reach favourable outcomes as well, but considers the impact on others when taking decisions. As a consequence, "homo socialis" does not decide in an independent, but rather in an interdependent, "networked" fashion. This has surprising consequences: while "homo economicus" often runs into "tragedies of the commons", for example, the exploitation and pollution of the environment, overfishing and/or global warming, "homo socialis" can overcome such problems and achieve greater success through conditional cooperativeness.
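A minimal way to formalize the difference is to give "homo socialis" a utility function that also weights the partner's payoff. The payoff matrix and the weights in the sketch below are illustrative and not taken from the underlying papers.

```python
# Prisoner's dilemma payoffs (own, other) for the row player; T > R > P > S.
# The values and the other-regarding weight alpha are illustrative.
PAYOFF = {("C", "C"): (4, 4), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (2, 2)}

def utility(own: float, other: float, alpha: float) -> float:
    """alpha = 0 is 'homo economicus'; alpha > 0 also values the partner's payoff."""
    return (1 - alpha) * own + alpha * other

def best_response(opponent: str, alpha: float) -> str:
    scores = {act: utility(*PAYOFF[(act, opponent)], alpha) for act in ("C", "D")}
    return max(scores, key=scores.get)

for alpha in (0.0, 0.3, 0.6):
    print(f"alpha = {alpha:.1f}: best response to C is "
          f"{best_response('C', alpha)}, to D is {best_response('D', alpha)}")
```

With alpha = 0 defection is the only best response, which is how "homo economicus" walks into the tragedy of the commons; with a moderate other-regarding weight the best response becomes conditional cooperation, the behaviour described above; with a large weight it becomes unconditional.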
Reputation systems to master social dilemmas
The above tragedies of the commons result from social dilemmas. These are situations in which it would be good for everyone if everybody behaved cooperatively, but where there is also a temptation to take advantage of the cooperativeness of others. Under such conditions, cooperation is likely to erode. To avoid this, it is common to establish regulations and enforce compliance with them by means of monitoring and punishment strategies. However, over time, the costs of such strategies have created enormous public debts, and in some cases de facto state bankruptcy.
But there are also alternatives. The root problem is that we have created an institutional framework for "homo economicus", in which cooperation in social dilemma situations cannot thrive. However, it would also be possible to create institutions for "homo socialis", i.e. institutions which provide a suitable framework to support self-regulation. With such institutions for "homo socialis", the principle of Adam Smith's "invisible hand", i.e. the favourable self-organization of a complex (market) system to the benefit of everyone, would work much better than with institutions for "homo economicus".
How to envisage a self-regulating market system? The transfer of the principle of Swiss-style bottom-up democracy to the business world would probably be a good way to imagine this.
What would be suitable institutions for "homo socialis"? It is known that social dilemmas can be overcome by various social mechanisms, such as genetic favouritism, direct reciprocity ("you help me, I help you"), or punishment of uncooperative behaviour. Genetic favouritism tends to create ethnic conflicts between tribes, while direct reciprocity may promote corruption. The punishment of non-cooperative behaviour corresponds to our current approach, but this seems to have reached the limits of feasibility and affordability. Note, however, that there is a further approach, which transfers the success principle of social communities to the context of the "global village", namely reputation.
"Prosumers" and "qualified money"
Reputation systems on the internet are spreading very quickly. Nowadays, customers evaluate products and sellers, news, comments, politicians, institutions and companies. Reputation creates the opportunity to sell good quality at a higher price. Scientific studies of eBay and other electronic platforms show that customers prefer sellers who have good reputations, and that these sellers can charge more. When quality competition complements price competition, this can also create incentives to improve social and environmental production conditions, i.e. sustainability. Based on reputation principles, it would even be possible to establish a new kind of money, "qualified money" or "social money", which could overcome some of the problems of the current financial system.
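A common way such platforms turn individual ratings into a reputation score is a prior-weighted average, so that a seller with three perfect ratings does not automatically outrank one with hundreds of very good ones. The sketch below is a generic illustration with invented numbers, not the algorithm of eBay or any particular platform.

```python
# Prior-weighted ("Bayesian") average rating: sparse rating histories are pulled
# towards the platform-wide mean, so a handful of 5-star reviews cannot dominate.
# Prior strength and example data are illustrative.

PRIOR_MEAN = 4.0     # assumed platform-wide average rating
PRIOR_WEIGHT = 20    # the prior counts as this many "pseudo-ratings"

def reputation(ratings):
    total = PRIOR_MEAN * PRIOR_WEIGHT + sum(ratings)
    return total / (PRIOR_WEIGHT + len(ratings))

new_seller = [5.0, 5.0, 5.0]        # three perfect ratings
established = [4.8] * 200           # two hundred ratings averaging 4.8

print(f"new seller:         {reputation(new_seller):.2f}")
print(f"established seller: {reputation(established):.2f}")
```

Here the established seller scores about 4.7 while the newcomer scores about 4.1, even though the newcomer's raw average is higher, which is one simple way to make reputation harder to game.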
The Information Age will transform markets fundamentally. In the following, I will outline just some aspects of the now emerging "democratic, participatory market societies." Flexible self-organization will play a much bigger role than today. The emergence of "Prosumers" illustrates this. These are consumers who participate in the production of the products they buy. Instead of just selecting existing products from a catalogue or choosing the special features of a personalized car, consumers will be able to create new components of products, new designs, or even entirely new products. For example, they could use a 3D printer to produce their own cell phone cover and distribute it to others. Or they could come up with their own fashion and upload it to a company webpage to produce it for them, their family, friends, and colleagues, or indeed customers all over the world. People could also distribute their own books, their own music and their own movies. Or they could put a team together to construct more sophisticated products.
An "innovation ecosystem" of flexible "projects"
While the 20th century was an era of democratization of consumption, the 21st century can become an era of democratization of production. Next to today’s companies, flexible, participatory forms of production will emerge, which I term "projects". Creative minds will come together to realize joint project ideas. After completing a project, everyone will be looking for another project or two, and so on. Social media platforms such as Amazon Mechanical Turk will make it possible to bring ideas and skilled workers together. As a consequence, this will lead to a more direct participation of people in production processes. There will also be a much greater diversity of products, tailored to individual needs. Thus, while computers will increasingly replace our current types of routine and executive work, we will have an opportunity to replace these jobs by more creative activities. Production by large corporations will then be complemented by an innovation ecosystem made up of thousands of projects. The huge range of smartphone apps, which platforms such as app stores have enabled, gives just a first idea of the unlimited possibilities for new projects. Open Data and the Web2.0, Web3.0, etc. will further accelerate this development.
However, Europe has not yet found its place in this new innovation universe. Suitable institutions must first be established: the aforementioned reputation system is just one of them. Furthermore, open platforms are needed to enable participation and cooperation. In order to encourage an open exchange of information and the emergence of an innovation ecosystem, new incentive systems are required which reward creative contributions. For this, the relevance of innovations must be made measurable, and inventors must be compensated for the use of their ideas, e.g. with micropayments. Last but not least, we need a new science which helps us to understand and create the participatory market society. While current economics ("economics 1.0") is tailored to "homo economicus", the emerging "economics 2.0" must be tailored to "homo socialis", the networked decision-maker. These and further institutions should be part of a far-reaching strategy to create an "innovation accelerator".
An age of creativity and participation is ahead of us. We just have to use the opportunities that modern information and communication technologies offer. Reputation systems and social media can promote awareness of the risks and benefits of our available decision alternatives. In particular, they can help us to address challenges such as global warming and other problems in a more cooperative and sustainable way.
References
Dirk Helbing, "Economics 2.0: The natural step towards a self-regulating, participatory market society," Evolutionary and Institutional Economics Review 10(1), 3-41 (2013), http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2267697
Dirk Helbing, "A new kind of economy is born – Social decision-makers beat the 'homo economicus'," http://arxiv.org/abs/1309.7453
For related videos, search youtube.com for "economics 2.0", see the TEDx talk at http://www.youtube.com/watch?v=nsrRo9x0j80, and "Countering climate change with climate olympics" at http://www.youtube.com/watch?v=TaRghSuzBYM.
Labels:
Dirk Helbing,
Homo Economicus,
homo socialis,
prosumers,
reputation systems,
self regulating market,
Tragedy of the Commons,
Web 3.0
Wednesday, 11 December 2013
WITHOUT FREEDOM, WE ARE NO LONGER CITIZENS
Guest Post By Vincenzo Pavone [1] (IPP-CSIC) and Elvira Santiago (IPP-CSIC)
Big Brother 2.0?
No doubt, after the Snowden revelations and the recent
confrontation between Germany and the US, many citizens will be asking
themselves whether, and to what extent, their private communications are under
surveillance. These events have triggered intensive debate in the media and in
the political arenas of several European countries, not only about the extent
and purpose of the surveillance programs, but also about one of the
technologies being used to carry out such surveillance: Big Data.
Big data is high-volume, high-velocity and high-variety information assets that
demand cost-effective, innovative forms of information processing for enhanced
insight and decision-making.[2]
It must be acknowledged that there are different ways of
using Big Data, and that the application of this set of technologies does not
necessarily need to be oriented towards the surveillance of individual
citizens. For instance, data can be anonymised, which allows research to be
conducted and data to be extracted and analyzed without preserving any link
between the data and the citizens to whom these data are related.
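A minimal form of such de-linking is keyed pseudonymisation, in which direct identifiers are replaced by keyed hashes before analysis. The sketch below is a generic illustration of the idea (real anonymisation requires more, such as aggregation or k-anonymity, since pseudonymised records can often still be re-identified from the remaining fields).

```python
import hashlib
import hmac
import secrets

# Keyed pseudonymisation: replace direct identifiers with an HMAC under a secret
# key that is kept away from the analysts. This removes the visible link between
# records and named citizens, but it is only a first step: quasi-identifiers in
# the remaining fields may still allow re-identification.

PSEUDONYM_KEY = secrets.token_bytes(32)   # held by the data controller only

def pseudonymise(identifier: str) -> str:
    return hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

records = [
    {"name": "Alice Example", "calls_per_day": 12},
    {"name": "Bob Example",   "calls_per_day": 3},
]

research_set = [
    {"subject": pseudonymise(r["name"]), "calls_per_day": r["calls_per_day"]}
    for r in records
]
print(research_set)
```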
Is technology neutral?
However, this is not to say that Big Data is a
neutral technology. Any technology, regardless of how many uses it may have, is
never neutral. Technologies, or rather sociotechnical practices, can never be
understood as stand-alone artefacts, because they only work and
make sense within a network of socially constructed meanings, practices,
organizational protocols and tailor-made jargons. They come with their own
ethics, their own values. These values reflect the dominant political
priorities and ethical values of the societal stakeholders producing and using
such technologies. They may indeed change over time, but they will do so along
with the changes occurring in the society adopting, sustaining and implementing
such sociotechnical practices.
This is, in a few words, the basic assumption proposed by
what is known as a co-production approach in science and technology studies [3]:
science and social order are co-produced and they live in a mutually
constitutive relationship. Producing new scientific knowledge, as well as the
new technological tools stemming from such knowledge, produces new forms of
social order, and the opposite is also true: in order to produce new forms of
social order, new knowledge and technical tools are constantly fabricated.
Of cars and Big Data
An example, perhaps, may illustrate this better. If asked
about the cost of a specific car, we would normally answer by pointing at the
price of that car. But that is hardly the actual cost… or rather, that is the
cost only if seen from a specific point of view, one which externalizes all the
real costs of a car and narrows the question down to the transaction between
the car dealer and the potential customer. However, cars, as a technology, only
make sense as part of a sophisticated network of sociotechnical practices that
needs to be constantly maintained to ensure that cars can fully operate across
a given space. Cars need roads, police, laws, speed cameras, hospitals, doctors,
insurance companies, mechanics, gas stations, etc. Without these
sociotechnical infrastructures and practices, a car is simply a meaningless,
useless box with five seats and four wheels. All these infrastructures have a
cost and we accept that most of these costs are to be paid collectively by the
citizens, often via the public system of tax collection. And why do we do so?
Because we believe that cars are a socially legitimate way to move around.
If that is true for cars, it is even more so for Big Data.
The latter, as a sociotechnical practice of security, can only be understood in
a society that understands security as a function of surveillance. This is why,
in so far as security is concerned, Big Data could never be anonymised: it
would make no sense to hold millions of data points coming from harmless
citizens’ communications without names and surnames (and much more) attached to them.
The question, thus, is not whether we are spied on or not, but rather: how did we
come to pursue a concept of security in which many seem to believe that the
latter can only be increased through massive surveillance programs operated
through Big Data technologies?
A paradigm shift in the concept of security?
While in the 1990s, human security was associated with human
development, human rights and multilateralism, in the aftermath of the Twin
Towers attack it has evolved into a new, encompassing term that questions the
separation between internal and external security: religious fundamentalism,
ethnic conflicts and guerrilla-type wars are sources of threats that can well
come from inside the state borders [4]. As a result, internal and external
security agendas have eventually merged together [5]. Drug trafficking, undocumented migration, and
economic crimes cease to be issues of justice or social integration and,
loaded with urgency and exceptionality, become subject to a new security
approach emphasizing threat anticipation.
In a regime of threat anticipation, risk assessment and risk
management become the cornerstone of a comprehensive approach that is geared to
constant detection and prevention of the threats and risks. In this new
approach, security is expanded well beyond the criminal domain in order to cope
with any sort of suspicious behavior, information or action that could
potentially constitute a threat. The resulting securitization of people’s
movements and actions cannot be confined to migrants: under the new concept of
security, controlling and integrating all sorts of information about ordinary
citizens becomes all but inexorable.
The constitutive role of security technologies
In this approach to security, surveillance-oriented security
technologies – and the analysis of Big Data is one of them – play a constitutive
role: they are part of a new social order. As it has become impossible to
conceive of security without technology, we are permanently exposed to a
technological-fix approach to the problem of security: the focus constantly
shifts from the search for the (complex) variety of causes and factors that has
produced the ongoing transformation of security threats to a (simple) series of
technological remedies that could be conceived, developed and implemented to
keep these challenges under control.
Inevitably, the successful deployment of new security
technologies under this new holistic concept of security comes at a cost: a
restriction of civil liberties and individual privacy. Security and liberty get
framed as two interchangeable goods that could be traded against each other:
any increase in security requires an equivalent contraction of civil liberties.
Because higher security levels are intrinsically associated with an
ever-increasing implementation of surveillance technologies, this framing does not
consider the possibility of increasing security through either
non-surveillance-oriented technologies or non-technological actions and
interventions.
Without freedom, we are no longer citizens
This is how we got to the point where millions of citizens
around the world are spied on indiscriminately. However, once we have lost our
privacy, we can no longer act, meet, communicate, share or express ourselves
freely. Under surveillance, regardless of whether we have something to hide or
not, we cannot enjoy our basic civil and political rights. It is in this
context that we have to understand Big Data. They are key to the development
and implementation of a specific vision of what needs to be promoted as social
order. Needless to say, this specific view of a desirable social order is at
the same time promoting and fostering the development and implementation of Big
Data.
This is why developing such powerful technologies and then
hoping that a few parliamentary bills will prevent their full implementation is
wishful thinking. Rather, we need to learn to conceive of security in different
terms, as a shared responsibility and not only a function of repressive and
preventive surveillance. Social and economic factors such as social and
cultural integration, welfare support, the rule of law, fair redistribution of
resources and citizens’ participation are at least as important. We often hear
that without security, citizens cannot be free. Sure, this is true. However, without
freedom, no matter how safe, we are no longer citizens.
[1] Corresponding author: Vincenzo Pavone, Institute of Public
Goods and Policies (IPP), Consejo Superior Investigaciones Científicas, Madrid,
SPAIN. Vincenzo.pavone@csic.es
[2] Gartner Research. 2013. http://www.gartner.com/it-glossary/big-data/
[3] Jasanoff, S. (2004) States of Knowledge: The Co-Production of Science and Social Order. Psychology Press.
[4] Lutterbeck, D. (2005) "Blurring the dividing line: The
convergence of internal and external security in Western Europe," European
Security 14(2): 231-53.
[5] Bigo, D. (2000) "Internal and external securitisations
in Europe," International Relations Theory and European Integration:
Power, Security and Community: 154.
Tuesday, 15 October 2013
SCIENTISTS MUST SPEARHEAD ETHICAL USE OF BIG DATA
Guest post from ALBERT-LÁSZLÓ BARABÁSI
The recent revelation that the
National Security Agency collects the personal data of United States citizens,
allies and enemies alike has broken the traditional model governing the bond
between science and society.
Most breakthrough technologies
have dual uses. Think of atomic energy and the nuclear bomb or genetic engineering
and biological weapons. This tension never gives way. Our only hope of
overcoming it is to stop all research.
But that is unrealistic. Instead,
the model we scientists follow is simple: We need to be transparent about the
potential use and misuse of our trade. We publish our results, making them
accessible to everyone. And when we do see the potential for abuse, we speak
up, urging society to reach a consensus on how to keep the good but outlaw the
bad.
As the NSA secretly developed its
unparalleled surveillance program, relying on a mixture of tools rooted in
computer and social sciences, this model failed. Scientists whose work fueled
these advances failed to forcefully articulate the collateral dangers their
tools pose. And a political leadership, intoxicated by the power of these
tools, failed to keep their use within the strict limits of the Constitution.
It’s easy to see why this
happened. After all, the benefits of Big Data and the science behind it are
hard to overlook. Beyond the many digital applications that make our life
increasingly easy today, data science holds promise for emergency response and
for stopping the next virus from turning into a deadly pandemic. It also holds
the key to our personal health, since our activity patterns and disease history
are more predictive of our future disease than our genes.
For researchers involved in basic
science, like myself, Big Data is the Holy Grail: It promises to unearth the
mathematical laws that govern society at large. Motivated by this challenge, my
lab has spent much of the past decade studying the activity patterns of
millions of mobile phone consumers, relying on call patterns provided by mobile
phone companies. This data was identical to what NSA muscled away from
providers, except that ours was anonymized, processed to help research without
harming the participants. In a series of research papers published in the
journals Science and Nature, my team confirmed the promise of Big Data by
quantifying the predictability of our daily patterns, the threat digital
viruses pose to mobile phones and even the reaction people have when a bomb
goes off beside them.
We also learned that when it
comes to our behavior, we can’t use only two scales — one for good and the
other for bad. Rather, our activity patterns are remarkably diverse: For any
act labeled “unusual” or “anomalous,” such as calling people at odd hours or
visiting sensitive locations outside our predictable daily routine, we will
find millions of individuals who do just that as part of their normal routine.
Hence identifying terrorist intent is more difficult than finding a needle in a
haystack — it’s more like spotting a particular blade of hay.
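The "blade of hay" problem is, at bottom, a base-rate argument, and a short calculation makes it concrete; the detector accuracy and prevalence below are invented for illustration.

```python
# Base-rate arithmetic: even an extremely accurate behavioural detector flags
# mostly innocent people when the behaviour it looks for is genuinely rare.
# All numbers are illustrative.

population = 300_000_000      # people screened
true_targets = 3_000          # actual persons of interest (1 in 100,000)
sensitivity = 0.99            # P(flagged | target)
false_positive_rate = 0.001   # P(flagged | innocent), i.e. 99.9% specificity

flagged_targets = true_targets * sensitivity
flagged_innocents = (population - true_targets) * false_positive_rate

precision = flagged_targets / (flagged_targets + flagged_innocents)
print(f"people flagged:            {flagged_targets + flagged_innocents:,.0f}")
print(f"of which actual targets:   {flagged_targets:,.0f}")
print(f"chance a flag is correct:  {precision:.2%}")
```

Even with a detector that is right about individuals 99.9 percent of the time, roughly ninety-nine out of every hundred flags point at innocent people, simply because the behaviour being searched for is so rare.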
Let’s face it: Powered by the
right type of Big Data, data mining is a weapon. It can be just as harmful,
with long-term toxicity, as an atomic bomb. It poisons trust, straining
everything from human relations to political alliances and free trade. It may
target combatants, but it cannot succeed without sifting through billions of
data points scraped from innocent civilians. And when it is a weapon, it should
be treated like a weapon.
To repair the damage already
done, we researchers, with a keen understanding of the promise and the limits
of our trade, must work for a world that uses science in an ethical manner. We
can look at the three pillars of nuclear nonproliferation as a model for going
forward.
The good news is that the first
pillar, the act of nonproliferation itself, is less pertinent in this context:
Many of the technologies behind NSA’s spying are already in the public domain,
a legacy of the openness of the scientific enterprise. Yet the other two
pillars, disarmament and peaceful use, are just as important here as they were
for nuclear disarmament. We must inspect and limit the use of this new science
for military purposes and, to restore trust, we must promote the peaceful use
of these technologies.
We can achieve this only in
alliance with society at large, together amending universal human rights
with the right to data ownership and the right of safe passage.
Data ownership states that the
data pertaining to my activity, like my browsing pattern, shopping habits or
reading history, belongs to me, and only I control its use. Safe passage is the
expectation that the information I choose to transfer will reach its intended
beneficiaries without being tapped by countless electronic ears along the way.
The NSA, by indiscriminately tapping all communication pipelines, has degraded
both principles.
Science can counteract spying
overreach by developing tools and technologies that, by design, lock in these
principles. A good example of such a design is the Internet itself, built to be
an open system to which anyone could connect without vetting by a central
authority. It took decades for governments around the world to learn to censor
its openness.
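As a toy illustration of what "locking in by design" can mean for safe passage, authenticated encryption makes whatever is tapped in transit unreadable and tamper-evident without the key. The sketch below uses the third-party Python cryptography package and a shared symmetric key for brevity; real end-to-end systems layer public-key exchange on top so that the parties never have to send a key over the wire.

```python
from cryptography.fernet import Fernet  # requires the third-party "cryptography" package

# Toy illustration of "safe passage by design": the message is encrypted and
# authenticated before it leaves the sender, so taps along the route see only
# ciphertext. Real end-to-end systems add public-key exchange on top; this
# sketch keeps the key handling deliberately simple.

shared_key = Fernet.generate_key()        # agreed between sender and recipient
channel = Fernet(shared_key)

ciphertext = channel.encrypt(b"meet at the usual place, 18:00")
# ... the ciphertext travels across untrusted infrastructure ...
print(ciphertext[:40], b"...")            # what an eavesdropper would see
print(channel.decrypt(ciphertext))        # what the intended recipient reads
```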
This summer, while visiting my
hometown in Transylvania, I had the opportunity to talk with a neighbor who
spent years as a political prisoner. Once freed, for decades to come, he knew
that everything he uttered was listened to and recorded. He received
transcripts of his own communications after the fall of communism. They spanned
seven volumes. It was toxic and dehumanizing, a way of life that America has
repeatedly denounced and fought against.
So why are we beginning to spread
communism 2.0 around the world, a quarter-century after the Iron Curtain’s
collapse? This is effectively what NSA surveillance has become. If we
scientists stay silent, we all risk becoming digitally enslaved.
Posted with permission.
Albert-László Barabási is a physicist and network scientist at Northeastern University and Harvard Medical School, and the author of “Bursts: The Hidden Patterns Behind Everything We Do.”