
Tuesday, 29 January 2019

A Platform for Informational Self-Determination

Informational self-determination is a human right - it follows from human dignity. In times of Big Data and AI, we have lost self-determination in the digital and the real world little by little. This must change as soon as possible.

Below is a slide on a proposed platform for informational self-determination, which would give control over our digital doubles back to us. With this, all personalized services and products would be possible, but companies would have to convince us to share some of our data with them for a specific purpose. The resulting competition for consumer trust would eventually promote a trustable digital society.  

The platform would also create a level playing field: not only big business, but also SMEs, spinoffs, NGOs, scientific institutions and civil society could work with this data treasure, provided the people concerned approve the data access (many people might actually grant such access by default). Overall, such a platform for informational self-determination would promote a thriving information ecosystem.

Government agencies and scientific institutions would be allowed to run statistical analyses. A benevolent super-intelligent system that helps good things succeed while not interfering with our free will would also be possible. Such a system should be designed for values such as human dignity, sustainability and fairness, as well as further constitutional and cultural values that support the development of creativity and human potential, with societal and global benefits in mind.

Data management would be done by means of a personalized AI system running on our own devices, i.e. digital assistants that learn our privacy preferences and the companies and institutions we trust or don’t trust. Our digital assistants would conveniently preconfigure personal data access, and we could always adapt it.
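
To make this concrete, here is a minimal sketch of how such a digital assistant might preconfigure and adapt data-access decisions. All names and the rule logic are illustrative assumptions, not a specification of the proposed platform:

```python
# A minimal, hypothetical sketch of a personal data-access assistant.
# All names (DataRequest, PrivacyAssistant, etc.) are illustrative,
# not part of any existing platform or API.
from dataclasses import dataclass, field

@dataclass
class DataRequest:
    requester: str      # e.g. "uni-stats.example"
    category: str       # e.g. "step_counts", "location"
    purpose: str        # declared purpose, e.g. "research"

@dataclass
class PrivacyAssistant:
    trusted: set = field(default_factory=set)      # requesters we trust
    allowed: set = field(default_factory=set)      # (category, purpose) defaults
    decisions: list = field(default_factory=list)  # audit trail

    def preconfigure(self, request: DataRequest) -> bool:
        """Suggest a default decision; the user can always override it."""
        grant = (request.requester in self.trusted
                 and (request.category, request.purpose) in self.allowed)
        self.decisions.append((request, grant))
        return grant

    def learn(self, request: DataRequest, user_granted: bool) -> None:
        """Adapt defaults from the user's explicit choices."""
        if user_granted:
            self.trusted.add(request.requester)
            self.allowed.add((request.category, request.purpose))
        else:
            self.trusted.discard(request.requester)

# Usage: the assistant starts conservative and learns from our choices.
assistant = PrivacyAssistant()
req = DataRequest("uni-stats.example", "step_counts", "research")
print(assistant.preconfigure(req))   # False: nothing is shared by default
assistant.learn(req, user_granted=True)
print(assistant.preconfigure(req))   # True: learned preference applied
```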

Over time, if implemented well, such an approach could establish a thriving, trustable digital age that empowers people, companies and governments alike, while making quick progress towards a sustainable and peaceful world.

Further reading: 

https://www.morgenpost.de/web-wissen/web-technik/article213868509/Facebook-Skandal-Experte-raet-zu-digitalem-Datenassistenten.htm

https://www.japantimes.co.jp/opinion/2018/04/30/commentary/world-commentary/stop-surveillance-capitalism/ 

http://futurict.blogspot.com/2018/04/nudging-tool-of-choice-to-steer.html

Sunday, 8 July 2018

On the Use of Big Data and AI for Health

Pitfalls of Big Data Analytics

High-precision medicine requires reliable decisions about whom best to treat in what way, when, and with which dose of which medicine, ideally even before a disease breaks out. This challenge, however, can only be met with large amounts of personal and/or group-specific data, which may be extremely sensitive, as such data may be used against the interests of the patients (e.g. in the interest of profit maximization). Consequently, there are plenty of technical, scientific, ethical and political challenges.

This situation makes it particularly important to protect personal data from misuse by means of cybersecurity, to ensure a professional use of the data, and to implement suitable measures to achieve a maximum level of human dignity (including informational self-determination).

In the past, empirical and experimental analyses have often suffered from a lack of data or from small samples. In many areas, including medical studies, this has changed or is about to change. Big Data therefore promises to overcome some common limitations of previous medical treatments, which were often unpersonalized, imprecise, ineffective and associated with many side effects.

In the early days of Big Data, people expected to have found a general-purpose tool, something like a holy grail. It was believed that, if one just had enough data, data quantity would turn into data quality; the truth would basically reveal itself. This idea is probably best expressed by a quote from Chris Anderson, who – back in 2008 – predicted “the end of theory” and wrote in Wired magazine: “The data deluge makes the scientific method obsolete.”

Along these lines, it was claimed that it would now be possible to predict, or at least to “nowcast,” the flu from Google searches, as reflected by the platform Google Flu Trends. The company 23andMe offered to identify ethnic origin, phenotype, and likely diseases. Angelina Jolie said “knowledge is power” and had her breasts removed, because her genetic test indicated a high chance that she would get breast cancer.

Later on, Google Flu Trends was shut down, doctors warned that Angelina Jolie should not be taken as an example, and 23andMe’s genetic test was temporarily taken off the market by the health authority. How could this happen? Google searches were no longer a reliable measurement instrument, as Google had started to manipulate people with suggestions (both through the autocomplete function and by means of personalized advertisements). Regarding attempts to predict diseases by means of genetic data, it was discovered that some people were doing very well, even though they had been predicted to be very ill. Moreover, predictions were sometimes quite sensitive to adding or subtracting data points, to the choice of the Big Data algorithm, or (in some cases) even to the hardware used for the analysis.

Generally, it was thought that the more data one had, the more accurate the implications of data analyses would be. However, the analyses often mistook correlations for causation, and statistical significance was frequently not checked – in many cases, it was not even clear what the appropriate null hypothesis was. So, in many cases, Big Data analytics was initially not compatible with established statistical and medical standards.

In fact, the more data one has, the higher the probability of finding patterns in the data just by chance. These patterns will often be neither meaningful nor significant. Spurious correlations are a well-known example of this problem: correlations that do not reflect a causal relationship, or where a third factor causes two effects to correlate, while neither effect influences the other. In such cases, increasing or decreasing the measured variables would not have the expected effect; it could even be counterproductive. Careful causality analysis (using concepts such as Granger causality) is therefore absolutely required.
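
The chance-pattern problem is easy to demonstrate. The following minimal sketch (with purely invented, random data) correlates thousands of random “biomarkers” with a random “outcome”; roughly five percent of them pass an uncorrected 5% significance threshold even though, by construction, no real relationship exists:

```python
# A minimal sketch of the multiple-comparisons problem: with enough
# variables, some will correlate with an outcome purely by chance.
import numpy as np

rng = np.random.default_rng(42)
n_patients, n_biomarkers = 100, 2000

# Purely random "biomarkers" and a purely random "outcome":
# by construction, no biomarker has any causal effect.
X = rng.standard_normal((n_patients, n_biomarkers))
outcome = rng.standard_normal(n_patients)

# Correlation of each biomarker with the outcome.
r = np.array([np.corrcoef(X[:, j], outcome)[0, 1]
              for j in range(n_biomarkers)])

# |r| > 0.2 corresponds to roughly p < 0.05 for n = 100.
print(f"strongest spurious correlation: r = {np.abs(r).max():.2f}")
print(f"biomarkers with |r| > 0.2: {(np.abs(r) > 0.2).sum()} "
      f"(roughly 5% of {n_biomarkers} expected by chance alone)")
```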

Another problem concerns undesirable discrimination. Suppose a health insurer wants to incentivize certain kinds of “healthy” diets – by reducing tariffs for people who eat more salad and less meat, for example. As a side effect, it would then be likely that men pay different tariffs from women, and that Christians, Jews, and Muslims on average pay different tariffs as well, just because of their different religious and cultural traditions. Such effects are considered discriminatory and need to be avoided. If one furthermore wants to avoid discrimination based on age, sexual orientation and other features that should not be discriminated against, Big Data analytics becomes a quite sophisticated challenge.
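
A toy simulation (all numbers invented) illustrates the mechanism: a rebate keyed only to diet still produces systematically different average tariffs for two groups, simply because diet correlates with group membership:

```python
# Toy simulation of indirect (proxy) discrimination: the tariff rule
# never looks at the group, yet average tariffs differ by group
# because the incentivized behavior (diet) correlates with it.
# All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
group = rng.integers(0, 2, n)            # two groups, e.g. by tradition

# Assumed: group 1 eats salad more often (probability 0.7 vs 0.4).
eats_salad = rng.random(n) < np.where(group == 1, 0.7, 0.4)

base_tariff = 100.0
tariff = base_tariff - 20.0 * eats_salad  # rebate keyed to diet only

for g in (0, 1):
    print(f"group {g}: average tariff {tariff[group == g].mean():.2f}")
# group 0: ~92, group 1: ~86 -- a systematic gap without any
# explicit use of the group attribute.
```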

Last but not least, even Big Data analytics will produce errors of the first and of the second kind, i.e. false alarms and alarms that don’t go off. This is a problem for many medical tests. Say a medical test costs x, a correct diagnosis creates a benefit of y, and a wrong one causes a damage of z. Moreover, assume that the test is correct with probability p and incorrect with probability (1-p). Then the overall utility of the test is u = -x + p*y - (1-p)*z, which might be neutral or even negative, depending on the impact of wrong diagnoses. For example, false alarms are an issue in screening for many kinds of cancer, which is why it is sometimes advised not to test the entire population.
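
A quick numerical check of this utility formula (with invented cost figures) shows how easily u turns negative when wrong diagnoses are costly:

```python
# Expected utility of a medical test, u = -x + p*y - (1-p)*z,
# with x = cost of the test, y = benefit of a correct diagnosis,
# z = damage caused by a wrong one, p = probability of being correct.
def test_utility(x: float, y: float, z: float, p: float) -> float:
    return -x + p * y - (1 - p) * z

# Invented numbers, for illustration only:
print(test_utility(x=50, y=1000, z=200, p=0.95))   # 890.0: clearly worth it
print(test_utility(x=50, y=1000, z=5000, p=0.80))  # -250.0: net harm
```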

In conclusion, the scientific method is absolutely indispensable to make sense of Big Data, i.e. to refine raw data into reliable information and useful knowledge. Hence, Big Data is not the end of theory, but rather the beginning.

A good illustration is flu prediction. When the spatio-temporal spreading of the flu is studied, one often finds a wide scattering of the data and a low predictive power. This is because the spreading of the flu is driven by air travel. However, it is possible to use data on air-travel passenger volumes to define an effective distance between cities, in which cities with high mutual passenger flows are located next to each other. In this effective-distance representation, the spreading pattern becomes circular and predictable. This approach makes it possible to identify the likely city in which a new disease emerged and to forecast the likely order in which cities will be hit by the flu. Hence, it becomes possible to take proactive measures to fight the disease more effectively.
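
A minimal sketch of the effective-distance idea follows. The text does not spell out a formula, so the definition d(m|n) = 1 - ln P(m|n), with P(m|n) the fraction of passengers leaving city n that travel to city m, is taken here as one plausible concretization from the effective-distance literature; the flow numbers are invented:

```python
# Minimal sketch of "effective distance" from passenger flows, using
# the assumed definition d(m|n) = 1 - ln P(m|n). Strongly connected
# cities end up "effectively close", making spreading predictable.
import heapq
import numpy as np

# Invented passenger volumes between four cities (row -> column).
cities = ["A", "B", "C", "D"]
F = np.array([[0, 900, 100, 0],
              [900, 0, 50, 50],
              [100, 50, 0, 10],
              [0, 50, 10, 0]], dtype=float)

P = F / F.sum(axis=1, keepdims=True)      # transition probabilities

def effective_distance(src: int) -> list:
    """Shortest-path effective distance from src (Dijkstra)."""
    dist = [np.inf] * len(cities)
    dist[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, n = heapq.heappop(heap)
        if d > dist[n]:
            continue
        for m in range(len(cities)):
            if P[n, m] > 0:
                nd = d + 1.0 - np.log(P[n, m])  # d(m|n) = 1 - ln P(m|n)
                if nd < dist[m]:
                    dist[m] = nd
                    heapq.heappush(heap, (nd, m))
    return dist

# From city A: an outbreak should reach B (high mutual flow) before
# C and D, in order of increasing effective distance.
print(dict(zip(cities, np.round(effective_distance(0), 2))))
```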

Pitfalls of Machine Learning and Artificial Intelligence

With the rise of machine learning methods, new hopes emerged that the previously mentioned problems could be overcome with Artificial Intelligence (AI). The expectation was that AI systems would sooner or later become superintelligent and capable of performing any task better than humans, at least any specialized task.

In fact, AI systems are now capable of performing many diagnoses more reliably than doctors, e.g. diagnoses of certain kinds of cancer. Such applications can certainly be of tremendous use.

However, AI systems will make errors, too, just perhaps with lower frequency. So, decisions or suggestions of AI systems must be critically questioned, particularly when a decision may have large-scale impact, i.e. when a single mistake can potentially create large damage. This is necessary also because of a serious weakness of most of today’s AI systems: they do not explain how they arrive at their conclusions. For example, they do not tell us how likely it is that a suggestion is based on a spurious correlation. In fact, if AI systems turn correlations into laws (as cybernetic control systems or autonomous systems may do), this could eliminate important freedoms of decision-making.
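
One model-agnostic way to probe what a black box relies on (not a method named in the text, just an illustration of making models more inspectable) is permutation importance: destroy one input feature at a time and watch how much the predictions degrade:

```python
# Sketch of permutation importance on an opaque model. Large accuracy
# drops flag the features the model actually depends on; near-zero
# drops flag inputs that may be irrelevant or spurious.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
X = rng.standard_normal((n, 3))                   # three input features
y = (X[:, 0] + 0.2 * X[:, 1] > 0).astype(int)     # feature 2 is irrelevant

def black_box(X):
    """Stand-in for an opaque model; we only see its predictions."""
    return (X[:, 0] + 0.2 * X[:, 1] > 0).astype(int)

baseline = (black_box(X) == y).mean()
for j in range(3):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])          # destroy feature j
    drop = baseline - (black_box(Xp) == y).mean()
    print(f"feature {j}: accuracy drop {drop:.3f}")
# Expected: a large drop for feature 0, a small one for feature 1,
# and essentially none for feature 2.
```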

Last but not least, it has been found that not only humans but also AI systems can be manipulated. Moreover, intelligent machines are not necessarily objective and fair: they may discriminate against people. For example, it has been shown that people of color and women are potential victims of such discrimination, in part because AI systems are typically trained with biased historical data. So, machine bias is a frequent, undesired side effect and a serious risk of machine learning, which must be tested for and properly counteracted.
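
As a sketch of what such a test might look like, the following toy audit (invented data and threshold) compares a model’s positive-decision rates across two groups, one common check for disparate impact:

```python
# Minimal sketch of one common bias test: compare a model's positive-
# decision rates across groups (demographic parity). The data and the
# decision threshold are invented; real audits combine several metrics.
import numpy as np

rng = np.random.default_rng(7)
n = 50_000
group = rng.integers(0, 2, n)

# Assumed: historical training data encoded a bias, so the model's
# score is shifted down for group 1 at equal qualification.
qualification = rng.standard_normal(n)
score = qualification - 0.3 * (group == 1)
decision = score > 0.5                    # e.g. "approve loan"

rates = [decision[group == g].mean() for g in (0, 1)]
print(f"positive rate, group 0: {rates[0]:.3f}")
print(f"positive rate, group 1: {rates[1]:.3f}")
print(f"disparate impact ratio: {rates[1] / rates[0]:.2f}")
# Ratios far below 1 (a common rule of thumb flags values < 0.8)
# indicate the model's decisions should be investigated and corrected.
```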

Tuesday, 13 February 2018

The Birth of a Digital God

By Dirk Helbing (ETH Zurich/TU Delft/Complexity Science Hub Vienna)

It is finally happening! At the annual meeting of the Swiss Civil Society Association on November 11, Professor Hans Ulrich Gumbrecht gave a memorable speech – a “mass,” as some listeners thought. It was not just about trying to create a super-intelligent system with consciousness. No, the goal was now to create a God-like being with superhuman knowledge and abilities to guide our human destiny. However, he continued, there is the risk that this God might turn against humanity, even though it is man-made. The statement that this should free us from Biblical sin was even more surprising.

Gumbrecht is not the first one to raise the subject of Artificial Intelligence (AI) as God. Just recently, the Guardian, under the title ”Deus Ex Machina,” reported that ex-Google collaborator Levandowski wanted to register Artificial Intelligence as a religion.[1] Shortly afterwards, Google announced its latest triumph: it had succeeded in building an AI system that learned to win the strategy game “Go” by itself – so well, in fact, that it could beat the world champion. At the same time, it was suggested that one had now found an approach that would sooner or later solve all the problems of humanity, including those that surpass our intellectual capacities.

Just a few days later, Spiegel Online wrote: ”God does not need any teachers.”[2] Already in 2013, I discussed the opportunities and risks of the information age in an article entitled “Google as God?”[3] Furthermore, in 2015, the Digital Manifesto asked: “Let us suppose there was a super-intelligent machine with God-like knowledge and superhuman abilities: would we follow its instructions?”[4]

Some readers found the question ridiculous at that time. Not anymore! After all, search engines and intelligence services know almost everything about us. We have been living in a Big Brother world for some time already. George Orwell's dystopian novel “1984,” written in 1948, was meant as a warning. But more and more often we get the feeling that the bestseller was actually used as an instruction manual.

Today’s data-driven world has two main principles: “Data is the new oil” and “Knowledge is power.” Little by little, and almost unnoticed, this has created a fundamentally new society. There is a new currency, “data,” which replaces classical money. There is a new economic system, the “attention economy,” in which our attention is auctioned off in split seconds. In addition, the companies of “surveillance capitalism” are measuring our behavior, our personality and our lives in ever more detail. In times of free services, we have become a product ourselves. Last but not least, the principle “code is law” has established a new legal system, which bypasses our parliament.

Are we in danger of losing our liberties, human rights and participation step by step, almost imperceptibly? Are we giving up on things that are important to us, just because we fear terrorism, climate change, and cybercrime? Are self-determined citizens in danger of being turned into remotely controlled subjects?

In fact, this isn’t just fantasy! China is already testing a Citizen Score,[5] i.e. every citizen is rated and has a certain number of points. Minus points punish those who do not pay back their loan immediately, cross the street during a red light, have the “wrong” friends or neighbors, or read critical news. The Citizen Score then determines job opportunities, loan conditions, access to services, and mobility restrictions. Great Britain seems to go even a step further. It monitors its citizens, including the videos they watch and the music they listen to. The system is called “Karma Police.”[6] So, will it punish thought crimes, you may ask? Or is “Karma Police” a kind of “Judgment Day” waiting to come down on us any time?

Do we have to accept this? Computers make better decisions, it is often said. In fact, computers have been the better chess players for years. In many areas they are better workers. They don’t get tired, do not complain, do not go on vacation, and do not have to pay taxes and contributions to social security. Soon they will be better drivers. They diagnose cancer better than physicians and answer questions better than people – at least those questions that already have an answer.

When will robots become our judge and hangman? When will they start to “fix the overpopulation problem”? (Autonomous killer robots with face recognition probably exist already or could at least exist soon – see the recent movies on slaughterbots and robot swarms.[7]) When will robots replace us? Not just our work… A newspaper article recently suggested that the descendants of humans will be machines.[8] In other words, humanity will be replaced by robots. Is this really our human destiny? Should we build a future for robots or for humans? Isn’t it time to wake up from the transhumanist dream?[9]

Back to the initial question: Is Google creating a digital God? With its Loon project, the company at least tries to be omnipresent. With its search engine, language assistants and measurement sensors in our rooms, Google wants to be omniscient. While the company is not yet omnipotent, it already answers 95 percent of our questions, and with personalized information, Google is increasingly steering our thinking and actions. Furthermore, the Calico project is trying to make people immortal. In an overpopulated world, would Google then be the judge over life and death?

In any case, someone recently suggested that an AI God would soon write a new Bible.[10] So would he (or she) set the rules we would have to live by? Will we soon have to worship an AI algorithm and submit ourselves to it? No question, some already seem to dream of a digital God who will guide our human destiny. What for some is the invention of God through human ingenuity, however, must be the ultimate blasphemy for Christians – in some sense the rise of the Antichrist.

Whatever one may think about all this, the phrase “knowledge is power” has certainly gone to some people’s heads. Google, IBM and Facebook are said to be working on a new operating system for society.[11] Democracy is defamed as outdated technology.[12] They want to engineer paradise on Earth – a smarter planet where everything will be automated. So far, however, the plan has not really worked out.[13] The world’s cities with the highest quality of life are located everywhere but in the leading IT nations. And even in Silicon Valley, the heart of the digital revolution, and other IT hotspots, experts are starting to worry…

Elon Musk, for example, fears that Artificial Intelligence could become the greatest threat to humanity. Even Bill Gates had to admit that he is in the camp of those who are worried about superintelligence. The famous physicist Stephen Hawking warned that humans would not be able to compete with the development of Artificial Intelligence. Apple co-founder Steve Wozniak agreed: “Computers are going to take over from humans, no question,” he said, but: “Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on? I don’t know…”[14] Jürgen Schmidhuber, the German AI pioneer, believes he knows: from a robot’s perspective, we will be like cats.[15]

Of course, the worry that technology could turn against us is an old one. Besides George Orwell’s “1984” and “Animal Farm,” Aldous Huxley’s “Brave New World” warned us of the danger of rising totalitarianism. Suddenly people also remember “The Machine Stops,” written by Edward Morgan Forster in 1909 (!). More recent books are Dave Eggers’s “The Circle,” “Homo Deus” by Yuval Noah Harari and Joel Cachelin’s “Internet God.” If you like science fiction, you might love “QualityLand” by Marc-Uwe Kling or “iGod” by Willemijn Dicke.

A question that not only science-fiction lovers should ask is: What future do we want to live in? Never before have we had a better chance to build a world of our liking. But for this we have to take the future into our own hands. It is high time to overcome our self-imposed digital immaturity. To free ourselves from the digital shackles, digital literacy and enlightenment are needed. So far, we have been living in a market-conform democracy, where the markets are driven by technology. Instead, we should build an economy that serves the goals of people and society. Technology should be a means of achieving this. This requires a fundamental redesign of our monetary, financial and economic system based on the principle of value-sensitive design. In “The Globalist,” I have recently outlined how this could be done.[16] Maybe you have your own ideas of how to use Big Data and Artificial Intelligence. In any case, a better future is possible! Let’s demand this better future! Let’s co-create it! What are we waiting for?


[1] https://www.theguardian.com/technology/2017/sep/28/artificial-intelligence-god-anthony-levandowski
[2] http://www.spiegel.de/wissenschaft/technik/kuenstliche-intelligenz-gott-braucht-keine-lehrmeister-kolumne-a-1175130.html
[3] https://www.nzz.ch/google-als-gott-1.18049950
[4] http://www.spektrum.de/thema/das-digital-manifest/1375924, English translation: https://www.scientificamerican.com/article/will-democracy-survive-big-data-and-artificial-intelligence/
[5] https://www.economist.com/news/briefing/21711902-worrying-implications-its-social-credit-project-china-invents-digital-totalitarian
[6] https://theintercept.com/2015/09/25/gchq-radio-porn-spies-track-web-users-online-identities/
[7] https://www.youtube.com/watch?v=9CO6M2HsoIA, https://www.youtube.com/watch?v=CGAk5gRD-t0
[8] https://www.nzz.ch/feuilleton/unsere-nachfahren-werden-maschinen-sein-ld.1322780
[9] https://www.nzz.ch/meinung/kommentare/die-gefaehrliche-utopie-der-selbstoptimierung-wider-den-transhumanismus-ld.1301315, http://privacysurgeon.org/blog/wp-content/uploads/2017/07/Human-manifesto_26_short-1.pdf
[10] https://venturebeat.com/2017/10/02/an-ai-god-will-emerge-by-2042-and-write-its-own-bible-will-you-worship-it/
[11] http://www.faz.net/aktuell/feuilleton/medien/google-gruendet-in-den-usa-government-innovaton-lab-13852715.html, https://www.pcworld.com/article/3031137/forget-trump-and-clinton-ibms-watson-is-running-for-president.html, https://www.theguardian.com/technology/2017/feb/17/facebook-ceo-mark-zuckerberg-rule-world-president, http://theconversation.com/if-facebook-ruled-the-world-mark-zuckerbergs-vision-of-a-digital-future-73459
[12] Hencken, Randolph. 2014. In: Mikrogesellschaften. Hat die Demokratie ausgedient? [Microsocieties: Has Democracy Become Obsolete?] Capriccio. Video, published May 15, 2014. Author: Joachim Gaertner. Munich: Bayerischer Rundfunk.
[13] https://www.wiltonpark.org.uk/wp-content/uploads/WP1449-Report.pdf
[14] https://www.computerworld.com/article/2901679/steve-wozniak-on-ai-will-we-be-pets-or-mere-ants-to-be-squashed-our-robot-overlords.html
[15] http://www.faz.net/aktuell/feuilleton/debatten/ueberwindung-des-menschen-durch-selbstlernende-maschinen-15309705.html
[16] https://www.theglobalist.com/author/dirk-helbing/