
Artificial Intelligence and Data Ethics I

Published by Maria Filipe
October 29, 2019 @ 1:00 PM

This article was written by Markus Schmitz, CIO at the Bundesagentur für Arbeit (Federal Employment Agency, Germany) and expert in digitalization and the future of work, and is the first article of a series that discusses the challenges and ethical questions around Artificial Intelligence and Data Ethics.


From pure data protection to the ethical use of data

If just a fraction of the current news about digitalization, automation and artificial intelligence (AI) is true, the next few years, maybe even decades, will present a roller coaster of changes in society. According to several scientific studies [1][4][5], these changes will include the removal of much repetitive work and many simple decision-making tasks from our daily routines, leading to an increased demand for creative and social skills. This has the potential to make work and life more rewarding but will certainly change many jobs as we know them today.

Earlier this year, together with colleagues from IT leadership, I met with Roger Taylor, Chair at the Centre for Data Ethics and Innovation in the United Kingdom. We discussed his work on data ethics, which I believe is critical if we want to put the technological potential of AI to good use in society.

Inspired by this meeting, I would like to share a short series of posts, in which I discuss the impact of automation and AI on society and how we should respond, especially from an ethical standpoint.

 

AI technology is gaining ground rapidly

Artificial intelligence is the science and engineering of building intelligent machines – that is, computer systems that can perform tasks that normally require human intelligence. An example is the recent ability of computers to identify faces in photos and videos with accuracy equal to or better than that of a human.

While the term “artificial intelligence” originated in 1956 at Dartmouth College as the brainchild of a dozen professors, it has only recently become broadly applicable due to the convergence of three key elements: data, computing power, and advanced algorithms.

Although this technology grew out of academic research, it is gaining ground rapidly, continuously developing and improving, and breaking records in areas that were until recently thought impossible [2].

Unlike many other innovations, AI is a general-purpose technology: like the steam engine, electricity, and the Internet, it can be integrated into nearly every sector of the economy. It is already a driving force of change in mobility (autonomous cars being the most prominent example), warehouse operations (e.g., online retailers’ fulfillment centers), communication (e.g., chatbots), and many other areas. Given this extremely broad applicability, AI is expected to unlock about USD 13 trillion in value annually by 2030 – and we are only at a very early stage.

 

AI makes predictions cheaper

Artificial intelligence can have this huge impact because – in the words of Agrawal et al. [3] – AI ultimately makes predictions cheaper. As illustrated in Figure 1, predictions are at the heart (though not the only element) of making decisions under uncertainty. Being able to “afford” more predictions allows for more decisions, which increases the need for the remaining elements, such as judgement and action – and thus the demand for humans to perform these tasks.


Figure 1 – Anatomy of a task [3]

Furthermore, we can use this illustration to identify where ethical challenges can arise – and be mitigated – within the decision-making process. Inappropriate input or training data can, for example, lead to biased or unfair predictions. Beyond technical approaches for mitigating data risks, judgement and action provide further opportunities for mitigation if properly executed.
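To make this concrete, here is a deliberately simplified sketch – invented for illustration, not taken from the article or any real system – of how bias in historical training data can flow straight through into automated predictions. The “model” merely learns the past approval rate per applicant group and reproduces it:

```python
# Toy illustration: biased historical decisions produce biased automated
# predictions. All groups, numbers, and outcomes here are invented.
from collections import defaultdict

def train(history):
    """Learn the historical approval rate for each applicant group."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, outcome in history:   # outcome: 1 = approved, 0 = denied
        total[group] += 1
        approved[group] += outcome
    return {g: approved[g] / total[g] for g in total}

def predict(model, group, threshold=0.5):
    """Approve whenever the learned rate for the group clears the threshold."""
    return model[group] >= threshold

# Past decisions that (unfairly) favored group "A" over group "B":
history = [("A", 1)] * 8 + [("A", 0)] * 2 + [("B", 1)] * 3 + [("B", 0)] * 7

model = train(history)
print(predict(model, "A"))  # True  – learned rate 0.8
print(predict(model, "B"))  # False – learned rate 0.3
```

The system never sees any attribute that justifies the different treatment; it simply inherits the pattern in its training data – which is why scrutiny of input data, human judgement, and the final action all matter as mitigation points.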

 

There’s a lot we can gain by using AI. Is there anything to lose?

As an example of how AI can change our work and lives, consider automating the approval of a loan. Implementing an automated system has two distinct effects:

  1. Replacement: the employee who would normally discuss the credit situation with the customer is replaced by a machine.
  2. (Automated) decisions: a machine evaluates the credit rating of the bank customer.

Replacement is a typical effect of introducing new technologies, as it was with washing machines or the assembly line. The potential loss of jobs and the up- and reskilling this requires are being extensively discussed, which is good and necessary. What’s surprising is that while research shows that several tasks traditionally performed by humans will be taken over by AI systems [2], it doesn’t show a net loss of jobs. Instead, it shows a shift toward social, emotional, and higher-cognitive skills, such as creativity, critical thinking, and complex information processing [4][5].

When decisions are automated, we also need to discuss how all of us as citizens are affected by the application of AI beyond its impact on jobs. Especially when personally identifiable information (PII) is used and decisions affecting people are made, or greatly influenced, by such AI systems, ethical concerns such as bias and fairness arise.

Automatic face recognition, for example, has been applied at public events in China and the United States to identify wanted criminals in the audience. This raises the question: would you be in favor of using this kind of technology to enable state authorities to identify and arrest suspects more quickly, with the promise of greater public safety, or do you feel this oversteps the boundaries of personal privacy?

Or consider the idea of knowing potential health risks early on. Would you provide your health data to an institution (state-owned or private) that could compare it to millions of other cases and advise you very accurately and quickly about potential health risks based on your input? Or would you prefer to not provide this data, but rather speak confidentially with your doctors and nurses, relying only on their recollection of notable and most recent cases in warning you of possible risks or problems?

Imagine going into retirement and having to complete stacks of forms, reiterating what you already provided over the years in your tax returns. Wouldn’t it be great if the tax authorities and retirement fund agencies worked together to share data, so that you only had to address a few simple open points?

 

New technical capabilities create a privacy vs. benefit dilemma, especially regarding the use of PII

It is commonly agreed that AI has a lot of potential, but what exactly is the goal? And even with a clear goal in mind, there is still a discussion about whether and how an application should be implemented, considering the ethical principles involved.

Artificial intelligence is hailed as a general-purpose technology and a solution to many of the world’s existing challenges. What we know is that AI is here to stay. What needs further discussion is where we want to use AI and, when we do, how we want to use it.

The dilemma between the promises and perils of using data and AI is a core, non-technical topic at this stage in the emergence of AI, which is still in its infancy. This is the first in a series of posts in which I will discuss the challenges and ethical questions around this topic.


Stay tuned for the second article of this series!

The second article of this series will focus on the ethical principles for AI and the use of data that have been published by many institutions, such as the EU, the OECD, and the Future of Life Institute, as well as some larger tech companies.


This article was originally published by Markus Schmitz on LinkedIn.


References:

[1] Zika et al., “Digitalisierung bringt große Umwälzungen am Arbeitsmarkt” (“Digitalization is bringing major upheavals to the labor market”; in German), IAB, September 2018. https://www.iab.de/751/section.aspx/1549

[2] “The AI Index 2018 Annual Report”, AI Index Steering Committee, Human-Centered AI Initiative, Stanford University, Stanford, CA, December 2018. http://cdn.aiindex.org/2018/AI%20Index%202018%20Annual%20Report.pdf

[3] Agrawal, Gans, Goldfarb, “Prediction Machines: The Simple Economics of Artificial Intelligence”, Harvard Business Review Press, 2018

[4] Frey and Osborne, “The Future of Employment: How susceptible are jobs to computerisation?”, September 2013. https://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf

[5] “The Future of Jobs”, World Economic Forum, January 2016. https://www.weforum.org/reports/the-future-of-jobs

 
