This article was written by Markus Schmitz, CIO at the Bundesagentur für Arbeit (Federal Employment Agency, Germany) and an expert in digitalization and the future of work. It is the second article in a series that discusses the challenges and ethical questions around Artificial Intelligence and Data Ethics.
Artificial intelligence (AI) and its benefits are here to stay. But the question remains whether certain AI applications can be ethically implemented.
This is the second article in a series on the impact that AI and simple analytics have on society and how we can approach these topics ethically. While the first article introduced the risks and benefits of AI, this article outlines ethical principles and recommendations published by international institutions and tech companies.
The ethical use of AI is becoming a more prominent topic in research and the tech industry, which has already experienced a few unfortunate incidents. Google’s and Amazon’s face-recognition technologies, for example, failed to reliably identify “minorities”, which in this case included anyone who was not male or who had a slightly darker shade of skin. Other high-profile examples of misuse include the Facebook–Cambridge Analytica scandal and Microsoft’s chatbot Tay, which ran without human oversight, started using racist speech within 24 hours, and was shut down twice within ten days.
It’s clear that actionable ethical guidelines for AI use and deployment are needed so we can benefit from AI while avoiding the risks it poses.
Various groups, such as the EU [1] and the OECD [2], have attempted to formulate principles to guide AI use, though not without criticism [7]. The EU draft details 7 principles and 24 recommendations and is complemented by various activities from industry, as shown in Figure 1.
Apple, Amazon, DeepMind, Google, Facebook, IBM, and Microsoft founded the Partnership on AI, a nonprofit organization with more than 90 partners that studies “ethics, fairness, and inclusivity; transparency, privacy, and interoperability; collaboration between people and AI systems; and the trustworthiness, reliability, and robustness of the technology.” Of these founders, only Google [4], IBM [5], and Microsoft [6] have published their own ethical principles for AI, and in some cases these are merely a shortlist of terms. Additionally, various academic groups have published AI principles, such as the Asilomar AI Principles from the Future of Life Institute, led by key academics in the field of AI [3]. At the time of writing, SAP and Telekom are the only two companies from the German stock index (DAX 30) with similar statements on AI ethics [12].
Analyzing the AI ethics principles of governmental bodies, a private company, and an academic group can yield insights into whether and how these groups approach AI ethics differently.
In this series, we focus on the ethical challenges surrounding “narrow AI” or “weak AI” solutions – AI systems focused on one specific, defined task. We intentionally exclude the discussion of general AI and artificial superintelligence. We acknowledge, however, that this is an important discussion; as there is no consensus on the upper limits of future AI capabilities, we should avoid making strong assumptions.
To compare the documents, we used the five key principles identified by Floridi et al. [11], which aim to prevent the negative consequences of overusing or misusing AI technologies. As Director of the Digital Ethics Lab at the Oxford Internet Institute, Floridi has influenced many ethics principles with his work. Figure 2 shows how five organizations cover Floridi’s five key principles.
China focuses less on autonomy and explicability, while Google highlights the non-maleficence of its AI applications, in line with its original code of conduct, “Don’t be evil.” The draft published by the EU and the Asilomar AI Principles cover Floridi’s five key principles to a similar extent; however, the EU version is far more detailed.
An initial observation is that, perhaps unsurprisingly, all principle sets agree that AI should be used for social good or the benefit of humanity, but what this entails is never clearly defined.
As seen above, the Beijing AI Principles are broadly in line with the other four, although rather brief and vague. Explicability and human autonomy are mentioned, but they are treated in far more detail in the principles published by the EU and the Future of Life Institute.
Another observation is that the EU and OECD recommend how things should be done now and in the near future. The comparatively lengthy EU draft even contains a six-page checklist. Additionally, the EU principles are the only ones that require reporting on the negative impact of AI systems – similar to how European banks must report their risks to a central regulatory body – which would certainly strengthen transparency and likely support adoption.
The Asilomar principles take a more long-term view and include elements that may not yet be applicable, such as the requirement to align AI’s goals and behaviors with human values. They are also unique in extending privacy to the right of every person to access, manage, and control the data they generate, whereas the others limit privacy to protecting data from malicious access.
Google’s principles, in contrast to the others, almost all begin with “We will …”, a phrasing that is – reasonably – absent from all other drafts, since Google is the only one of the five that actually builds AI systems and is hence able to act on such commitments. The company’s statement is also the only one that outlines which AI applications it will not pursue.
Despite the differences between the drafts, three common challenges emerge: the principles often conflict with one another, they are highly general and require tailoring to become actionable, and they have no enforcement mechanism.
As stated in the EU draft, tensions between the principles surface when analyzing specific AI applications. Benefiting society, for instance, is likely to conflict with individual privacy. Beneficence and autonomy conflict in predictive policing, which may prevent crime through surveillance but impacts individual liberty and privacy. Beneficence and explicability already conflict in existing systems that diagnose medical conditions more accurately than doctors, potentially saving lives but without providing a “full and satisfactory” explanation.
Given that AI is still new, we may be able to find ways of satisfying more of the conflicting principles without having to look for trade-offs. Where that is not possible, the only way to avoid underusing AI is to approach such ethical questions with trade-offs on a case-by-case basis. Different situations generate different challenges: AI-driven music recommendations do not raise the same ethical concerns as a system proposing critical medical treatments. Nevertheless, some principles cannot be compromised on, such as those concerning human dignity.
How can decisions be made based on an inherently conflicting set of principles?
As the principles are inherently in conflict, a single checklist will not lead to a clear answer for specific cases. However, they provide a good basis for more formal commitments to help navigate competing demands and considerations. For (sub)principles such as accountability, explicability, fairness, or privacy, technical approaches are becoming available, including toolkits like IBM’s AI Fairness 360 (AIF360) or Facebook’s Fairness Flow [10]. However, successful implementation of ethical AI will also require nontechnical methods like regulation, education, and certification.
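To make the idea of such toolkits concrete, the sketch below shows how a single fairness (sub)principle could be checked with AIF360’s group-fairness metrics. It is a minimal illustration under assumed inputs, not a recommended implementation: the column names (“hired”, “gender”), the toy data, and the choice of privileged group are invented purely for the example.

```python
# Minimal sketch: checking group fairness of a decision dataset with IBM's AIF360.
# The data, column names, and group encoding below are illustrative assumptions.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy decision data: "hired" = 1 is the favorable outcome; "gender" = 1 marks the
# (assumed) privileged group.
df = pd.DataFrame({
    "gender": [1, 1, 1, 0, 0, 0],
    "hired":  [1, 1, 0, 1, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["gender"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"gender": 1}],
    unprivileged_groups=[{"gender": 0}],
)

# Statistical parity difference: P(favorable | unprivileged) - P(favorable | privileged).
# Values close to 0 indicate similar favorable-outcome rates across groups.
print("Statistical parity difference:", metric.statistical_parity_difference())
print("Disparate impact ratio:", metric.disparate_impact())
```

A metric like this only operationalizes one narrow reading of “fairness”; which metric is appropriate, and what threshold counts as acceptable, remains exactly the kind of case-by-case judgment the principles leave open.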
While not domain specific, a pilot version of an assessment list for trustworthy AI is provided by the EU, along with potential methods to ensure trustworthy AI. The list is a generic, non-exhaustive questionnaire but is the most extensive assessment currently available. As such, it forms one possible basis for deriving more specific, action-guiding principles, applying them in concrete situations, and balancing them when they conflict. However, “Beyond developing a set of rules, ensuring trustworthy AI requires us to build and maintain an ethical culture and mindset through public debate, education, and practical learning” (Ethics Guidelines for Trustworthy AI; EU AI HLEG 2019).
How can existing but general recommendations be tailored to specific industries or sectors?
None of the ethics principles are legally binding. Self-imposed ethics guidelines can give the impression that a legal framework is not required, even though there is no transparency on how the industry actually follows them. We usually hear about ethics boards – if they exist at all – when they are set up or dissolved, but nothing about their inner workings [8]. Combined with the strong economic interest in applying AI, this means that companies taking ethics less seriously are likely to gain an economic advantage. The effect may be fueled further by the framing of AI research and deployment as a matter of fierce global competition rather than collaboration. In the reviewed set, the EU’s requirement on the “minimization and reporting of negative impacts”, which at least calls for protecting whistle-blowers, NGOs, and other entities, is the only one that addresses this issue.
How can the application of (potentially self-given) ethical principles be ensured?
AI has huge potential, but it also carries risks. One risk is not using AI at all and hence missing out on its many opportunities; the two other main risks are its overuse and its misuse. Handling these risks requires balancing the principles. Complementing purely technical solutions, introducing actionable guidelines to navigate competing demands, and forming ethics boards that follow a well-defined set of principles would help establish ethical AI solutions. Resolving trade-offs will also require extensive public engagement to give voice to a wide range of stakeholders and their interests.
Ultimately, ethical principles will require some form of enforcement; otherwise, ethics might just “… play the role of a bicycle brake on an intercontinental airplane”, as the late German sociologist Ulrich Beck put it back in 1988.
Stay tuned for the third article of this series!
The third article of this series will discuss why these principles are relevant for the public sector and outline potential first steps to create an operational framework that harnesses the benefits of AI while adequately controlling the risks.
This article was originally published by Markus Schmitz on LinkedIn.
References:
[1] EU Requirements of Trustworthy AI - https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines/1
[2] OECD Principles on Artificial Intelligence - https://www.oecd.org/science/forty-two-countries-adopt-new-oecd-principles-on-artificial-intelligence.htm
[3] Asilomar AI Principles - https://futureoflife.org/ai-principles/
[4] Artificial Intelligence at Google: Our Principles - https://ai.google/principles/
[5] IBM Trusted AI for Business - https://www.ibm.com/watson/ai-ethics/
[6] Microsoft AI Principles - https://www.microsoft.com/en-us/ai/our-approach-to-ai
[7] Ethics Washing Made in Europe - https://www.tagesspiegel.de/politik/eu-guidelines-ethics-washing-made-in-europe/24195496.html
[8] An External Advisory Council to Help Advance the Responsible Development of AI - https://www.blog.google/technology/ai/external-advisory-council-help-advance-responsible-development-ai/
[9] Beijing AI Principles - https://www.baai.ac.cn/blog/beijing-ai-principles
[10] Facebook and the Technical University of Munich Announce New Independent TUM Institute for Ethics in Artificial Intelligence - https://newsroom.fb.com/news/2019/01/tum-institute-for-ethics-in-ai/
[11] AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations – December 2018 – Luciano Floridi et al. https://link.springer.com/article/10.1007/s11023-018-9482-5
[12] AI Ethics Guidelines Global Inventory – AlgorithmWatch https://algorithmwatch.org/en/project/ai-ethics-guidelines-global-inventory/