This article was written by Markus Schmitz, CIO at the Bundesagentur für Arbeit (Federal Employment Agency, Germany) and an expert in digitalization and the future of work. It is the fourth article in a series discussing the challenges and ethical questions around Artificial Intelligence and Data Ethics.
We have all seen machine-generated recommendations for movies we might like to watch or products that might be useful to us. We more or less follow our navigation systems to circumvent traffic jams. Some of us speak to virtual assistants that were unthinkable just five years ago, and most of us will have heard of autonomous cars that have driven over 20,000 km on public roads. All this is due to artificial intelligence (AI).
How can we set up a public-sector agency to capture the benefits of AI without suffering from the risks?
This is the fourth article in a series on the impact that AI and simple analytics have on society and how we can approach these topics ethically. So far, we have introduced the risks and benefits of AI and discussed ethical principles and recommendations published by international institutions, academia, and tech companies. Most recently, we shared our insights from a discussion with Roger Taylor, Chair of the British Centre for Data Ethics and Innovation (CDEI). In this article, we will build on the insights from the previous articles and describe how to enable a public-sector agency to handle the ethical challenges of AI.
Studying a range of AI ethics principles, including those from the EU, the OECD, the Asilomar conference, and Floridi et al., we identified three key challenges:
The recent report of the Datenethik-Kommission is no exception here. Building on our discussion with Roger Taylor, we now suggest how to lay the foundation for an ethics-first approach to AI in a public-sector agency in five steps.
Any organization working on or with AI systems should start documenting its intentions and drafting a specific AI ethics code of conduct in line with the company’s policies and mission. Existing drafts on AI ethics can serve as a starting point. A comprehensive set of AI ethics principles, extending the widely accepted principles of bioethics, comprises beneficence, non-maleficence, autonomy, justice, and explicability.
As we discussed in previous articles, these principles are inherently in conflict, so there cannot be an automated way to evaluate and decide on the trade-offs involved in deploying an AI solution. Such trade-offs come with any technological solution.
A first step for any organization would therefore be to lay out its AI ethics code of conduct, taking into account its specific domain and challenges. Building on existing drafts from the EU and others ensures that many perspectives are already incorporated. Involving the public in this debate, as the CDEI did, helps identify key questions and concerns, leading to more informed decisions. However, an AI ethics code of conduct will not be sufficient in itself. It requires an active body that ensures compliance and constantly works on resolving the inherent conflicts case by case, with society as the beneficiary.
Any deployment of an AI solution must ultimately be inspected by people so that they can make a conscious and transparent decision regarding the trade-offs of a particular AI technology. Together with Roger Taylor, we discussed setting up a council in charge of, among other things, testing the algorithm for potential biases, deciding whether it can be deployed or must be modified, and creating transparency on how the algorithm works. Such a council has to meet four critical requirements:
The council should ideally include representatives of all relevant stakeholders with a broad range of backgrounds, such as legal experts, technical experts, ethicists, employer and consumer representatives, and workers. For the Bundesagentur für Arbeit, a potential composition of this council could consist of members of its supervisory board, such as representatives of the trade unions, the employers’ organizations, the federal government, the Länder, and the municipalities, together with the IAB as an independent research institute. That way, internal knowledge from the public-sector agency is available while the council itself remains independent. An external and independent research institute could provide the in-depth knowledge required to thoroughly test the algorithms against explicability requirements. As shown in Figure 1, the council should include a representation of the agency itself as well as of the agency’s client: the citizen.
The council should not merely evaluate finalized approaches, as this would likely lead to many rejected ideas and a lot of frustration on all sides. Instead, the council should co-create the approaches and steer the development so that the result is “ethical by design.” Another responsibility of the council would be to periodically reevaluate deployed AI solutions. It could also serve as the highest instance to which citizens can appeal against recommendations or actions of the algorithms.
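To make tasks such as bias testing and periodic reevaluation more concrete, the sketch below shows one way a council-mandated fairness check could look. It is a minimal illustration under stated assumptions, not the agency’s actual tooling: the binary predictions, the fictitious groups “A” and “B”, the `fairness_report` helper, and the 5% threshold are hypothetical, and demographic parity and equal opportunity are only two of many fairness metrics a council might choose.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Difference in positive-prediction rates between the groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equal_opportunity_difference(y_true, y_pred, group):
    """Difference in true-positive rates between the groups."""
    tprs = []
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)
        tprs.append(y_pred[mask].mean())
    return max(tprs) - min(tprs)

def fairness_report(y_true, y_pred, group, threshold=0.05):
    """Summarize the metrics and flag whether any exceeds a council-defined threshold."""
    report = {
        "demographic_parity_difference": demographic_parity_difference(y_pred, group),
        "equal_opportunity_difference": equal_opportunity_difference(y_true, y_pred, group),
    }
    report["flagged_for_review"] = any(v > threshold for v in report.values())
    return report

# Hypothetical example: binary recommendations for two fictitious groups.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
group = rng.choice(["A", "B"], size=1000)
y_pred = rng.integers(0, 2, size=1000)

print(fairness_report(y_true, y_pred, group))
```

In practice, the council would decide which protected attributes, metrics, and thresholds are relevant for the agency’s context, and the same report could be re-run on fresh data as part of the periodic reevaluation of deployed solutions.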
The first task of a council on AI ethics would be to elaborate the initial AI ethics code of conduct, tailor it to the organization and specify the ethical position of the council and therefore of the organization. Broadly speaking, an organization is limited by legal and technical constraints, with ethics being an additional factor. As Figure 2 illustrates, the ethical position includes the option for the council to recommend actions outside of the organization’s interest, which clearly underlines the independence of the council.
Such a code of conduct cannot be static at this point. Rather, it should be reviewed periodically. Regardless of its iterative nature, the initial code of conduct should clearly state the organization’s intentions and the boundaries it vows to respect. Publishing the resulting code of conduct ensures transparency and trust from society, including the possibility of criticism.
While discussions on conflicting principles are the main reason for creating a council, they should not be guided by the concise but abstract code of conduct alone. Instead, tools are required that support its implementation and help handle the complex questions around ethics in AI in a standardized way. Such tools can (and should) also be used as guidance by the organization when developing AI solutions.
The most elaborate, albeit industry-agnostic, set of recommendations at the time of writing is the seven requirements from the Ethics Guidelines for Trustworthy AI by the EU High-Level Expert Group on AI. While fully aware of the criticism they have received, we will use them as a starting point.
The checklist provided by the EU HLEG AI or by the independent DataEthics institute can be adapted to the agency’s needs and complemented by an AI ethics canvas, such as the Data Ethics Canvas from the Open Data Institute (shown in Figure 3) or the Ethics Canvas from the ADAPT Centre for Digital Content Technology.
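To illustrate how such a checklist could support a standardized review, here is a minimal sketch that structures an assessment around the seven HLEG requirements. The `AssessmentItem` and `EthicsAssessment` records, the example questions, and the “Job-matching prototype” are illustrative assumptions, not part of the official checklist or the agency’s process.

```python
from dataclasses import dataclass, field

# The seven requirements from the EU HLEG Ethics Guidelines for Trustworthy AI.
HLEG_REQUIREMENTS = [
    "Human agency and oversight",
    "Technical robustness and safety",
    "Privacy and data governance",
    "Transparency",
    "Diversity, non-discrimination and fairness",
    "Societal and environmental well-being",
    "Accountability",
]

@dataclass
class AssessmentItem:
    requirement: str        # one of HLEG_REQUIREMENTS
    question: str           # question adapted to the agency's context
    answer: str = ""        # evidence or justification recorded by the project team
    approved: bool = False  # set by the AI ethics council

@dataclass
class EthicsAssessment:
    solution_name: str
    items: list[AssessmentItem] = field(default_factory=list)

    def open_items(self):
        """Items the council has not yet approved."""
        return [i for i in self.items if not i.approved]

# Hypothetical usage for a fictitious solution.
assessment = EthicsAssessment(
    solution_name="Job-matching prototype",
    items=[
        AssessmentItem(
            requirement="Transparency",
            question="Can caseworkers see why a particular match was recommended?",
        ),
        AssessmentItem(
            requirement="Diversity, non-discrimination and fairness",
            question="Have match rates been compared across protected groups?",
        ),
    ],
)
print([i.question for i in assessment.open_items()])
```

A structured record like this keeps the council’s decisions traceable and could be published alongside the code of conduct to support transparency toward the public.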
First, the board needs to make a clear commitment to the AI ethics code of conduct and ensure that the whole organization is aware of these guidelines. Ultimately, however, all employees should be able to work with the code of conduct, allowing them not only to exercise critical judgement but also to identify opportunities to employ AI ethically for the benefit of our citizens.
Communication, education, and training play an important role, both to ensure that knowledge of the potential impact of AI systems is widespread and to make people aware that they can participate in shaping their development. This includes all stakeholders, from the designers and developers building the products, to the users, to any other groups that may be indirectly affected by an AI system. It is critical that, as AI systems perform more tasks on their own, the teams that design, develop, test, maintain, deploy, and procure these systems reflect the diversity of users and of society in general.
The German AI strategy (in German) states that AI should be introduced in the public sector to improve efficiency, quality, and security in the administration. Additionally, the recent report from the Datenethik-Kommission highlights the potential of AI in the public administration. Thus, it is not so much a question of whether AI should be used in the public sector, but how and where.
To ensure an ethical use of AI, each organization requires tailored AI ethics guidelines. To ensure that they are applied, an independent, diverse, and knowledgeable AI ethics council seems to be the best path forward.
I am aware that we will learn as we go; the first use cases should therefore be low-risk and allow us to develop a better understanding and to iteratively refine the process itself.
Last but not least, I believe the discussion on AI ethics should not be left to the for-profit industry. To actively lead the discussion, make informed decisions on how to make use of AI, and stick to our high ethical standards, the public sector has to be actively involved.
This article was originally published by Markus Schmitz on LinkedIn.