This article was written by Markus Schmitz, CIO at the Bundesagentur für Arbeit (Federal Employment Agency, Germany) and an expert in digitalization and the future of work.
This is the third article in a series on the impact that artificial intelligence (AI) and simple analytics have on society and how we can approach these topics ethically. The first two articles introduced the risks and benefits of AI and discussed ethical principles and recommendations published by international institutions and tech companies.
This article is different. In September, my team and I had the privilege of meeting Roger Taylor, Chair of the UK's Centre for Data Ethics and Innovation (CDEI), and learning about the British perspective on the challenges that come with AI and data ethics. We discussed how public-sector institutions can create principles and an organizational setup that ensure the ethical use of AI, and we considered how various ideas from the UK could work in Germany. I'll share the insights from our conversation below.
There is tremendous potential for AI applications to help public organizations provide better services for their customers. For instance, analytics can help labor agencies identify the risk of long-term unemployment early and select the measures most likely to reduce that risk significantly. However, it is essential to ensure that AI applications are deployed ethically.
The key objective of the CDEI is to establish principles and structures that “enable regulators to regulate” the design and deployment of AI applications. It advises the British government on appropriate processes for the design and rollout of algorithms, focusing on algorithmic bias and microtargeting. It postulates that the ability of data-driven systems to understand a bias in more detail creates an obligation to do so. The CDEI has done extensive work on AI and policing, on protecting vulnerable people (e.g., teenagers with anxiety or mental health issues), and on reducing misinformation in elections. Roger shared how a series of public discussion groups in the UK showed strong support for “protective targeting” measures, if applied ethically, e.g., identifying teenagers at risk of suicide based on their social media activity.
To help public organizations ensure the ethical deployment of AI, we identified three key questions: Which principles should guide the ethical use of AI? Who can ensure that an algorithm meets ethical standards? And which data should an algorithm be assessed with?
As the application of AI in government organizations is still growing and the use cases are highly dynamic, there are, as yet, no standard principles for its ethical use. Such principles can be defined operationally or at a very high level. The biggest question is how the benefit of deploying an algorithm compares to its potential invasiveness, and the answer depends on the concrete use case. Involving the public in this debate, as the CDEI did in its series of discussion groups, helps identify key questions and concerns and leads to more informed decisions.
One way to provide better government services with AI is to make AI directly available to customers, not only inside government agencies. For instance, job seekers can voluntarily use a tool to help them identify the most effective qualification measures and potential job opportunities.
The general question here is who can ensure that an algorithm meets ethical standards. We agreed that an independent body outside the organization deploying the algorithm is more likely to be free of that organization's preconceptions than a body within it. This independent body should test the algorithm for potential biases, decide whether it can be deployed or must be modified, and create transparency about how the algorithm works. A completely external organization, however, could struggle to understand the relevant aspects in depth, so the independent body could be composed of members of the supervisory board of a government agency and an independent research institute.
For example, the supervisory boards of German government agencies are typically composed of trade union representatives, members of employers' organizations, and government officials from different federal levels. Together with an independent research institute, representatives from these three groups could form an independent data ethics board for the agency to test algorithms, decide on their deployment, and create transparency about how they work.
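To make the testing step more concrete, here is a minimal sketch of the kind of bias check such a board might run. It assumes a binary decision algorithm and a known protected attribute; the four-fifths threshold and all data and names in the code are illustrative assumptions, not the CDEI's or any agency's actual method.

```python
# Minimal sketch of a bias check an independent review body might run.
# Assumes binary decisions (1 = positive outcome) and a known protected
# attribute; the data and the 0.8 threshold ("four-fifths rule") are
# illustrative assumptions only.

def selection_rates(decisions, groups):
    """Share of positive decisions per group."""
    rates = {}
    for g in set(groups):
        subset = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(subset) / len(subset)
    return rates

def disparate_impact(decisions, groups):
    """Ratio of the lowest to the highest group selection rate."""
    rates = selection_rates(decisions, groups)
    return min(rates.values()) / max(rates.values()), rates

# Toy example: 1 = algorithm recommends a support measure.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio, rates = disparate_impact(decisions, groups)
print(f"Selection rates: {rates}")
print(f"Disparate impact ratio: {ratio:.2f}"
      + (" - review required" if ratio < 0.8 else " - within threshold"))
```

In practice, a review body would examine many metrics and subgroups, but even this simple ratio makes a hidden imbalance visible and auditable.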
We also discussed which data set an algorithm should be assessed with. When determining the effectiveness of an algorithm, comprehensive data sets yield more meaningful results than restricted ones. However, if an algorithm is assessed by the organization deploying it, e.g., an individual police force, a comprehensive data set would mean giving that force unlimited access to a national police data set. This issue, too, could be mitigated by an independent body that can be given access to comprehensive data sets.
Roger also shared that a test sample for all kinds of algorithms based on financial-services data is being developed at the University of Edinburgh. It will draw on the banking transactions and credit references of a sample of the population and can be applied in different circumstances.
My team and I found this exchange very inspiring and highly relevant for all government agencies: they not only can improve their services with AI but also have an ethical obligation to do so.
Stay tuned: the next article in this series will discuss how to put what we have learned into practice, with suggestions on how to set up a public-sector agency to use AI and data ethically.
This article was originally published by Markus Schmitz on LinkedIn.