SHERPA is an EU-funded project that analyses how AI and big data analytics impact ethics and human rights. ETAPAS and SHERPA have a lot in common and recently started to collaborate. We were particularly fascinated by the SHERPA project recommendations, so we interviewed the project coordinator, Bernd Stahl, to ask him about some of the recommendations and about a public sector use case that is highly relevant to the ETAPAS project objectives. Bernd Stahl is Professor of Critical Research in Technology and Director of the Centre for Computing and Social Responsibility at De Montfort University. His answers below offer valuable insights into ethical technology adoption.
- Referring to your recommendation number 2, “Develop baseline model for AI impact assessments”: what are the main limitations making it difficult to establish a shared approach to AI impact assessment? Do you think these limitations might be particularly relevant for the public sector? Based on your experience, does impact assessment differ much between AI and other technologies?
There are various challenges that one encounters when undertaking an impact assessment for artificial intelligence. Many of these are conceptual, such as what counts as artificial intelligence or how an impact is defined. The definition of AI is clearly important, as it determines the scope of the assessment: is the aim to look at the consequences of a particular AI technique, at a socio-technical system, or at the world at large and the consequences that the introduction of an AI approach may have? At the same time, the impacts of any action and intervention are potentially infinite, which raises the question of where to draw the boundary of an impact assessment. The SHERPA recommendation aims to clarify these questions and provide a baseline that individuals and organisations can use when undertaking impact assessments.
These questions are doubtless important for the public sector and probably differ from private sector concerns. The public sector arguably has a greater duty of care with regard to the possible consequences of its activities and will therefore likely consider a broader range of consequences during an impact assessment. This supports the SHERPA suggestion that a clearer understanding of the options for such assessments is needed, so that public sector organisations understand the choices they make when undertaking them.
- In your recommendation number 4, “Create training and education pathways that include ethics and human rights in AI”, you refer to technology-oriented curricula. Do you think specific educational paths on the ethics of technology should also be provided for workers in the public sector and for policymakers?
Short answer: yes. An informed use of AI technologies, and of existing and emerging digital technologies more broadly, requires a minimum understanding of these technologies. This includes a scientific and technical understanding as well as an appreciation of their social, ethical and human rights implications. This does not mean that everybody has to become a programmer, but everybody needs a reasonable level of digital literacy. This clearly applies to the public sector, and such educational activities should therefore be part of the training curriculum of public sector workers as well.
- With reference to the case study on the DrukteRadar Project in Amsterdam: what are the main lessons learned from the real cases, as compared with “theoretical research”? What strategies would allow public administrations to take the identified ethical implications into account when implementing emerging technologies?
I will respond here to the lessons concerning empirical and practical research, and address the details of the Amsterdam case in the next response. SHERPA undertook ten case studies in different contexts to understand the ethical and human rights concerns that organisations encounter and how they address them. This work informs all subsequent insights and the recommendations of the SHERPA project. We found that references to AI, big data and smart information systems point to a great variety of technologies that are implemented in very different ways, at different scales and for different purposes. Speaking of “AI” as if it were a unified, easily identified technology is therefore misleading. We suggested that a better way of considering these questions is to think in terms of innovation ecosystems, in which many technical drivers interact with existing and developing socio-technical systems. This complexity explains the breadth of our empirical observations, and it underpins our recommendation that an intelligent mix of interventions is needed if the ethics of AI is to be addressed appropriately.
- And with regard to the DrukteRadar Project itself: what are the main lessons learned? What strategies would allow public administrations to take the identified ethical implications into account when implementing emerging technologies?
The Amsterdam case study demonstrated this particular local authority's high level of awareness of its duty to ensure that the technology it uses is fit for purpose and acceptable to citizens. Local municipalities often have good links with their citizens and the local understanding required to identify areas where novel technologies can help solve problems. The Amsterdam case (https://www.project-sherpa.eu/885-2/) shows, probably not surprisingly, that a large and complex project like the DrukteRadar project we investigated raises numerous concerns. The project makes use of a range of data sources to anticipate and prevent overcrowding in Amsterdam, which matters both for a positive tourist experience and for the quality of life of the local population. In our case study we found numerous potential ethical issues, such as lack of access to data, the accuracy and availability of data, ownership, possible technological lock-in, privacy and security. Amsterdam works extensively with stakeholders to find ways of addressing these issues.
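To make the data-fusion idea concrete, here is a minimal sketch of how heterogeneous signals might be combined into a single crowdedness indicator. It is purely illustrative: the feed names, weights and the simple weighted-average model are our own assumptions for the sake of the example, not the actual DrukteRadar implementation.

```python
from dataclasses import dataclass

# Hypothetical feed readings: the real DrukteRadar sources and model
# are not described in this interview, so everything here is illustrative.
@dataclass
class FeedReading:
    source: str   # e.g. "parking-occupancy" (hypothetical name)
    value: float  # busyness signal normalised to 0..1
    weight: float # relative trust placed in this source

def crowdedness_index(readings: list[FeedReading]) -> float:
    """Combine normalised busyness signals into one weighted score."""
    total_weight = sum(r.weight for r in readings)
    if total_weight == 0:
        raise ValueError("no usable readings")
    return sum(r.value * r.weight for r in readings) / total_weight

if __name__ == "__main__":
    readings = [
        FeedReading("parking-occupancy", 0.8, 1.0),
        FeedReading("public-transport-checkins", 0.6, 0.5),
        FeedReading("event-calendar", 0.9, 0.7),
    ]
    print(f"crowdedness: {crowdedness_index(readings):.2f}")  # 0.79
```

Even this toy version surfaces some of the ethical issues mentioned above: a missing or inaccurate feed skews the score, and the choice of weights encodes assumptions about whose data counts.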
While this case, like any case, is to a certain extent unique, it shows that a proactive public administration can benefit from access to diverse data sources to find AI-related solutions to pressing problems. Early and proactive engagement with stakeholders seems to be key to the successful implementation of such projects, all the more so given the political and democratic nature of European public bodies. The SHERPA research shows that a proactive approach can help public administrations benefit from technical options. Ethical questions will arise, but they can be addressed and can be seen as a motivator for broad engagement with citizens.