popAI is an EU-funded project that aims to foster a constructive dialogue between European policymakers, Law Enforcement Agencies (LEAs), and ordinary citizens. ETAPAS and popAI have a lot in common and recently started to collaborate. In particular, we were interested in popAI's perspective on ethical AI adoption in the law enforcement sector, so we decided to interview the project coordinator, Dimitris Kyriazanos, and ask him a few questions. Dimitris Kyriazanos is an Associate Researcher and the Deputy Director (Civil Security) of the Integrated Systems Laboratory group of the Institute of Informatics and Telecommunications of the National Centre for Scientific Research “Demokritos” (NCSRD). Below you can read his inspiring answers to our questions. Enjoy the read!
One of the popAI project’s objectives is to facilitate advances in the implementation of human-centred, socially driven, ethical and secure-by-design AI in support of Law Enforcement. What is the planned course of action to achieve this ambitious goal?
PopAI started by investigating the theoretical background and mapping the AI functionalities in use within the civil security context across helpful classification categories such as application areas, maturity, data sources and AI algorithmic techniques.
This is linked with the ecosystem mapping activities and stakeholders across law enforcement, industry, research and civil society. This taxonomy serves as the basis for the reference framework to which popAI activities are linked, namely the empirical knowledge collection, the controversies mapping, and the legal, societal and ethical aspects, including organizational and human factors aspects. Through cross-disciplinary research and analysis of these findings, popAI will reach the objective of facilitating advances through the provision of foresight analysis and recommendations for and from: policymakers and LEAs, Civil Society and Industry. In particular, Industry includes technology providers, AI services and product designers, and security workers.
It is worth mentioning that popAI supports evidence-based and anticipatory policy making, adopting a proactive, future-driven approach that takes into account the innovation principle. This will facilitate the provision of recommendations that take into consideration the policy perspectives, thus reducing the gap between the policy level and the real implications for society.
Which tools are most likely to prove useful?
To begin with, one could argue that usefulness can be measured across multiple parameters that need further elaboration, ranging from technical and scientific performance (accuracy, false negative/positive rates and others) to End User acceptance, usability and societal acceptance. Concerning which kinds of tools we could see deployed in the near or mid-term, it is a bit early in the project timeline to answer this question. Furthermore, the discussion also needs insights from Cluster projects that focus more on research road-mapping, such as ALIGNER.
From our first popAI workshop it emerged that AI is naturally appreciated where it supports human operators, especially in repetitive tasks and the analysis of large amounts of data. Predictive analytics and predictive policing offered by AI are also appreciated by End Users in the context of prevention and efficient resource allocation, although these kinds of applications can be among the most controversial and require careful technical and ethical design as well as regulatory consideration.
Recently, there has been much discussion and many attempts to provide common approaches for the ethical assessment of AI, especially within the public sector. In this respect, do you think the law enforcement sector deserves a special approach? What should be the peculiarities of an AI impact assessment designed for this sector?
The law enforcement sector and the Civil Security context come with specific requirements in terms of awareness, social engagement, inclusiveness, trust, safeguarding fundamental rights and privacy, and even the perception of security and the sense of justice and freedom within the EU. In this respect, the Civil Security context does indeed need a more in-depth analysis of the specific theoretical and empirical knowledge within the identified controversies. In terms of “treatment” per se, law enforcement is still part of the public sector. We do not envision a special treatment or patch made specifically for law enforcement; instead, policies and recommendations are provided through a holistic approach so that the entire ecosystem will integrate AI tools and operate in an ethical, legal and socially acceptable way “by design”.
PopAI is a forward-looking project: its objectives include imagining how AI in security will develop over the next twenty years and identifying the trends, perspectives, practices and challenges of using AI in this sector. What are the main challenges you already envision for this task?
The main challenges can also be seen as opportunities, as long as you succeed in overcoming them. They include enhancing AI-related skills (e.g., technical skills, ethical AI) for Member States’ law enforcement, enhancing the capacity-building activities for trusted AI tools for LEAs via adequate financial support, and increasing awareness-creation mechanisms for civil society, aiming to increase trust in AI for the security domain.
What do you think is the perception of citizens when they are told that AI applications will be used in law enforcement? Do you think it is important to involve citizens in assessing the ethical and social implications of these applications?
It is not just important; it is necessary, and one of the key enablers of successful AI application and deployment, as these are directly connected to policy implications and impact. One of the core objectives of popAI is to engage civil society and raise awareness among the general public for a positive-sum approach to the use of AI in the security domain and the safeguarding of fundamental rights. There are already many ongoing controversies and debates within the EU, going up to the level of the European Parliament, which reflect citizens’ perceptions of potential, or current, uses of AI within law enforcement. Concerns are also connected to the AI systems identified as high-risk within the annex of the EU AI Act. I do not wish to speak on behalf of civil society at this point. Civil society and citizens’ perceptions and views are part of the empirical knowledge collection activities of popAI, and we will provide public reports with relevant findings and assessments in the coming months.
To conclude, given that both projects relate to the ethical adoption of technologies in the public sector, what kind of mutually beneficial interaction do you envisage between ETAPAS and popAI?
I am glad to say that, following a first joint meeting, we have already identified a clear area of collaboration between our two projects, with practical added value for both, starting from participation in forthcoming events: the ETAPAS RDT framework workshop and the popAI stakeholders meeting. Beyond the clustering benefits, the interaction will continue over the coming months with meaningful collaboration in terms of exchanging findings of common interest, research complementarity, policy impact and implications, and the production of a joint policy brief.
This fruitful conversation with Dimitris Kyriazanos highlighted the importance of advancing the implementation of human-centred, socially driven, ethical and secure-by-design AI in support of Law Enforcement. The interview also showed the great potential for synergy between the ETAPAS and popAI projects, as well as the willingness to keep collaborating for the advancement of the human-centred and ethical adoption of AI technologies in the public sector.