EZZEDDINE, Yasmine and BAYERL, Petra (2024). "Should everyone have access to AI?" Perspectives on ownership of AI tools for security. In: GONÇALVES, Carlos and ROUCO, José Carlos Dias (eds.), Proceedings of the International Conference on AI Research, ICAIR 2024. Reading: Academic Conferences International Ltd, 448-455. [Book Section]
Documents
PDF (Version query - RRS applies): Ezzeddine-ShouldEveryoneHaveAccessToAI(AM).pdf (355kB)
Restricted to Repository staff only. Available under License: All rights reserved.
Abstract
Given the widespread concerns about the integration of Artificial Intelligence (AI) tools into security and law enforcement, it is natural for digital governance to strive for greater inclusivity in both practice and design (Chohan and Hu, 2020). This inclusivity can manifest in several ways, such as advocating for legal frameworks and algorithmic governance (Schuilenburg and Peeters, 2020), allowing individuals choice, and addressing unintended consequences in extensive data management (Peeters and Widlak, 2018). An under-reflected aspect is the question of ownership, i.e., who should be able to possess and deploy AI tools for law enforcement purposes. Our interview findings from 111 participants across seven countries identified five citizen viewpoints on the ownership of security-related AI: (1) Police and police-governed agencies; (2) Citizens who disassociate themselves; (3) Entities other than the police; (4) All citizens, including themselves; and (5) No one or unsure. The five clusters represent disparate perspectives on who should be responsible for AI technologies, as well as related concerns about data ownership and expertise, and thus link into broader discussions on responsibility for security, i.e., what deserves protection, how and by whom. The findings contribute theoretically to digitalization, smart technology, social inclusion, and security studies. Additionally, the study seeks to influence policy by advocating for AI development that addresses citizen concerns, thereby mitigating the risks and the social and ethical implications associated with AI. Crucially, it aims to highlight citizens' concerns about the potential for malicious actors to exploit ownership of such powerful technology for harmful purposes.