Study Highlights Complicated Relationship Between AI and Law Enforcement
A recent study examining the relationship between artificial intelligence (AI) and law enforcement underscores two needs: for law enforcement agencies to be involved in developing public policies on AI, such as regulations governing autonomous vehicles, and for law enforcement officers to better understand the limitations and ethical challenges of AI technologies.
“Law enforcement agencies have a crucial role to play in implementing public policies related to AI technologies,” says Veljko Dubljević, corresponding author of the study and an associate professor of science, technology and society at North Carolina State University.
“For example, officers will need to know how to proceed if they pull over a vehicle being driven autonomously for a traffic violation. For that matter, they will need to know how to pull over a vehicle being driven autonomously. Because of their role in maintaining public order, it’s important for law enforcement to have a seat at the table in crafting these policies.”
“In addition, there are a number of AI-powered technologies that are already in use by law enforcement agencies that are designed to help them prevent and respond to crime,” says Ronald Dempsey, first author of the study and a former graduate student at NC State. “These range from facial recognition technologies to technologies designed to detect gunshots and notify relevant law enforcement agencies.
“However, our study suggests that many officers do not understand how these technologies work, which makes it difficult or impossible for them to appreciate the limitations and ethical risks of those technologies. And that can pose significant problems for both law enforcement and the public.”
For this study, the researchers conducted in-depth interviews with 20 law enforcement professionals who work in North Carolina. The interviews addressed a range of issues, including the values and qualities that the study participants felt were critical for law enforcement officers.
While no single quality was named by a majority of study participants, several characteristics came up repeatedly as important for a law enforcement professional, with integrity, honesty and empathy cited most often.
“Understanding what law enforcement deems to be desirable characteristics in officers is valuable, because these characteristics can inform the development of responsible design guidelines for AI technologies that law enforcement will use,” Dempsey says.
“Design guidelines can be used to inform AI decision-making, and it is easier for end users to work with AI tools if the values guiding AI decisions are consistent – or at least not in conflict – with the values of the end users,” says Dubljević.
The researchers also asked study participants about their views on AI in general, as well as existing and emerging AI technologies.
“We found that study participants were not familiar with AI, or with the limitations of AI technologies,” says Jim Brunet, co-author of the study and director of NC State’s Public Safety Leadership Initiative. “This included AI technologies that participants had used on the job, such as facial recognition and gunshot detection technologies. However, study participants expressed support for these tools, which they felt were valuable for law enforcement.”
The study participants also expressed concern about the future of autonomous vehicles and the challenges those vehicles may pose to the law enforcement community.
“However, study participants did say that they would welcome public use of autonomous vehicles if that would reduce car accidents,” says Dubljević. “Specifically, the participants welcomed the idea of spending less time responding to vehicle accidents, which would allow them to focus on addressing crime.”
“There are always dangers when law enforcement adopts technologies that were not developed with law enforcement in mind,” says Brunet. “This certainly applies to AI technologies such as facial recognition. As a result, it’s critical for law enforcement officials to have some training in the ethical dimensions surrounding the use of these AI technologies. For example, where a law enforcement agency chooses to deploy AI tools will affect which portions of the public are subject to additional scrutiny.”
“It’s also important to understand that AI tools are not foolproof,” says Dubljević. “AI is subject to limitations. And if law enforcement officials don’t understand those limitations, they may place more value on the AI than is warranted – which can pose ethical challenges in itself.”
A paper on the study, “Exploring and Understanding Law Enforcement’s Relationship with Technology: A Qualitative Interview Study of Police Officers in North Carolina,” is published in the open access journal Applied Sciences.
-shipman-
Note to Editors: The study abstract follows.
“Exploring and Understanding Law Enforcement’s Relationship with Technology: A Qualitative Interview Study of Police Officers in North Carolina”
Authors: Ronald P. Dempsey, James R. Brunet and Veljko Dubljević, North Carolina State University
Published: March 18, 2023, Applied Sciences
DOI: 10.3390/app13063887
Abstract: Integrating artificial intelligence (AI) technologies into law enforcement has become a concern of contemporary politics and public discourse. In this paper, we qualitatively examine perspectives on AI technologies based on 20 semi-structured interviews of law enforcement professionals in North Carolina. We investigate how integrating AI technologies, such as predictive policing and autonomous vehicle (AV) technology, impacts the relationships between communities and police jurisdictions. The evidence suggests that police officers maintain that AI plays a limited role in policing but believe the technologies will continue to expand, improving public safety and increasing policing capability. Conversely, police officers believe that AI will not necessarily increase trust between police and the community, citing ethical concerns and the potential to infringe on civil rights. It is thus argued that the trends toward integrating AI technologies into law enforcement are not without risk. Policymaking guided by public consensus and collaborative discussion with law enforcement professionals must aim to promote accountability through the application of responsible design of AI in policing with an end state of providing societal benefits and mitigating harm to the populace. Society has a moral obligation to mitigate the detrimental consequences of fully integrating AI technologies into law enforcement.
This post was originally published in NC State News.