Utah Bill Aims to Make Officers Disclose AI-Written Police Reports

AI Analysis

A bill in Utah's Senate would require law enforcement agencies to disclose when a police report was written with generative AI. While disclosure is a necessary measure, it may not be enough to curb potential harms. Generative AI can struggle with language nuances, potentially producing biased or inaccurate reports. Moreover, using AI to generate reports can obscure human agency and create plausible deniability for officers. Without robust regulation and transparency, the spread of this technology could complicate cases and compromise justice. Efforts to regulate its use are crucial to maintaining public trust in the criminal justice system.

Key Points

  • Regulation of Generative AI in Law Enforcement: Should governments implement strict regulations on the use of generative AI in law enforcement, including transparency and oversight measures? What are the potential benefits and drawbacks of such regulations?
  • Bias and Accuracy Concerns: Can generative AI accurately process complex language nuances, or will it perpetuate existing biases in policing? How can we ensure that AI-generated reports do not compromise justice?
  • Transparency and Accountability: Is a disclaimer sufficient to address concerns about AI-generated police reports, or are more substantial measures needed to maintain public trust in the criminal justice system?

Original Article

A bill headed to the Senate floor in Utah would require officers to disclose if a police report was written by generative AI. The bill, S.B. 180, requires a department to have a policy governing the use of AI. This policy would mandate that police reports created in whole or in part by generative AI have a disclaimer that the report contains content generated by AI and requires officers to legally certify that the report was checked for accuracy.

S.B. 180 is unfortunately a necessary step in the right direction when it comes to regulating the rapid spread of police using generative AI to write their narrative reports for them. EFF will continue to monitor this bill in hopes that it will be part of a larger conversation about more robust regulations. In particular, Axon, the maker of Tasers and the salespeople behind a shocking amount of police and surveillance tech, has recently rolled out a new product, Draft One, which uses body-worn camera audio to generate police reports. This product is spreading quickly in part because it is integrated with other Axon products that are already omnipresent in U.S. society.

But it’s going to take more than a disclaimer to curb the potential harms of AI-generated police reports.

As we’ve previously cautioned, the public should be skeptical of AI’s ability to accurately process and distinguish between the wide range of languages, dialects, vernacular, idioms, and slang people use. As online content moderation has shown, software may have a passable ability to capture words, but it often struggles with content and meaning. In a tense setting such as a traffic stop, AI mistaking a metaphorical statement for a literal claim could fundamentally change the content of a police report.

Moreover, so-called artificial intelligence taking over consequential tasks and decision-making has the power to obscure human agency. Police officers who deliberately exaggerate or lie to shape the narrative available in body camera footage now have even more of a veneer of plausible deniability with AI-generated police reports. If police were to be caught in a lie concerning what’s in the report, an officer might be able to say that they did not lie: the AI simply did not capture what was happening in the chaotic video.

As this technology spreads without much transparency, oversight, or guardrails, we are likely to see more cities, counties, and states push back against its use. Out of fear that AI-generated reports would complicate and compromise cases in the criminal justice system, prosecutors in King County, Washington (which includes Seattle) have instructed officers not to use the technology for now.

The use of AI to write police reports is troubling in familiar ways, but also in new ones. Not only do we not yet know how widespread use of this technology will affect the criminal justice system, but because of how the product is designed, we may not even know whether AI was used when staring directly at the police report in question. For that reason, it's no surprise that lawmakers in Utah have introduced this bill to require some semblance of transparency. We will likely see similar regulations and restrictions in other states and local jurisdictions, and possibly even stronger ones.
