Fair and Just Prosecution Warns AI in Law Enforcement May Breach Constitution


By Vicky Li

WASHINGTON — Fair and Just Prosecution (FJP), an organization focused on criminal justice reform, released a brief Tuesday detailing concerns about incorporating artificial intelligence tools into the justice system. Concerns about accuracy, due process and public trust are emerging as police departments begin turning to AI to draft official reports.

In one key finding, the FJP brief highlighted high error rates and “hallucinations” — the tendency of AI systems to generate false or misleading information.

The organization noted that, while these tools can recognize patterns and mimic human speech, they currently cannot distinguish truth from fiction. This technology poses risks to the justice system because AI trained on historical data may amplify racial and discriminatory biases, according to FJP.

Another issue raised by FJP is the potential for constitutional violations. Even minor inaccuracies in a police report could lead to major legal consequences, the group warned. For example, if an officer found contraband in plain view during a traffic stop but AI misreported it as being inside the glove box, the court could rule the search unlawful.

FJP also cautioned that AI threatens individual privacy. AI systems cannot reliably distinguish between private information and relevant evidence, which can result in sensitive details being shared in police reports or with third parties.

“This lack of control introduces significant risks, including data misuse, unauthorized access, and the erosion of public trust in law enforcement practices,” FJP wrote.

The organization also disputed claims that AI significantly improves efficiency. According to independent studies of tools such as Axon’s Draft One, there is no conclusive evidence that AI speeds up report writing.

The Draft One research suggested that, because many departments already have efficient paperwork processes and because AI requires extensive data entry, any time-saving benefits are likely minimal.

FJP Executive Director Aramis Ayala voiced strong opposition to using AI in law enforcement, emphasizing that police reports heavily influence courtroom decisions and cannot afford to contain errors or bias.

“When AI language models generate false narratives, real people pay the price,” Ayala said.

“AI-generated reports have already included officers who were not present, misattributed actions, and warped evidence. The stakes are too high to treat this like just another tech upgrade. We owe it to our communities to pause, scrutinize, and demand transparency before this tech is allowed anywhere near the courtroom,” she added.

FJP observed that there are currently no safeguards, auditing mechanisms or bias mitigation protocols in place for AI-generated reports. Without such protections, adopting AI into the justice system could further erode public trust, the organization warned.

FJP urged law enforcement agencies to use AI tools cautiously and advised prosecutors to assess whether their local police departments are using AI, and to implement clear policies or safeguards where necessary.

“Rushing to implement flawed systems risks undermining the credibility of law enforcement, eroding trust, and perpetuating systemic biases that compromise the principles of equity and justice,” the brief concluded.

Author

  • Vanguard Court Watch Interns

The Vanguard Court Watch operates in Yolo and Sacramento Counties with a mission to monitor and report on court cases. Anyone interested in interning at the Courthouse or volunteering to monitor cases should contact the Vanguard at info(at)davisvanguard(dot)org - please email info(at)davisvanguard(dot)org if you find inaccuracies in this report.

