Cops Forced to Explain Why AI-Generated Police Report Claimed Officer Transformed Into Frog

Law enforcement has quickly embraced AI for everything from drafting police reports to facial recognition.

The results have been predictably dismal. In one particularly glaring — and unintentionally comedic — instance, the police department in Heber City, Utah, was forced to explain why its AI report-writing software declared that an officer had somehow shapeshifted into a frog.

As Salt Lake City-based Fox 13 reports, the flawed tool seems to have picked up on some unrelated background chatter to devise its fantastical fairy tale ending.

“The body cam software and the AI report writing software picked up on the movie that was playing in the background, which happened to be ‘The Princess and the Frog,’” police sergeant Rick Keel told the broadcaster, referring to Disney’s 2009 musical comedy. “That’s when we learned the importance of correcting these AI-generated reports.”

The department had begun testing an AI-powered tool called Draft One to automatically generate police reports from body camera footage. The goal was to reduce the amount of paperwork, but with glaring mistakes slipping through the cracks, the results clearly vary.

Even a simple mock traffic stop meant to demonstrate what the tool is capable of turned into a disaster. The resulting report required plenty of corrections, according to Fox 13.

Despite the drawbacks, Keel told the outlet that the tool is saving him “six to eight hours weekly now.”

“I’m not the most tech-savvy person, so it’s very user-friendly,” he added.

Draft One was first announced by police tech company Axon — the same firm behind the Taser, a popular electroshock weapon — last year. The software makes use of OpenAI’s GPT large language models to generate entire police reports from body camera audio.

Experts quickly warned that hallucinations could slip unnoticed into these important documents.

“I am concerned that automation and the ease of the technology would cause police officers to be sort of less careful with their writing,” American University law professor Andrew Ferguson told the Associated Press last year.

Others warn that the software could reinforce preexisting racial and gender biases, a troubling possibility considering law enforcement's historic role in perpetuating them long before the advent of AI. Generative AI tools have also repeatedly been shown to exhibit biases against both women and non-white people.

“The fact that the technology is being used by the same company that provides Tasers to the department is alarming enough,” Foundation for Liberating Minds in Oklahoma City cofounder Aurelius Francisco told the AP.

Critics also argue that the tool could give officers a layer of deniability and make them less accountable when mistakes do occur. According to a recent investigation by the Electronic Frontier Foundation, Draft One “seems deliberately designed to avoid audits that could provide any accountability to the public.”

According to records obtained by the group, “it’s often impossible to tell which parts of a police report were generated by AI and which parts were written by an officer.”

“Axon and its customers claim this technology will revolutionize policing, but it remains to be seen how it will change the criminal justice system, and who this technology benefits most,” the Foundation wrote.

The Heber City police department has yet to decide whether it will keep using Draft One. The department is also testing a competing AI tool called Code Four, which was released earlier this year.

But considering Draft One’s inability to distinguish between reality and a make-believe world dreamed up by Disney, let’s hope the department thinks long and hard about the decision.

More on AI policing: AI Is Mangling Police Radio Chatter, Posting It Online as Ridiculous Misinformation