
The AI tool could also be "overstepping in its interpretation of the audio," possibly misinterpreting slang or adding context that never happened.

A "major concern," the EFF said, is that the AI reports can give cops a "smokescreen," perhaps even allowing them to dodge consequences for lying on the stand by blaming the AI tool for any "biased language, inaccuracies, misinterpretations, or lies" in their reports.

"There's no record showing whether the culprit was the officer or the AI," the EFF said.
"This makes it extremely difficult if not impossible to assess how the system affects justice outcomes over time."According to the EFF, Draft One "seems deliberately designed to avoid audits that could provide any accountability to the public." In one video from a roundtable discussion the EFF reviewed, an Axon senior principal product manager for generative AI touted Draft One's disappearing drafts as a feature, explaining, "we dont store the original draft and thats by design and thats really because the last thing we want to do is create more disclosure headaches for our customers and our attorneys offices."The EFF interpreted this to mean that "the last thing" that Axon wants "is for cops to have to provide that data to anyone (say, a judge, defense attorney or civil liberties non-profit).""To serve and protect the public interest, the AI output must be continually and aggressively evaluated whenever and wherever it's used," the EFF said.
"But Axon has intentionally made this difficult."The EFF is calling for a nationwide effort to monitor AI-generated police reports, which are expected to be increasingly deployed in many cities over the next few years, and published a guide to help journalists and others submit records requests to monitor police use in their area.
But "unfortunately, obtaining these records isn't easy," the EFF's investigation confirmed.
"In many cases, it's straight-up impossible."