Deploying AI in inappropriate ways can and does get people hurt
by Jay Stanley, Senior Policy Analyst, ACLU Speech, Privacy, and Technology Project
On Monday night a Black Baltimore teenager, Taki Allen, had finished football practice and was sitting outside his school waiting to be picked up. That school, which is supposed to protect its students, instead brought down on his head a traumatic and potentially deadly incident thanks to its video cameras and their “enhancement” with AI.
Allen said he ate a bag of chips, crumpled it up, and put it in his pocket. Apparently an AI-enhanced surveillance camera decided that he had a gun, and soon the police swarmed the scene. Allen told WBAL-TV Baltimore, “It was like eight cop cars that came pulling up to us [and] they started walking toward me with guns… They made me get on my knees and put my hands behind my back and cuff me. And they searched me.”
Allen said the police “said that an AI detector or something detected that I had a gun. But I was just holding — they showed me a picture — I was just holding a Doritos bag.”
Anyone being swarmed by eight cars’ worth of gun-waving police shouting orders at them would feel traumatized. But for a young Black man, given the history and present reality of racist policing in this country? That was a life-threatening situation. Allen said that during the incident he was thinking, “Am I gonna die? Are they going to kill me?” The violent response by police should not have happened at all, but could all too easily have had a very tragic ending.
One reporter noted that the story is “further evidence that artificial intelligence is not all that intelligent,” and, while true, the blame here lies not with the technology but with the human beings and institutions that created the techno-social system behind the incident.
To set up AI to be in a position to trigger this kind of response is grossly irresponsible, and the fault lies with some combination of the school that installed it, the vendor that pushed it on perhaps naïve-about-technology school officials, the school security staff who called the police, and the police, who, by Allen’s account, knew that the alert came from an AI and had a still image of what the AI had told them was a gun. Humans should have been in the loop here — before police were deployed, guns drawn — and recognized a Doritos bag when they saw one. (The school gave a confusing account of how the alert was transmitted to police, suggesting that a school resource officer escalated the situation — more evidence that putting police in schools is a bad idea.)
So the biggest scandal here is not that AI is imprecise (stop the presses!) but that this was allowed to happen at all — and that it doesn’t look like any of the very human people to blame are being held accountable. Quite the opposite: Schools superintendent Dr. Myriam Rogers said the system worked the way it was supposed to.
The vendor is a company called Omnilert, which touts that its AI was “built with military-grade precision” — which made me burst out laughing. “Military grade” is a hoary, over-used marketing term that is often mocked because of the hand-wavy way it evokes seriousness while being applied far beyond the few narrow areas where it may be a real thing. To apply it to today’s AI is even funnier because of how non-deterministic and lacking in precision that technology is. (I recently analyzed the state of AI machine vision here.) If this is indeed military grade technology, then any military that uses it is in serious trouble.
That an incident like this would happen was entirely predictable — and in fact I predicted it in a piece about gun-detection technology three years ago:
Like all alarm systems, and especially AI systems, there will be false positives — potentially a lot of them. Blanketing public spaces with buggy gun detectors may increase the incidence of tragic confrontations sparked by people holding cell phones, toy guns, or other everyday objects that police have mistaken for firearms.
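To see why “potentially a lot of them” is no exaggeration, consider a rough base-rate sketch. The numbers below are purely illustrative assumptions, not the measured performance of Omnilert or any other vendor:

```python
# Illustrative base-rate arithmetic for an AI gun detector.
# Every figure here is an assumption chosen for the example,
# not a measured property of any real system.

frames_per_day = 1_000_000     # camera frames scanned district-wide per day (assumed)
real_guns_per_day = 1          # actual firearms appearing on camera per day (assumed)
true_positive_rate = 0.99      # fraction of real guns the detector flags (assumed)
false_positive_rate = 0.0001   # fraction of innocent frames wrongly flagged (assumed)

true_alerts = real_guns_per_day * true_positive_rate
false_alerts = (frames_per_day - real_guns_per_day) * false_positive_rate
share_false = false_alerts / (true_alerts + false_alerts)

print(f"True alerts per day:  {true_alerts:.2f}")             # ~1
print(f"False alerts per day: {false_alerts:.2f}")            # ~100
print(f"Share of alerts that are false: {share_false:.1%}")   # ~99%
```

The exact numbers do not matter; what matters is the structure of the problem. When the thing being detected is rare, even a detector that errs only one time in ten thousand will generate alerts that are overwhelmingly false, which is exactly why a human needs to evaluate each one before armed police are dispatched.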
It didn’t take any special genius or insight on my part to see this coming. All the factors — imprecise AI, AI that is “sold and marketed way beyond real-life performance,” the human predilection to give too much credence to computer alerts, and everybody’s state of high anxiety over the frequency of shootings in this country — were lined up to make it obvious that something like this would happen. And in an American policing system far too prone to shooting people — especially Black people — that is a dangerous combination.
So what were the school officials and the police supposed to do here? They saw what they thought was a gun, so they responded. If they hadn’t acted, and it had turned out to be a gun and a shooting had resulted, you can imagine how big the uproar would be.
The focus of the article was the problem of using flawed technology and the potential for devastating consequences. By focusing on the aspect of this that you have, you have missed the whole point of the piece.
“They saw what they thought was a gun so they responded.”
No, a machine saw what it thought was a gun, so it notified humans. Humans did not evaluate the information before acting with full force. This could have ended tragically. If this happened here, it will happen again somewhere.
Lawsuits may force the issue, but clear liability for the product manufacturer and its users needs to be established by legislation. All AI output needs human oversight.
It would be worth investigating whether the algorithm imputes lawlessness on people of color.
So I guess the answer to flawed AI tech is we need more actual human security officers on our school campuses.
That is one possibility, but not the only one.
Don says: “It would be worth investigating whether the algorithm imputes lawlessness on people of color.”
It’s likely used more often where lawlessness IS out of control – largely in communities with a high percentage of “people of color”. (Especially one particular color that no one wants to openly talk about.)
As soon as I saw Baltimore . . .
I do wonder if there’s ever going to be a time when these communities change. Not much sign of it in my lifetime, at least. No one is going to change it for them.
I do find it odd that these communities never seem to clamor for change, for themselves. (They are, however, “on top of the situation” when police overreact – largely because there’s an entity to sue at that point. Follow the money – the attorneys who come out of the woodwork certainly do.)
Meanwhile, the underlying (REAL) problems never change – even if a family or two become unearned millionaires by targeting the “wrong” problem.