BALTIMORE, Md. — Earlier this week, teenage football player Taki Allen was surrounded and handcuffed by police after practice when an artificial intelligence surveillance system mislabeled his bag of chips as a gun. Though the school claims the action was an inevitable result of “enhanced” technology, many argue that this traumatic event was not a necessary consequence of technological improvement.
Allen said he had been eating a bag of Doritos while waiting to be picked up after school and had shoved the crumpled, empty bag into his pocket before roughly eight police cars pulled up. Acting on nothing more than the AI security alert, the officers, Allen recalled, “made me get on my knees and put my hands behind my back and cuff me. And they searched me.”
The ACLU reports that anyone immediately surrounded by police and detained with little to no explanation could be traumatized, but that possibility is significantly heightened for a young Black man in America today. For Allen, the encounter was life-threatening. He said he thought, “Am I gonna die? Are they going to kill me?”
Many do not blame the police directly but place responsibility on the AI surveillance system’s error. A Gizmodo reporter said the incident is “further evidence that artificial intelligence is not all that intelligent.” Though the AI mistakenly detected a gun, the resulting trauma and overreaction reflect broader sociopolitical tensions and unjust historical prejudice.
That an AI system can wield such power is deeply concerning. The ACLU argues that responsibility falls not only on the school that installed the system but also on the vendor, the school’s security staff who called police, and the officers on the scene. The AI merely flagged a possible gun; school staff reviewed the footage and still chose to call the police. And when officers arrived, they had a surveillance image clearly showing Allen holding a bag of chips, not a weapon. With more human oversight and less deference to the AI, the situation might have been defused much earlier or prevented entirely.
Yet even though human fault in the incident should not be ignored, there appear to have been no repercussions for those involved. The ACLU noted that school superintendent Dr. Myriam Rogers said the system worked as intended.
The school’s security system was marketed as being “built with military-grade precision,” yet its false alarm set the entire traumatic event in motion. A mistake that could have ended in tragedy underscores the danger of relying so heavily on artificial intelligence for safety decisions.