
AI Misfire: Teen Handcuffed After AI Mistakes Doritos for Gun

AI error leads police to handcuff teen after mistaking Doritos for a gun, raising new concerns over ethics in school surveillance systems.

Written By
Ken Underhill
Oct 28, 2025

A 16-year-old student in Baltimore, Maryland, was handcuffed by police after an artificial intelligence (AI) system wrongly identified a bag of chips as a firearm. 

The incident has reignited debate over the accuracy, reliability, and ethics of AI-based weapon detection systems in U.S. schools.

“Police showed up, like eight cop cars, and then they all came out with guns pointed at me talking about getting on the ground,” said student Taki Allen, in an interview with local media outlet WMAR-2 News.

False alert, real consequences

The event underscores the risks of deploying untested or overconfident AI surveillance tools in sensitive public spaces. 

According to the Baltimore County Police Department, officers “responded appropriately and proportionally based on the information provided at the time.” 

However, a later review revealed the alert was a false alarm: the system had mistaken Allen's chip bag for a firearm.

The BBC reported that the alert, generated by Omnilert's technology, was sent to human reviewers, who found no threat. 

Yet the school principal did not see the "no threat" update and contacted the school resource officer, who in turn called local law enforcement. 

The miscommunication led to the arrival of armed officers on school grounds, escalating a non-event into a traumatic experience for the student.

Procedural failure, not system flaw?

In a statement to BBC News, Omnilert said it “regrets this incident occurred and wishes to convey our concern to the student and the wider community affected.” 

The company emphasized that its system “operated as designed” and that its human verification process worked correctly — the failure, it said, came later in procedural handoff.

While Omnilert defends its technology, the company admits that “real-world gun detection is messy.” 

AI models rely on training data that may not encompass every lighting condition, object shape, or color variation. 

In this case, the system’s visual model apparently could not distinguish the reflective surface of a chip bag from a firearm.
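
In practice, whether a misclassification like this becomes an alert often comes down to a confidence threshold. The short Python sketch below uses entirely hypothetical values and class names (it is not Omnilert's code) to illustrate the trade-off: a low threshold lets a moderately confident misreading of a shiny chip bag fire an alert, while a high threshold suppresses it but risks missing real weapons.

```python
# Minimal sketch (hypothetical values): how a detection confidence threshold
# trades missed weapons against false alarms like a misread chip bag.
from dataclasses import dataclass


@dataclass
class Detection:
    label: str         # what the model thinks it saw
    confidence: float  # model's score, 0.0 to 1.0


def alerts(detections, threshold):
    """Return detections that would trigger a weapon alert."""
    return [d for d in detections if d.label == "firearm" and d.confidence >= threshold]


# A reflective chip bag can score moderately high if the model was never trained on it.
frame = [Detection("firearm", 0.62), Detection("backpack", 0.91)]

print(alerts(frame, threshold=0.5))  # fires an alert -> false positive
print(alerts(frame, threshold=0.8))  # suppressed, but a real weapon at 0.7 would also be missed
```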

Beyond just privacy and compliance

The misidentification highlights a growing problem with AI in safety and law enforcement: false positives that can lead to dangerous or traumatic real-world consequences. 

Cybersecurity governance now extends beyond data privacy and system security — it must also ensure ethical AI deployment. 

This includes auditing algorithms for bias, testing for real-world accuracy, and establishing transparent escalation procedures. 
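
As a concrete illustration, auditing for real-world accuracy can start with something as simple as measuring a false-positive rate on a labeled evaluation set collected under varied conditions. The sketch below is a minimal, hypothetical example, not any vendor's methodology.

```python
# Minimal audit sketch (hypothetical data): estimate a detector's false-positive
# rate from a labeled evaluation set gathered under varied real-world conditions.
def false_positive_rate(results):
    """results: list of (predicted_weapon: bool, actually_weapon: bool) pairs."""
    false_positives = sum(1 for predicted, actual in results if predicted and not actual)
    negatives = sum(1 for _, actual in results if not actual)
    return false_positives / negatives if negatives else 0.0


# Example: three harmless objects (chip bag, phone, umbrella), one flagged as a weapon,
# plus one correctly detected weapon.
evaluation = [(True, False), (False, False), (False, False), (True, True)]
print(f"False-positive rate: {false_positive_rate(evaluation):.0%}")  # 33%
```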

Without proper oversight, the rapid rollout of AI surveillance tools could amplify human error rather than reduce risk.

AI ethicists argue that systems intended to protect should undergo the same level of scrutiny as cybersecurity defenses. 

How schools can mitigate risk

To prevent similar incidents, school districts and organizations adopting AI detection tools should apply a layered approach that balances safety with ethical responsibility:

  • Implement human-in-the-loop validation: Ensure all AI alerts are reviewed by trained personnel, and require a second set of human eyes before law enforcement is contacted (see the sketch after this list).
  • Regularly audit AI models: Test systems under varied real-world conditions to evaluate false positive rates and bias.
  • Establish clear escalation policies: Define communication chains between AI system operators, school staff, and law enforcement to prevent missteps.
  • Enhance transparency: Share AI accuracy metrics and review findings with parents and the community to build trust.
  • Adopt ethical AI frameworks: Incorporate accountability, fairness, and explainability requirements into vendor contracts and governance policies.

Together, these measures help ensure AI-driven security systems operate responsibly, minimizing harm while maintaining trust.

When automation outpaces accountability

As AI technologies expand into policing, hiring, and education, their errors can carry disproportionate consequences. Baltimore’s chip incident illustrates how a system meant to prevent violence can instead inflict harm through misinterpretation and procedural failure.

The rapid adoption of AI in schools and public safety sectors demands stronger regulatory frameworks, standardized accuracy testing, and third-party auditing. 

Mistakes like this highlight why ethical oversight is no longer optional — it is a fundamental requirement of safe AI deployment.

The Baltimore incident serves as a cautionary tale for all organizations integrating AI into decision-making and security processes.

As AI systems grow more autonomous, human accountability must keep pace. The future of cybersecurity and public safety lies not just in advanced algorithms — but in ensuring those algorithms are fair, transparent, and trustworthy.



Ken Underhill is an award-winning cybersecurity professional, bestselling author, and seasoned IT professional. He holds a graduate degree in cybersecurity and information assurance from Western Governors University and brings years of hands-on experience to the field.
