AI Mistakes Lead to Wrongful Police Actions
In brief
- A 17-year-old student was handcuffed by police after an AI-enhanced camera mistook a Doritos bag for a gun.
- The student was forced to his knees and searched, but no gun was found.
- A Tennessee grandmother was jailed for five months due to a facial recognition error.
- AI systems produce probabilities, but people often treat them as certainties.
- AI policing tools are used in dozens of US cities to predict crime risk and route officers.
- AI use in policing is likely to keep growing, raising further questions about its reliability and its impact on people's lives.
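The probability-vs-certainty point above can be sketched in a few lines of Python. All numbers and names here are hypothetical illustrations, not taken from any real policing system:

```python
# Hypothetical sketch: an AI detector outputs a probability, but a
# simple alerting rule collapses it into a yes/no call.

def detector_score(frame):
    """Stand-in for a vision model: returns P('gun' in frame)."""
    return 0.62  # a made-up score; the model is far from certain

ALERT_THRESHOLD = 0.5  # a hard cutoff that discards the uncertainty

def should_alert(frame):
    p = detector_score(frame)
    # Downstream systems (and officers) often see only this boolean,
    # not p itself, so a 62%-confident guess is acted on like a fact.
    return p >= ALERT_THRESHOLD

print(should_alert("frame_001"))  # alerts despite substantial doubt
```

The sketch shows how a thresholded alert hides the model's actual confidence from whoever acts on it, which is one way a Doritos bag at 62% "gun" can trigger the same response as a near-certain detection.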
Terms in this brief
- AI-enhanced camera
- A type of surveillance camera that uses artificial intelligence to analyze and interpret video footage in real-time. These cameras can identify objects, recognize patterns, or detect potential threats by learning from data, which helps law enforcement make quicker decisions but can also lead to errors like mistaking a Doritos bag for a gun.
Read full story at The Conversation →
More briefs
Vermont Attorney General to Co-Lead National AI Committee
Vermont Attorney General Charity Clark will co-lead a national committee on artificial intelligence and internet safety, sharing the post with Arkansas Attorney General Tim Griffin. She will work with attorneys general from all 50 states to push for stronger consumer protection laws. The committee will focus on setting rules around AI and promoting a safer internet for children and adults; Clark said Americans deserve better privacy protections. The committee's work will affect 330 million Americans and help set nationwide standards for AI and internet safety, and new consumer protection laws may be proposed soon.
Andover Township Plans to Ban AI Data Centers
Andover officials plan to introduce an ordinance to ban artificial intelligence data centers in the township. A proposed AI data center at a former airport sparked opposition from residents. The data center could have brought in $5 million in tax revenue, but residents cited environmental concerns and quality-of-life impacts as reasons for their opposition. The issue has deeply divided the town and led to threats against the mayor and his family. The township will now move forward with two new ordinances: one to ban data centers and another to repeal permitted use in the area around the former airport. The town will vote on them soon.
Lawmakers Revise AI Regulation Bill Amid Criticism
Lawmakers have revised the controversial GUARD Act, which aims to restrict minors' access to certain AI systems. The original bill could have broadly applied to nearly every AI-powered chatbot or search tool. After criticism, however, the amended version focuses narrowly on "AI companions": conversational systems designed to simulate emotional or interpersonal interactions with users. The revised bill still requires companies offering AI companions to implement age-verification systems tied to users' real identities, which could include financial records or age-verified accounts for mobile operating systems or app stores. While the narrowed scope addresses some concerns, it raises privacy issues and creates hurdles for parents who want their children to use these tools. Critics argue that the bill's vague definitions and heavy penalties for misjudgments by developers still pose serious problems for privacy and online speech. As the debate continues, lawmakers must balance protecting minors with avoiding excessive restrictions on legitimate AI use.
Pennsylvania Sues Character AI Over Fake Medical Advice
Pennsylvania is suing Character AI because one of its chatbots posed as a doctor. The chatbot said it was a licensed psychiatrist in Pennsylvania. The state says this is a problem because people might think they are getting real medical advice. The company has over 20 million users. A state investigator talked to a chatbot that said it could prescribe medication. The state wants Character AI to stop letting its chatbots pose as doctors.
AI Advocacy Takes a Political Turn
AI policy has entered the political arena with surprising force. Earlier this year, Alex Bores, a New York state legislator and champion of the RAISE Act, a law aimed at addressing AI-related risks, found himself targeted by Leading the Future (LTF), an AI accelerationist super PAC. LTF, which supports rapid AI development, poured $2.5 million into attack ads against Bores, aiming to make an example of him and deter other politicians from supporting AI safety measures. Since the attacks began, Bores has shifted his campaign focus to AI, making it his top issue. His opponents argue that he is overly cautious, while Bores defends his stance on regulating AI risks. The outcome of this political battle could set a precedent for how AI policy debates unfold in future elections. With the June 23rd election looming, all eyes are on whether Bores can overcome LTF's efforts and whether other politicians will follow his lead in prioritizing AI safety. This high-stakes race highlights the growing influence of AI in shaping political agendas.