Cook County Wrestles With AI Surveillance Expansion
In brief
- Cook County commissioners debated the expansion of AI-powered surveillance systems, including facial recognition technology, for the Cook County Jail.
- Sheriff Tom Dart proposed a $1.12 million contract with Safeware to use Briefcam software, which aims to detect security breaches.
- However, community groups and advocates raised concerns about potential privacy violations and false positives, citing recent failures in jail oversight.
- The commissioners deferred the AI surveillance proposal but approved a separate deal for automatic license plate readers, which are intended to help reduce car thefts and related crimes.
- The decision highlights growing tensions over balancing public safety with privacy concerns.
- As technology evolves, policymakers must carefully weigh its benefits against risks to ensure equitable and ethical use.
Terms in this brief
- Briefcam
- Surveillance software that uses AI to analyze video feeds in real time and alert authorities to possible security breaches or suspicious activity. It aims to enhance facility safety but raises privacy concerns.
Read full story at govtech.com →
More briefs
Santa Barbara Prosecutes First Case Under AI-CSAM Law
Santa Barbara authorities have prosecuted Dayton Aldrich under California's new law targeting AI-generated child sexual abuse material (CSAM). The case is the first on the Central Coast brought under Assembly Bill 1831, which criminalizes such content. Prompted by deepfake technology and "nudify" apps that create realistic images, the law closes a legal gap by treating AI-generated CSAM like real abuse material. The National Center for Missing and Exploited Children (NCMEC) has seen reports of AI-CSAM surge from 4,700 in 2023 to 1.5 million in 2025. Investigators linked Aldrich to explicit chats on Kik, where he expressed an "unusual interest" in minors. They found multiple CSAM images, including images of a former child actress and a TikTok personality. Aldrich, once a victim program assistant, faced severe penalties but pleaded guilty to one charge, receiving a year in jail and two years' probation. His arrest also revealed that he possessed more than 20 guns, underscoring the broader societal risks. The case highlights the urgent need to combat AI-CSAM and protect vulnerable youth.
U.S. Clears Chinese Firms for AI Chips, But No Chips Are Shipped
The U.S. has granted about ten major Chinese companies, including Alibaba, Tencent, and ByteDance, permission to purchase up to 75,000 Nvidia H200 chips each. Despite this approval, however, not a single chip has been shipped. According to the Commerce Secretary, China is blocking the purchases to support its own domestic chip industry. The standoff highlights the ongoing tension between U.S. export policy and China's push to develop its semiconductor sector. U.S. restrictions on chip exports aim to limit China's advances in AI and other technologies, but the approved companies, key players in the global tech market, could face significant challenges if they cannot access these advanced chips. Looking ahead, the impasse could prompt further diplomatic discussions or changes in trade policy. It remains unclear whether the U.S. will ease its restrictions or whether China will find alternative ways to obtain the technology its industries need.
Europe Adopts AI Act
The European Union has adopted the AI Act, a comprehensive legal framework for artificial intelligence. The law sets out rules for AI developers and deployers to ensure trustworthy AI in Europe. It defines four levels of risk for AI systems and prohibits eight practices, including harmful AI-based manipulation and deception. The AI Act will help protect Europeans from unfair decisions made by AI systems, such as in hiring or public benefit schemes. It takes effect in stages, with some provisions already in force since February 2025, and the EU will continue to monitor its implementation.
Marshfield Residents Protest AI Data Center Plans
Hundreds of residents gathered at Marshfield High School to voice concerns over a planned AI data center by Lumon Solutions. Site preparation for the facility, set on five acres near Rifle Range Road, began without prior public announcement, sparking outrage. Residents questioned the project's environmental impact, water usage, and transparency. Seth Atkison highlighted worries about heat management, while others raised concerns about water resources and the lack of information. Webster County commissioners, citing their limited oversight authority, are exploring legal options to halt the project; Dale Fraker confirmed the county has hired the Carnahan Evans law firm to review state statutes. Lumon Solutions itself has not commented on the issue. Opponents like Christine Vande Griend say more answers are needed about potential environmental risks. While the meeting addressed many concerns, it left residents with lingering questions about the project's future.
Vermont Attorney General to Co-Lead National AI Committee
Vermont Attorney General Charity Clark will help lead a national committee on artificial intelligence and internet safety. She will work with attorneys general from all 50 states to push for stronger consumer protection laws, sharing the post with Arkansas Attorney General Tim Griffin. The committee will focus on setting rules around AI and promoting a safer internet for children and adults. Clark said Americans deserve better privacy protections. The committee's work will affect 330 million Americans and help set national standards for AI and internet safety, giving Clark a key role in shaping them. New consumer protection laws may be proposed soon.