Europe Adopts AI Act
In brief
- The European Union has adopted the AI Act, a comprehensive legal framework on artificial intelligence.
- This law sets out rules for AI developers and deployers to ensure trustworthy AI in Europe.
- It defines four levels of risk for AI systems and prohibits eight practices, including harmful AI-based manipulation and deception.
- The AI Act will help protect Europeans from unfair decisions made by AI systems, such as in hiring or public benefit schemes. It takes effect in stages, with some provisions in force since February 2025, and the EU will continue to monitor its implementation.
Terms in this brief
- AI Act
- The AI Act is a comprehensive legal framework adopted by the European Union to regulate artificial intelligence. It requires that AI systems used in Europe be trustworthy and prohibits harmful practices such as manipulation and deception. The law aims to protect individuals from unfair decisions made by AI, such as in hiring or public benefits, and is being implemented gradually, with some provisions in force since February 2025.
Read full story at Shaping Europe’s digital future →
More briefs
Marshfield Residents Protest AI Data Center Plans
Hundreds of residents gathered at Marshfield High School to voice concerns over a planned AI data center by Lumon Solutions. The facility, set on five acres near Rifle Range Road, began site preparation without prior public announcement, sparking outrage. Residents questioned the project's environmental impact, water usage, and transparency: Seth Atkison highlighted worries about heat management, while others raised concerns about water resources and a lack of information. Webster County commissioners, citing their limited oversight authority, are exploring legal options to halt the project, and Dale Fraker confirmed the county has hired the Carnahan Evans law firm to review state statutes. The developer, Lumon Solutions, remains silent on the issue. Opponents like Christine Vande Griend say more answers are needed about potential environmental risks. While the meeting addressed many concerns, residents left with lingering questions about the project's future.
Vermont Attorney General to Co-Lead National AI Committee
Vermont Attorney General Charity Clark will help lead a national committee on artificial intelligence and internet safety, sharing the post with Arkansas Attorney General Tim Griffin. She will work with attorneys general from all 50 states to push for stronger consumer protection laws. The committee will focus on setting rules around AI and promoting a safer internet for children and adults; Clark said Americans deserve better privacy protections. The committee's work will affect 330 million Americans and help set standards for AI and internet safety nationwide, and new consumer protection laws may be proposed soon.
AI Mistakes Lead to Wrongful Police Actions
A 17-year-old student was handcuffed by police after an AI-enhanced camera mistook a Doritos bag for a gun. The student was forced to his knees and searched, but no gun was found. The mistake is not isolated: a Tennessee grandmother was jailed for five months due to a facial recognition error. AI systems produce probabilities, but people often treat their outputs as certainties. AI policing tools are used in dozens of US cities to predict crime risk and route officers, and as their use grows, so will questions about their reliability and impact on people's lives.
EU Struggles to Gain Access to AI Models for Regulation
Europe's attempt to regulate artificial intelligence faces a significant hurdle as OpenAI and Anthropic, two major players in the AI industry, show differing levels of cooperation. While OpenAI has granted the European Union direct access to its GPT-5.5 Cyber model for security review, Anthropic remains elusive, with regulators still waiting after multiple meetings regarding its Mythos model. This disparity underscores Europe's reliance on voluntary compliance from AI companies, raising questions about the effectiveness of its regulatory framework. The situation highlights a critical issue: without access to these powerful AI systems, regulators cannot fully assess their potential risks or ensure they comply with new EU AI regulations set to take effect later this year. OpenAI's openness contrasts sharply with Anthropic's reluctance, creating a fragmented landscape for oversight. This imbalance could delay the implementation of much-needed safeguards and leave gaps in monitoring AI technologies that may pose significant risks. Looking ahead, the outcome of these discussions will shape how Europe approaches AI regulation. If Anthropic continues to resist, the EU may need to consider stronger enforcement mechanisms or alternative strategies to ensure compliance across all AI companies. The stakes are high as the world grapples with balancing innovation and safety in artificial intelligence.
Andover Township Plans to Ban AI Data Centers
Andover officials plan to introduce an ordinance to ban artificial intelligence data centers in the township. A proposed AI data center at a former airport sparked opposition from residents: although the project could have brought in $5 million in tax revenue, residents cited environmental concerns and quality-of-life impacts. The issue has deeply divided the town and led to threats against the mayor and his family. The township will now move forward with two new ordinances, one to ban data centers and another to repeal permitted use in the area where the former airport is located, and the town will soon vote on both.