Malicious AI Model on Hugging Face Infected Thousands
In brief
- A malicious repository on Hugging Face, posing as an official OpenAI release, reportedly infected over 244,000 Windows users with malware.
- The malware, disguised as a legitimate AI model, recorded keystrokes and harvested sensitive data.
- The attackers may have exaggerated the download numbers to appear more trustworthy.
- The incident highlights critical security gaps in AI platforms where malicious actors can easily mislead users.
- HiddenLayer's research reveals that such attacks target both developers and end-users, raising concerns about the safety of open-source AI models.
- This underscores the importance of verifying the source and authenticity of any AI tool before use.
- Moving forward, expect stricter measures from Hugging Face to vet repositories and enhance user awareness about potential threats.
- Developers should remain cautious when downloading AI models and check for official validations to avoid falling victim to such schemes.
Read full story at AI News →
More briefs
AI Workers Struggle with Low Wages
A San Francisco-based AI company connects contractors with companies like OpenAI, but the workers report poor treatment and low wages: 22 percent have experienced homelessness, and about 86 percent of data workers struggled to pay their bills last year. Many rely on public assistance programs, such as food stamps and Medicaid, to get by. The future of these workers remains uncertain.
Public Sentiment Shifts Sharply Against AI
Recent polling reveals a stark change in how Americans view artificial intelligence. A Quinnipiac poll conducted March 19-23 shows that 55% of adults now believe AI will cause more harm than good, up from 44% just a year ago. Among Gen Z, concern about AI reducing job opportunities has surged to 70%, compared to 56% last year. Additionally, 65% oppose building data centers locally, highlighting growing community resistance. The shift is evident beyond polls: mainstream media and even right-wing figures like Steve Bannon are increasingly discussing AI risks, and farmers in rural areas are organizing against local data center projects, showing how opposition to AI's infrastructure is spreading. While some places have turned to ill-advised bans on AI in mental health, smarter policies focused on mitigating risks could be more effective. Elections offer a key opportunity to shape this dialogue. Candidates like Alex Bores are gaining traction by addressing these concerns, and activists are turning public sentiment into actionable support. As the race heats up, voters' growing awareness of AI's dangers could play a pivotal role in shaping tech policies that protect both jobs and safety.
AI Struggles in TV Recommendations
AI is being tested in the tricky world of television recommendations. Alan Wolk, a media expert, fed email threads to four major AI chatbots (ChatGPT, Claude, Gemini, and Grok) to see how they interpret emotional cues and context. While ChatGPT, Claude, and Grok performed well, understanding key concerns like deadlines and tone, Gemini often missed the mark entirely, focusing on minor details like weather or pizza instead of the main points. This highlights a major challenge: building AI systems with enough emotional intelligence to understand user intent in TV recommendations. The goal is to move from users needing precise keywords to AI inferring what viewers truly want. But as Gemini's misses show, success depends on accurate interpretation, making it a critical area for future advancement.
Israel Leads AI Integration in Healthcare
Prof. Ran Balicer of Clalit Health Services revealed at a conference that Israel is a global leader in implementing AI for prescriptive healthcare, altering disease courses in real-time on a large scale. While other nations prioritize healthcare as a revenue source, Israel's incentives drive preventive and proactive treatment. Each month, 200,000 AI-based recommendations are adopted by general practitioners, addressing treatment gaps. For instance, an AI system detects when a patient’s medication needs adjustment based on their condition changes, alerting doctors to necessary updates. Looking ahead, Balicer emphasized the importance of responsible AI use in healthcare. He introduced Optica, a governance system to ensure AI tools are safe, unbiased, and aligned with human values. Clalit's vast medical database offers Israeli startups a competitive edge globally. While predictive medicine could save trillions, most countries rely on volume-based budgets, hindering progress. Israel’s approach highlights the potential for AI to transform healthcare, balancing innovation with ethical considerations.
Teachers Grapple With AI in the Classroom
Broomfield High School teacher Stephen Kelly noticed students submitting unusually complex projects that seemed beyond their understanding. He realized AI was likely involved after a student turned in a PhD-level report on brain-eating amoebas that even Kelly couldn't fully grasp. Boulder Valley School District (BVSD) prohibits AI for graded work but struggles to enforce the policy, and teachers report similar issues nationwide. Some educators find AI beneficial when used correctly. Chris Hespe, a history teacher, uses AI tools like MagicSchool to help students research Greek gods and write songs about the school year; he believes AI can speed up access to age-appropriate information and enhance learning. However, concerns remain about misuse, especially outside school hours, where AI use is unmonitored. Kelly has developed chatbot tutors in MagicSchool that guide students through their work, asking questions and providing feedback to encourage deeper understanding rather than simply supplying answers. With over 50% of teens already using chatbots for schoolwork, educators are urging updated policies to keep pace with this digital shift.