UCF Graduates Boo Commencement Speaker Over AI Remarks
In brief
- A commencement speaker at the University of Central Florida was booed by graduates when she called artificial intelligence the next industrial revolution.
- The remark drew thunderous boos from the crowd of arts and humanities graduates.
- Half of adults in the US are more concerned than excited about the use of AI in daily life.
- The incident highlights fears of AI replacing jobs: only 30% of recent graduates found full-time jobs in 2025.
- The future of work will likely be shaped by AI.
Read full story at Orlando Sentinel, ClickOrlando, WTHR
More briefs
Patients Want Human Oversight in Healthcare AI
Patients are frustrated with artificial intelligence in certain healthcare workflows, and researchers say health systems should focus investments where buy-in is already established. 52% of patients are comfortable with AI handling administrative tasks like scheduling, but they want human oversight elsewhere, especially in billing and diagnosis: 47% of respondents say a human representative is key to building their comfort with medical billing AI. Healthcare providers will need to adapt to these changing patient expectations.
AI Workers Struggle with Low Wages
A San Francisco-based AI company connects contractors with companies like OpenAI, but the workers report poor treatment and low wages: 22% have experienced homelessness, and about 86% of data workers struggled to pay their bills last year. Many rely on public assistance programs, such as food stamps and Medicaid, to get by. The future of these workers remains uncertain.
Public Sentiment Shifts Sharply Against AI
Recent polling reveals a stark change in how Americans view artificial intelligence. A Quinnipiac poll conducted March 19-23 shows that 55% of adults now believe AI will cause more harm than good, up from 44% a year ago. Among Gen Z, concern about AI reducing job opportunities has surged to 70%, from 56% last year. Additionally, 65% oppose building data centers locally, highlighting growing community resistance.

The shift is evident beyond polls: mainstream media and even right-wing figures like Steve Bannon are increasingly discussing AI risks, and farmers in rural areas are organizing against local data center projects, showing how opposition to AI's infrastructure is spreading. While some places have turned to ill-advised bans on AI in mental health, smarter policies focused on mitigating specific risks could be more effective. Elections offer a key opportunity to shape this dialogue: candidates like Alex Bores are gaining traction by addressing these concerns, and activists are turning public sentiment into actionable support. As races heat up, voters' growing awareness of AI's dangers could play a pivotal role in shaping tech policies that protect both jobs and safety.
AI Struggles in TV Recommendations
AI is being tested in the tricky world of television recommendations. Alan Wolk, a media expert, fed email threads to four major AIs (ChatGPT, Claude, Gemini, and Grok) to see how they interpret emotional cues and context. ChatGPT, Claude, and Grok performed well, picking up key concerns like deadlines and tone, while Gemini often missed the mark entirely, fixating on minor details like weather or pizza instead of the main points. This highlights a major challenge: building AI systems with enough emotional intelligence to understand user intent in TV recommendations. The goal is to move from users needing precise keywords to AI inferring what viewers truly want. As Gemini's misses show, success depends on accurate interpretation, making it a critical area for future advancements.
Malicious AI Model on Hugging Face Infected Thousands
A harmful repository on Hugging Face, pretending to be an official OpenAI release, infected over 244,000 Windows users with malware. This malicious software, disguised as a legitimate AI model, recorded keystrokes and gathered sensitive data. The attackers may have exaggerated the download numbers to appear more trustworthy. The incident highlights critical security gaps in AI platforms where malicious actors can easily mislead users. HiddenLayer's research reveals that such attacks target both developers and end-users, raising concerns about the safety of open-source AI models. This underscores the importance of verifying the source and authenticity of any AI tool before use. Moving forward, expect stricter measures from Hugging Face to vet repositories and enhance user awareness about potential threats. Developers should remain cautious when downloading AI models and check for official validations to avoid falling victim to such schemes.
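The advice to verify a model's source before use can be made concrete. As an illustrative sketch (not something the article prescribes), one basic check is to compare a downloaded file's SHA-256 digest against a checksum published by the model's maintainer; the file path and expected digest below are hypothetical placeholders.

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks
    so large model files don't have to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_download(path: str, expected_hex: str) -> bool:
    """Compare a downloaded file's digest to the checksum the
    publisher lists; reject the file on any mismatch."""
    return sha256_of(path) == expected_hex.lower()
```

A checksum only proves the file matches what the publisher listed, so it must be read from the official project page, not from the same untrusted repository that hosts the file.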