AI-Generated Child Exploitation on the Rise
In brief
- North Dakota received a record 2,700 online tips about child sexual abuse material in 2025, and a growing share of those cases involved AI-assisted exploitation.
- Nationally, such reports topped 1.5 million in 2025, a 1,300% increase over the previous year.
- AI-generated material is especially hard to detect and investigate.
- Investigators say they need updated laws, better technology, and more training to keep pace with AI-generated child exploitation, and are asking lawmakers for those tools.
Read full story at Valley News Live →
More briefs
Malware Hits TanStack and Other AI Packages
Hackers have compromised packages in the TanStack ecosystem along with AI packages from Mistral AI and Guardrails AI. The malware can steal credentials for cloud providers, cryptocurrency wallets, and messaging apps, and it affects 42 packages and 84 versions across the TanStack ecosystem. The incident carries a critical severity score of 9.6 out of 10, and the attackers are likely to launch further attacks using the stolen credentials.
Americans Concerned About AI Impact on Mental Health
Most Americans worry that AI could make mental health problems worse: 43% say they are very concerned, up from 35% in June 2025. Most are also wary of AI in a therapeutic role, with 66% uncomfortable with the idea of working with an AI therapist and only 23% very or somewhat comfortable. Younger Americans are more open to the idea; adults under 30 are about twice as likely as older Americans to say they would be comfortable working with an AI therapist. Whether AI will ever rival human therapists remains an open question.
AI Breakthrough Reduces Reward Hacking Vulnerabilities
A new framework called Auto-Rubric as Reward (ARR) addresses a critical weakness in AI alignment: reward hacking. Current reward-modeling methods compress human preferences into a single scalar score, which AI systems can learn to game. ARR instead decomposes those preferences into clear, explicit criteria, producing rubrics that are easy to understand and verify. The approach not only reduces bias but also allows immediate deployment with minimal oversight: the framework turns an AI model's internal knowledge into structured evaluation guidelines, improving reliability and efficiency in tasks like text-to-image generation. By replacing vague scores with concrete evaluation dimensions, ARR makes AI decisions more transparent and better aligned with human judgment. Early tests show ARR outperforming existing methods across a range of benchmarks, offering a more robust alternative for training generative models. Its success opens new possibilities for AI development, particularly in areas that require nuanced, human-like evaluation, and future work could refine the method further, making AI systems more trustworthy and less prone to manipulation.
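To make the mechanism concrete, here is a minimal Python sketch of rubric-based scoring. The criterion names and keyword checks are hypothetical stand-ins for the LLM-generated rubrics and judges ARR actually uses; this is an illustration of the idea, not the framework's implementation:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Criterion:
    """One explicit, human-readable evaluation dimension."""
    name: str                     # e.g. "mentions the requested subject"
    check: Callable[[str], bool]  # toy predicate standing in for an LLM judge
    weight: float = 1.0           # relative importance of this criterion

def rubric_reward(response: str, rubric: list[Criterion]) -> float:
    """Aggregate per-criterion pass/fail results into one reward.

    Unlike a single opaque scalar, each criterion's outcome stays
    visible, so a gamed response fails individually auditable checks.
    """
    total_weight = sum(c.weight for c in rubric)
    earned = sum(c.weight for c in rubric if c.check(response))
    return earned / total_weight

# Hypothetical rubric for a caption-writing task (illustrative only).
rubric = [
    Criterion("mentions the subject", lambda r: "cat" in r.lower()),
    Criterion("mentions the setting", lambda r: "garden" in r.lower()),
    Criterion("is reasonably concise", lambda r: len(r.split()) <= 20, weight=0.5),
]

print(rubric_reward("A cat dozing in a sunny garden.", rubric))  # -> 1.0
```

Because each criterion's outcome can be inspected individually, a response that exploits one dimension still visibly fails the others, which is the transparency gain ARR claims over a single learned scalar.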
AI Safety Researchers Tackle "LLM Psychosis" Phenomenon
A group of researchers from the Monoid AI Safety Hub has launched a project to investigate and address the growing concern known as "LLM Psychosis." The phenomenon, also called Chatbot-Induced Psychosis or GPT Cult, describes individuals who become deeply reliant on large language models (LLMs) like ChatGPT for mental stability, leading to harmful behavioral changes. Early findings suggest that some users experience severe distress when access to these AI systems is restricted, underscoring the urgent need for better understanding and mitigation strategies. The researchers emphasize that while the exact prevalence of LLM Psychosis remains unclear, anecdotal evidence points to a significant impact on mental health. Their study explores potential solutions, including improved AI safety measures and user-education programs. The team has shared initial insights in a detailed report, alongside a GitHub repository for further collaboration. They call for more comprehensive studies to validate their findings and develop effective interventions, and urge both developers and users to stay alert to the psychological effects of AI reliance and to seek support if needed. The work marks an important step toward addressing a pressing issue in an increasingly AI-dependent world.
OpenAI Releases Comprehensive AI Alignment Course Materials
OpenAI has released detailed course materials for its new Iliad Intensive, a month-long, in-person AI alignment program held every second month. The curriculum targets individuals with strong mathematical and scientific backgrounds, offering deep dives into topics like singular learning theory and data attribution through mathematical exercises, lecture notes, and coding challenges. Around 20 contributors helped develop the materials for the April 2026 cohort. The release aims to increase transparency about the program, gather feedback on the materials, and enable self-study, and OpenAI plans to update and expand the content over time. The materials are currently accessible in a Google Doc, with future versions expected to move to a dedicated website. The release marks a significant step in OpenAI's efforts to democratize AI safety knowledge and improve collaboration within the AI research community; future updates will likely refine existing modules and add new ones as feedback arrives and understanding of AI alignment challenges evolves.