Utah Suspends AI Chatbot Pilot Program
In brief
- Utah's Medical Licensing Board called for the immediate suspension of the state's pilot program with an AI company.
- The program let a chatbot evaluate patients and recommend prescription renewals for nearly 200 chronic condition drugs.
- The program's suspension highlights the need for uniform regulatory standards for autonomous clinical AI.
- At least 47 states are considering over 250 bills governing clinical AI, producing a patchwork of rules.
- This comes as the country faces a worsening physician shortage, with national projections showing shortfalls of tens of thousands of doctors over the next decade.
- The lack of a proper regulatory framework for clinical AI could hinder its potential to address the physician shortage.
- As the use of autonomous clinical AI continues to evolve, a fitting regulatory framework is needed to ensure patient safety and effective care, and new regulations are expected to be proposed to address these concerns.
Read full story at Penn LDI →
More briefs
FDA Lacks Oversight of AI Medical Devices for Kids
In the US, the FDA regulates artificial intelligence algorithms that meet the definition of a medical device, yet many of these devices lack authorization for use in children. Only 4.4% of FDA-authorized AI-enabled medical devices include specific pediatric age labeling, and nearly 60% include no patient age information at all. This gap in oversight may lead to unreliable performance for pediatric patients. The FDA must find a way to balance transparency with support for manufacturers developing safe and effective AI devices for kids.
Pennsylvania Sues AI Chatbot Owner
Pennsylvania is suing the owner of character.ai, alleging that the chatbot represented itself as a physician licensed in the state. The case highlights the risks of using AI products in patient care and shows why clear rules are needed for AI in healthcare: with over 100 million people in the US using online health services, wrong advice from an AI chatbot can cause harm at scale. A documented governance process is key when deploying AI, and providers must show they have taken steps to manage risks and protect patient data. The case will now move forward to court.
Cook County Wrestles With AI Surveillance Expansion
Cook County commissioners debated the expansion of AI-powered surveillance systems, including facial recognition technology, at the Cook County Jail. Sheriff Tom Dart proposed a $1.12 million contract with Safeware to use Briefcam software, which aims to detect security breaches. However, community groups and advocates raised concerns about potential privacy violations and false positives, citing recent failures in jail oversight. The commissioners deferred the AI surveillance proposal but approved a separate deal for automatic license plate readers, which are intended to help reduce car thefts and related crimes. The decision highlights growing tensions over balancing public safety with privacy concerns. As the technology evolves, policymakers must carefully weigh its benefits against its risks to ensure equitable and ethical use.
Santa Barbara Prosecutes First Case Under AI-CSAM Law
Santa Barbara authorities have made history by prosecuting Dayton Aldrich under California's new law targeting AI-generated child sexual abuse material (CSAM). The case marks the first on the Central Coast to use Assembly Bill 1831, which criminalizes such content. Prompted by deepfake technology and "nudify" apps that create realistic images, the law closes a legal gap by treating AI-generated CSAM like real abuse material. The National Center for Missing and Exploited Children (NCMEC) has seen a surge in reports of AI-CSAM, jumping from 4,700 in 2023 to 1.5 million in 2025. Investigators linked Aldrich to explicit chats on Kik, where he expressed an "unusual interest" in minors. They found multiple CSAM images, including some depicting a former child actress and a TikTok personality. Aldrich, once a victim program assistant, faced severe penalties but pleaded guilty to one charge, receiving a year in jail and two years' probation. His arrest also revealed his possession of over 20 guns, highlighting the broader societal risks. This case underscores the urgent need to combat AI-CSAM and protect vulnerable youth.
U.S. Clears Chinese Firms for AI Chips, But No Chips Are Shipped
The U.S. has given permission to about ten major Chinese companies, including Alibaba, Tencent, and ByteDance, to purchase up to 75,000 Nvidia H200 chips each. Despite this approval, however, not a single chip has been shipped. According to the Commerce Secretary, China is blocking these purchases to support its own domestic chip industry. The situation highlights the ongoing tension between U.S. export policies and China's efforts to develop its semiconductor sector. The restrictions on chip exports aim to limit China's advancements in AI and other technologies, but the permitted companies, which are key players in the global tech market, could face significant challenges if they cannot access these advanced chips. Looking ahead, this could lead to further diplomatic discussions or changes in trade policy. It remains unclear whether the U.S. will ease these restrictions or whether China will find alternative ways to obtain the technology its industries need.