AI Breakthrough Speeds Brain Tumor Diagnosis
In brief
- A new AI model named GMAP can predict key genetic features of brain tumors directly from routine tissue slides, which are already widely used in hospitals.
- Traditionally, identifying these genetic markers requires weeks of specialized testing.
- GMAP instead reads the existing slides to detect four critical genetic traits, including IDH mutations and chromosome alterations, with accuracy matching or exceeding current methods.
- The model was trained on data from over 877 patients and tested across 13 external hospitals, achieving an impressive 93% accuracy in internal testing and above 87% in real-world settings.
- This breakthrough could significantly reduce the time needed for diagnosis, particularly benefiting areas with limited access to advanced genetic testing.
- GMAP’s success lies in its ability to identify patterns that align with what human pathologists expect, such as specific cell shapes and tissue arrangements.
- This not only speeds up treatment decisions but also provides insights into how these genetic changes manifest visually in the slides.
- The development of GMAP marks a major step toward more efficient and accessible brain tumor diagnostics worldwide.
Terms in this brief
- GMAP
- A new AI model that predicts key genetic features of brain tumors directly from routine tissue slides, significantly speeding up diagnosis and reducing reliance on time-consuming specialized testing.
Read full story at Earth.com →
More briefs
AI Tool May Replace Costly Cancer Gene Testing
Scientists have created an AI tool that predicts gene expression in cancer tumors from digital images of biopsy slides. Where conventional genetic testing takes weeks and costs thousands of dollars, the tool delivers results in minutes at far lower cost, and it can predict the expression of almost 5,000 genes. By lowering that barrier, it could make personalized cancer treatment available to more patients and help scientists discover new biomarkers to guide treatment decisions. The tool will next be validated in clinical trials to improve cancer care for patients.
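The article does not describe the tool's architecture, but image-to-expression models of this kind are often sketched as a vision backbone producing per-tile features, pooled into one slide embedding and mapped to a ~5,000-dimensional output (one value per gene). The sketch below is purely illustrative under that assumption; all names, shapes, and the linear head are hypothetical, not taken from the article.

```python
import numpy as np

def predict_expression(tile_features: np.ndarray, W: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Pool per-tile features into one slide embedding, then map it to
    per-gene expression estimates with a linear head (illustrative only)."""
    slide_embedding = tile_features.mean(axis=0)   # average-pool tiles -> one slide vector
    return slide_embedding @ W + b                 # one predicted value per gene

rng = np.random.default_rng(0)
n_tiles, feat_dim, n_genes = 32, 512, 5000         # ~5,000 genes, as in the brief
tiles = rng.normal(size=(n_tiles, feat_dim))       # stand-in for backbone features
W = rng.normal(size=(feat_dim, n_genes)) * 0.01    # hypothetical learned weights
b = np.zeros(n_genes)

preds = predict_expression(tiles, W, b)
print(preds.shape)  # (5000,)
```

In practice the pooling and head would be learned jointly with the backbone; the point here is only the shape of the problem: one slide in, thousands of expression values out.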
AI Agents Show Strong Cybersecurity Skills in New Test
A new test created by researchers at Carnegie Mellon University has shown that AI agents can find and exploit real security weaknesses in Google's V8 engine, the JavaScript engine that powers the Chrome browser. Among the tested models, Claude Mythos performed best, but it costs twelve times more than GPT-5.5, which came in second. This matters because as cyber threats grow, having AI that can spot vulnerabilities is crucial for keeping systems safe. However, the high cost of advanced models like Claude Mythos could limit their use to large companies with big security teams. For now, developers and researchers must decide whether the benefits outweigh the costs of using these tools. Looking ahead, expect more focus on making AI cybersecurity tools affordable and accessible while guarding against misuse of their capabilities.
OpenClaw Runs 100 AI Agents on $1.3 Million Monthly
A team led by Peter Steinberger is operating about 100 Codex instances for the open-source project OpenClaw, spending $1.3 million each month on OpenAI's API. Steinberger views this expense as a research investment aimed at exploring software development without worrying about token costs. The project uses AI agents to code, review pull requests, and identify bugs. This approach could transform how developers work by automating routine tasks and enhancing efficiency. The scale of the operation, with 100 AI instances running simultaneously, is unprecedented in open-source projects, highlighting the potential for AI-driven development tools. This experiment sets a new benchmark for AI integration in software development. Watch for further insights into how this model impacts productivity and collaboration among developers.
AI Model Achieves Near-Full Performance Using Just 12.5% of Its Experts
Researchers have developed a new type of AI model called EMO, which significantly reduces the number of experts needed while maintaining high performance. Unlike traditional mixture-of-experts models that assign experts based on word types, EMO uses domain-specific experts, allowing it to cut 75% of the experts while losing only about one percentage point of accuracy. This breakthrough could make such models practical for devices with limited memory. The development matters because it addresses a key challenge in AI: efficiency. With fewer experts, the model becomes lighter and faster, making it easier to deploy on less powerful hardware. The researchers showed that EMO can achieve near-full performance with just 12.5% of its experts, a major step forward for modular AI. This innovation opens the door to more efficient AI applications in areas like edge computing and mobile devices. As research continues, we can expect further improvements in how AI models are structured and optimized, potentially leading to even more resource-efficient systems.
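The core idea, routing each input only among a retained subset of experts, can be sketched generically. This is not EMO's actual algorithm (the article gives no implementation details); it is a minimal mixture-of-experts forward pass, with hypothetical names and shapes, where 12.5% of the experts survive pruning and the router mixes the top-2 of those.

```python
import numpy as np

def moe_forward(x, experts, router_w, keep_idx, top_k=2):
    """Route x only among retained experts (keep_idx), mixing the top_k
    of them by softmax router weight. Illustrative sketch, not EMO itself."""
    logits = router_w[keep_idx] @ x                    # scores for retained experts only
    top = np.argsort(logits)[-top_k:]                  # top-k among the survivors
    weights = np.exp(logits[top] - logits[top].max())  # stable softmax over top-k
    weights /= weights.sum()
    # weighted sum of the selected experts' outputs
    return sum(w * experts[keep_idx[i]](x) for w, i in zip(weights, top))

rng = np.random.default_rng(1)
d, n_experts = 16, 64
# each "expert" is just a random linear map here, standing in for an FFN block
experts = [lambda x, M=rng.normal(size=(d, d)): M @ x for _ in range(n_experts)]
router_w = rng.normal(size=(n_experts, d))
keep_idx = np.arange(n_experts)[: n_experts // 8]      # keep 12.5% (8 of 64 experts)

y = moe_forward(rng.normal(size=d), experts, router_w, keep_idx)
print(y.shape)  # (16,)
```

The memory saving follows directly: only the retained experts' weights need to be loaded, so keeping 8 of 64 experts cuts expert-parameter storage by 87.5%, which is what makes the approach attractive for memory-constrained devices.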
AI Struggles to Match Physicists at Replicating Collider Experiments
AI systems are increasingly tested on complex scientific tasks, but a new benchmark called Collider-Bench reveals they still fall short of human expertise. Designed to evaluate whether language-model agents can reproduce experimental analyses from the Large Hadron Collider (LHC) using only public papers and open software, the benchmark highlights significant challenges. Unlike the internal tools available to LHC researchers, publicly available resources lack precision, forcing AI agents to rely on physical reasoning, trial-and-error, and domain knowledge to fill gaps in information. The results show that no AI agent reliably outperforms a physicist-in-the-loop approach. Each task requires translating a published analysis into an executable pipeline, predicting collision event yields, and staying within strict computational-cost constraints. While the AI systems demonstrated some capabilities, they often failed qualitative checks, for example by fabricating or duplicating results. This suggests that while AI can assist in scientific workflows, human expertise remains crucial for accuracy and reliability. Looking ahead, researchers will likely refine these benchmarks to better align with real-world scientific challenges. The findings underscore the need for hybrid approaches where AI supports but doesn't replace human scientists. As AI tools evolve, their integration into high-energy physics could enhance discovery processes, but collaboration with experts will remain essential for success.