AI Tool May Replace Costly Cancer Gene Testing
In brief
- Scientists created a new AI tool to predict gene expression in cancer tumors.
- This tool uses digital images of biopsy slides to make predictions.
- It delivers results in minutes at minimal cost, whereas conventional testing takes weeks and costs thousands of dollars.
- The tool can predict the expression of almost 5,000 genes.
- The new tool may make personalized cancer treatment available to more patients.
- It will help scientists discover new biomarkers to guide treatment decisions.
- The tool will be validated in clinical trials to improve cancer care for patients.
Terms in this brief
- Gene Expression
- The process by which information from a gene is used to create a functional product, such as a protein. In cancer research, predicting gene expression can help tailor treatments to individual patients based on how their genes are active.
Read full story at Medical Xpress →
More briefs
AI Agents Show Strong Cybersecurity Skills in New Test
A new test created by researchers at Carnegie Mellon University has shown that AI agents can find and exploit real security vulnerabilities in Google's V8 engine, the JavaScript engine that powers Chrome and other browsers. Among the tested models, Claude Mythos performed best, but it costs twelve times more than GPT-5.5, which came in second. This matters because, as cyber threats grow, AI that can spot vulnerabilities is crucial for keeping systems safe. However, the high cost of advanced models like Claude Mythos could limit their use to large companies with big security teams. For now, developers and researchers must weigh whether the benefits justify the costs of these tools. Looking ahead, expect more focus on making AI cybersecurity tools affordable and accessible while ensuring their abilities aren't misused.
OpenClaw Runs 100 AI Agents on $1.3 Million Monthly
A team led by Peter Steinberger is operating about 100 Codex instances for the open-source project OpenClaw, spending $1.3 million each month on OpenAI’s API. Steinberger views this expense as a research investment aimed at exploring software development without worrying about token costs. The project uses AI agents to code, review pull requests, and identify bugs. This approach could transform how developers work by automating routine tasks and enhancing efficiency. The scale of the operation, with 100 AI instances running simultaneously, is unprecedented in open-source projects, highlighting the potential for AI-driven development tools. This experiment sets a new benchmark for AI integration in software development. Watch for further insights into how this model impacts productivity and collaboration among developers.
AI Model Achieves Near-Full Performance Using Just 12.5% of Its Experts
Researchers have developed a new type of AI model called EMO, which significantly reduces the number of experts needed while maintaining high performance. Unlike traditional models that route to experts based on word types, EMO uses domain-specific experts, allowing it to cut 75% of the experts while losing only about one percentage point of accuracy. This breakthrough could make these models more practical for devices with limited memory. This development matters because it addresses a key challenge in AI: efficiency. By using fewer experts, the model becomes lighter and faster, making it easier to deploy on less powerful hardware. The researchers showed that EMO can achieve near-full performance with just 12.5% of its experts, a major step forward for modular AI. This innovation opens the door for more efficient AI applications in areas like edge computing and mobile devices. As research continues, we can expect further improvements in how AI models are structured and optimized, potentially leading to even more resource-efficient systems.
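The brief doesn't describe EMO's exact architecture, but the underlying idea of expert pruning in a mixture-of-experts layer can be illustrated with a toy sketch. Everything below is an assumption for illustration: top-1 routing, random linear experts, and a hypothetical rule for which experts to keep for a given domain.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 8            # hidden dimension (toy value)
N_EXPERTS = 16   # full expert count

# Toy experts: each is a single linear map D -> D (not EMO's real experts).
experts = [rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(N_EXPERTS)]
router_w = rng.standard_normal((D, N_EXPERTS)) / np.sqrt(D)

def moe_forward(x, allowed):
    """Top-1 mixture-of-experts forward pass restricted to `allowed` experts."""
    logits = x @ router_w                 # one routing score per expert
    mask = np.full(N_EXPERTS, -np.inf)
    mask[allowed] = 0.0                   # disallowed experts can never win
    idx = int(np.argmax(logits + mask))   # best allowed expert
    return experts[idx] @ x, idx

x = rng.standard_normal(D)

# Full model: all 16 experts available.
full_out, full_idx = moe_forward(x, np.arange(N_EXPERTS))

# Pruned model: keep only 12.5% of experts (2 of 16) -- here a hypothetical
# choice standing in for "the experts that fire most on the target domain".
kept = np.array([full_idx, (full_idx + 1) % N_EXPERTS])
pruned_out, pruned_idx = moe_forward(x, kept)

# If the kept set contains the expert the full model would have chosen,
# the output is identical: the intuition behind near-full performance.
assert pruned_idx == full_idx
assert np.allclose(pruned_out, full_out)
```

The sketch shows why dropping experts can be nearly free: on inputs whose preferred expert survives the pruning, the computation is unchanged, and the reported one-point accuracy drop comes only from inputs routed to a removed expert.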
AI Struggles to Match Physicists at Replicating Collider Experiments
AI systems are increasingly tested on complex scientific tasks, but a new benchmark called Collider-Bench reveals they still fall short of human expertise. Designed to evaluate whether language-model agents can reproduce experimental analyses from the Large Hadron Collider (LHC) using only public papers and open software, the benchmark highlights significant challenges. Unlike internal tools used by LHC researchers, publicly available resources lack precision, forcing AI agents to rely on physical reasoning, trial-and-error, and domain knowledge to fill gaps in information. The results show that no AI agent reliably outperforms a physicist-in-the-loop approach. Each task requires translating published analyses into executable pipelines, predicting collision event yields, and staying within strict computational cost limits. While the AI systems demonstrated some capabilities, they often failed qualitative checks, such as avoiding fabricated or duplicated results. This suggests that while AI can assist in scientific workflows, human expertise remains crucial for accuracy and reliability. Looking ahead, researchers will likely refine these benchmarks to better align with real-world scientific challenges. The findings underscore the need for hybrid approaches where AI supports but doesn't replace human scientists. As AI tools evolve, their integration into high-energy physics could enhance discovery processes, but collaboration with experts will remain essential for success.
AI Breakthrough in Decoding EEG Signals for Better Clinical Trust
Researchers have unveiled a new method that makes neural networks more transparent when processing EEG data, a critical step toward building systems that doctors can trust. By applying sparse autoencoders to three different models (SleepFM, REVE, and LaBraM), they extracted features tied to specific clinical factors like age and medication. This approach not only reveals how the AI processes information but also exposes hidden biases, such as when the model conflates a patient’s age with their medical condition. The findings highlight weaknesses in these systems, showing that certain manipulations can disrupt overall performance or make the models focus on irrelevant details. This transparency is essential for ensuring AI reliability in healthcare decisions. The researchers also developed tools to translate these hidden features into understandable EEG patterns, making it easier to spot when something goes wrong. As this technology advances, we might see more trustworthy AI systems that provide clearer insights into patient data.
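The brief doesn't give the researchers' training details, but the general sparse-autoencoder recipe (an overcomplete ReLU encoder with an L1 sparsity penalty, fit to a model's internal activations) can be sketched in a few lines. The data here is random noise standing in for EEG-model embeddings; dimensions, learning rate, and penalty weight are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

D = 16    # embedding dimension (stand-in for an EEG model's hidden size)
H = 64    # overcomplete dictionary: more features than dimensions
N = 512   # number of toy activation samples

X = rng.standard_normal((N, D))   # placeholder for real model activations

W_enc = rng.standard_normal((D, H)) * 0.1
b_enc = np.zeros(H)
W_dec = rng.standard_normal((H, D)) * 0.1

lr, l1 = 1e-2, 1e-3

# Baseline reconstruction error before training.
mse0 = float(((np.maximum(X @ W_enc + b_enc, 0.0) @ W_dec - X) ** 2).mean())

for step in range(500):
    Z = np.maximum(X @ W_enc + b_enc, 0.0)   # ReLU codes: the sparse features
    X_hat = Z @ W_dec
    err = X_hat - X

    # Gradients of mean squared reconstruction error + L1 penalty on codes.
    gW_dec = Z.T @ err / N
    gZ = err @ W_dec.T / N + l1 * np.sign(Z) / N
    gZ[Z <= 0] = 0.0                         # ReLU gradient mask
    gW_enc = X.T @ gZ
    gb = gZ.sum(axis=0)

    W_dec -= lr * gW_dec
    W_enc -= lr * gW_enc
    b_enc -= lr * gb

Z = np.maximum(X @ W_enc + b_enc, 0.0)
mse = float(((Z @ W_dec - X) ** 2).mean())
sparsity = float((Z > 0).mean())  # fraction of features active per sample
```

Each column of `W_dec` is a candidate interpretable direction; in the study's setting, individual features would then be correlated with clinical variables such as age or medication to check what the model has actually learned.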