
Editorial · General AI News

Why AI Is Making Cybersecurity Harder, and What We Can Do About It


AI is supposed to make cybersecurity easier, faster, and more reliable. But the truth is, it’s doing the opposite. Instead of being a silver bullet for security, new AI models are creating fresh challenges that even the most advanced systems struggle to keep up with.

The recent release of powerful AI models like OpenAI’s GPT-5.4-Cyber and Anthropic’s Claude Mythos has thrown cybersecurity into chaos. These models, built for vulnerability discovery and threat detection, are being turned around by malicious actors to bypass traditional security measures. The result is an arms race in which defenders cannot match the speed at which AI generates and evolves threats.

Take MIT’s CompreSSM technique as an example. It claims to make AI models smaller and faster during training, but that same approach makes them harder to secure. Compressing a model early in the training process bakes critical vulnerabilities into the system before anyone realizes they exist. This isn’t just a theoretical concern; it’s happening right now. On the CIFAR-10 benchmark, compressed models retain nearly the same accuracy as their full-size counterparts, with one major difference: they are faster, and their pruned internals are harder to analyze for weaknesses.
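
To make the concern concrete, here is a minimal sketch of compression applied mid-training. CompreSSM’s actual algorithm isn’t described here, so the sketch substitutes plain L1 magnitude pruning in PyTorch as an illustrative stand-in; the toy model, the random placeholder batches, and the step at which pruning fires are all assumptions.

```python
# Illustrative stand-in for early-training compression (not CompreSSM itself):
# simple L1 magnitude pruning applied long before the model converges.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy classifier sized for CIFAR-10 (32x32 RGB images, 10 classes).
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(32 * 32 * 3, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(500):
    # Random placeholder batch; real code would iterate a CIFAR-10 loader.
    x = torch.randn(64, 3, 32, 32)
    y = torch.randint(0, 10, (64,))
    loss = loss_fn(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Compress early: zero out 30% of each linear layer's weights at step
    # 100, long before convergence. Whatever behaviour those weights were
    # encoding is silently discarded, with no audit trail of what was removed.
    if step == 100:
        for module in model.modules():
            if isinstance(module, nn.Linear):
                prune.l1_unstructured(module, name="weight", amount=0.3)
```

The security point is the final block: what survives compression is decided by a magnitude heuristic in the middle of training, so no one ever inspects the capacity that was removed or the shortcuts the smaller network learns to compensate.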

Meanwhile, NIST’s draft AI cybersecurity framework is trying to address these issues, but it falls short. The framework focuses on securing AI systems and enhancing defense capabilities, yet it fails to account for the complexity of real-world AI deployments. For instance, when models are used in orchestration, where one AI directs another, the hyperparameters are often chosen by the lead AI itself, making deterministic control impossible. This leaves organizations exposed to attacks that exploit these blind spots.
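
The orchestration blind spot is easier to see in code. The sketch below is hypothetical, not any vendor’s API: query_lead_model and configure_downstream are invented names, and a real deployment would make an actual LLM call where the stub returns a canned string.

```python
import json

def query_lead_model(prompt: str) -> str:
    # Stand-in for a real LLM call (invented helper, not a real API).
    # A real deployment would sample from a model here, and sampling is
    # exactly the non-deterministic step that defeats reproducible control.
    return '{"learning_rate": 0.001, "batch_size": 64, "epochs": 10}'

def configure_downstream(task_description: str) -> dict:
    prompt = (
        "Return JSON hyperparameters {learning_rate, batch_size, epochs} "
        f"for this task: {task_description}"
    )
    raw = query_lead_model(prompt)
    # Blind spot: no schema validation, no bounds checking, no record of
    # why these values were chosen, and no way to replay the decision later.
    return json.loads(raw)

print(configure_downstream("classify phishing emails"))
```

Nothing in this path is deterministic or auditable end to end, and that is the gap the draft framework leaves open.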

The forward-looking question is: how do we regain control? The answer lies in rethinking our approach to AI security. Instead of relying on models that are inherently unpredictable, we need to prioritize transparency and explainability. That means designing AI systems with built-in safeguards that can identify and neutralize threats before they materialize. It also means investing in hybrid solutions that combine human expertise with AI to bridge the gap between automation and trust.
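
One possible shape for that hybrid safeguard is sketched below. It is an assumption about a design, not an established pattern: the Proposal fields, the 0.95 confidence threshold, and the quarantine carve-out are all illustrative choices.

```python
# Human-in-the-loop gate: the detector may flag and propose, but only
# low-risk, high-confidence actions run automatically; everything else
# waits for an analyst's explicit sign-off.
from dataclasses import dataclass

@dataclass
class Proposal:
    threat: str        # what the detector flagged
    action: str        # the remediation it wants to run
    confidence: float  # detector's own score, 0.0 to 1.0
    rationale: str     # explanation surfaced to the analyst

def execute(proposal: Proposal, human_approved: bool) -> str:
    # Reversible containment at very high confidence can run unattended.
    if proposal.action == "quarantine" and proposal.confidence > 0.95:
        return f"auto-executed: {proposal.action} ({proposal.threat})"
    # Anything else requires a human decision, backed by the rationale.
    if human_approved:
        return f"executed with approval: {proposal.action}"
    return f"held for review: {proposal.rationale}"

p = Proposal("lateral movement", "isolate host", 0.72,
             "anomalous SMB traffic from workstation-14")
print(execute(p, human_approved=False))  # held for review: anomalous SMB ...
```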

The future of cybersecurity isn’t just about keeping up with AI; it’s about staying ahead of it. If we don’t act now, the next generation of cyberattacks will be faster, more sophisticated, and even harder to detect. The time to rethink our strategy is before the next breach happens, because once it does, it might already be too late.

Editorial perspective — synthesised analysis, not factual reporting.

Terms in this editorial

CompreSSM
A technique that compresses AI models during training to make them smaller and faster. Because the compression happens before training finishes, it can embed vulnerabilities in the system before anyone has a chance to detect them.
CIFAR-10
A benchmark dataset of small labelled images used to evaluate machine learning models, particularly on object-recognition tasks. It serves as a standard yardstick for model accuracy, though results on it do not always transfer to real-world scenarios.
