Editorial · General AI News
The Reason AI Is Making Cybersecurity Harder - And What We Can Do About It
AI is supposed to make cybersecurity easier, faster, and more reliable. But the truth is, it’s doing the opposite. Instead of being a silver bullet for security, new AI models are creating fresh challenges that even the most advanced systems struggle to keep up with.
The recent release of powerful AI models like OpenAI’s GPT-5.4-Cyber and Anthropic’s Claude Mythos has thrown cybersecurity into chaos. These models, designed for vulnerability discovery and threat detection, are being used by malicious actors to bypass traditional security measures. The result? A new arms race where defenders can’t keep up with the speed at which AI is generating and evolving threats.
Take MIT’s CompreSSM technique as an example. It promises smaller, faster AI models by compressing them during training, but that same approach makes them harder to secure. When a model is compressed early in the training process, critical vulnerabilities are baked into the system before anyone even realizes they exist. This isn’t just a theoretical concern; it’s happening right now. On the CIFAR-10 benchmark, compressed models retain nearly the same accuracy as their larger counterparts, with one major difference: they are faster, and their weaknesses are harder to analyze.
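The editorial doesn’t describe CompreSSM’s internals, so as a point of reference, here is a minimal sketch of the general pattern it gestures at: pruning a model’s weights partway through training rather than after it. This assumes PyTorch and generic magnitude pruning; it illustrates "compression during training" in the abstract, not MIT’s actual procedure.

```python
# Minimal sketch: pruning during training (generic magnitude pruning, assumed
# PyTorch). Illustrative only; this is NOT MIT's CompreSSM implementation.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(3 * 32 * 32, 256), nn.ReLU(), nn.Linear(256, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for step in range(1000):
    x = torch.randn(64, 3 * 32 * 32)          # stand-in for CIFAR-10 batches
    y = torch.randint(0, 10, (64,))
    loss = loss_fn(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Compress early: remove 20% of the smallest weights partway through
    # training. Whatever behaviour the network has learned by this point is
    # locked into a sparser structure before anyone has audited it.
    if step == 300:
        for module in model:
            if isinstance(module, nn.Linear):
                prune.l1_unstructured(module, name="weight", amount=0.2)
```

The security argument in the editorial maps onto the `step == 300` branch: once the pruning happens mid-training, the compressed structure carries forward whatever flaws exist at that moment, and they are harder to inspect after the fact.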
Meanwhile, NIST’s draft AI cybersecurity framework is trying to address these issues, but it is falling short. The framework focuses on securing AI systems and enhancing defense capabilities, yet it fails to account for the complexity of real-world AI deployments. For instance, in orchestration setups where one AI directs another, the hyperparameters of the downstream model are often chosen by an AI rather than by a human, making deterministic control impossible. This leaves organizations vulnerable to attacks that exploit these blind spots.
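To make the orchestration worry concrete, consider a sketch in which one model picks the training configuration for another. The `propose_hyperparameters` function below is a hypothetical stand-in for a call to an orchestrating LLM (no specific vendor API is implied); the randomness simulates the fact that a real model call need not return the same configuration twice.

```python
import json
import random

def propose_hyperparameters(task_description: str) -> dict:
    """Hypothetical stand-in for a call to an orchestrating LLM.
    The randomness simulates the run-to-run variability of a real model call."""
    return {
        "learning_rate": random.choice([1e-4, 3e-4, 1e-3]),
        "batch_size": random.choice([16, 32, 64]),
        "dropout": round(random.uniform(0.0, 0.5), 2),
    }

# The downstream model is trained with whatever the orchestrator returns.
config = propose_hyperparameters("fine-tune a threat-detection classifier")
print(json.dumps(config, indent=2))

# Run the pipeline twice with identical inputs and the configurations can
# differ, so there is no single, fixed setting for an auditor to verify.
```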
The forward-looking question is: how do we regain control? The answer lies in rethinking our approach to AI security. Instead of relying on models that are inherently unpredictable, we need to prioritize transparency and explainability. That means designing AI systems with built-in safeguards that can identify and neutralize threats before they materialize. It also means investing in hybrid solutions that combine human expertise with AI to bridge the gap between automation and trust.
The future of cybersecurity isn’t just about keeping up with AI; it’s about staying ahead of it. If we don’t act now, the next generation of cyberattacks will be faster, more sophisticated, and even harder to detect. The time to rethink our strategy is before the next breach happens, because once it does, it might already be too late.
Editorial perspective — synthesised analysis, not factual reporting.
Terms in this editorial
- CompreSSM: A technique that compresses AI models during training to make them smaller and faster; the process can inadvertently introduce vulnerabilities into the system before they are detected.
- CIFAR-10: A benchmark dataset used to evaluate machine learning models, particularly on object-recognition tasks. It is widely used as a standard measure of model performance, though it has limitations when applied to real-world scenarios.
If you liked this
More editorials.
The False Promise of Fusion Energy
The pursuit of fusion energy has captivated scientists and policymakers for decades. The idea that we could harness the same power that fuels our sun to produce clean, limitless energy on Earth sounds like a sci-fi fantasy made real. Yet, after billions of dollars and countless hours of research, fusion remains a distant goal, consuming more energy than it produces. While recent advancements have brought us closer to this elusive energy source, we must critically assess whether fusion is worth the investment, or whether it is just another false promise that distracts from more immediate solutions.

For decades, fusion has been described as "20 years away," only to remain perpetually out of reach. Even with recent breakthroughs, such as the National Ignition Facility’s demonstration that a fusion reaction can release more energy than the laser energy delivered to the target, the reality is far less glamorous. The facility as a whole still consumes roughly 100 times more energy than a shot produces, and that is for a single experiment. Scaling this up to a viable power plant remains a monumental challenge. Fusion requires temperatures hotter than the sun and materials capable of withstanding such extremes. While scientists have made progress in understanding plasma physics, these challenges suggest that fusion is still decades away from being a practical energy source.

Meanwhile, the world faces an urgent need for clean energy solutions. Renewable energy sources like wind and solar are already viable and scalable. They don’t require futuristic breakthroughs or massive investments in experimental technologies. Yet fusion research continues to dominate headlines and secure funding, diverting attention and resources away from proven solutions. This is not to say that fusion should be abandoned entirely (it has the potential to revolutionize energy production if realized), but it must no longer be treated as a quick fix for our current energy dilemmas.

The allure of fusion lies in its promise: clean, inexhaustible energy with minimal environmental impact. But this vision has been decades in the making, and we’re still nowhere near achieving it. In contrast, renewable energy technologies are already providing tangible benefits. Wind and solar power are reducing carbon emissions today, creating jobs, and stabilizing energy prices. These solutions don’t require breakthroughs; they just need continued investment and policy support.

The fusion research community often argues that the long-term benefits of fusion justify its pursuit. And while it’s true that fusion could one day transform our energy landscape, we must weigh this potential against the immediate needs of a world grappling with climate change, energy insecurity, and economic instability. If we continue to prioritize fusion over proven renewable technologies, we risk missing critical opportunities to address these challenges in a meaningful way.

Ultimately, fusion should be part of a broader portfolio of energy solutions, not the sole focus. While scientists continue their noble pursuit of this clean energy source, policymakers and investors must ensure that practical, near-term solutions like wind and solar receive the attention and resources they deserve. The future of our energy system depends on it.
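For readers who want the arithmetic behind the target-versus-facility distinction above, a rough back-of-envelope version looks like this. The 2.05 MJ and 3.15 MJ figures are the widely reported numbers from the 2022 ignition shot; the roughly 300 MJ of wall-plug electricity needed to fire the lasers is an approximate, commonly cited estimate rather than an official figure.

```latex
% Target gain: fusion yield vs. laser energy delivered to the target
Q_{\mathrm{target}} = \frac{E_{\mathrm{fusion}}}{E_{\mathrm{laser}}}
  \approx \frac{3.15\ \mathrm{MJ}}{2.05\ \mathrm{MJ}} \approx 1.5

% Facility gain: fusion yield vs. electricity drawn to fire the lasers
Q_{\mathrm{facility}} = \frac{E_{\mathrm{fusion}}}{E_{\mathrm{grid}}}
  \approx \frac{3.15\ \mathrm{MJ}}{\sim 300\ \mathrm{MJ}} \approx 0.01
```

In other words, even a shot that shows net gain at the target leaves the facility as a whole consuming on the order of 100 times more energy than the reaction releases.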
AI Isn't Slashing Artists' Pay - But It's Not the Game-Changer Everyone Hoped For Either
The rise of artificial intelligence has sparked heated debates about its impact on creative fields. Many feared that AI would replace artists, leading to widespread job losses and reduced earnings. However, a recent Gallup analysis based on data from the Journal of Cultural Economics reveals a more nuanced reality. While AI is reshaping artistic work, it isn't causing the catastrophic decline in artists' earnings that many predicted.

The study examined various artistic roles, assigning exposure scores to measure how much of each job's tasks could be assisted by generative AI. For instance, music directors and composers had an exposure score of 0.7, meaning a significant portion of their work involves composition or production that AI tools can help draft or modify. In contrast, dancers scored only 0.04, indicating minimal AI involvement because of the live presence and physical skill their roles require.

The data from 2017 to 2024 show that earnings trends for artistic occupations with higher AI exposure are comparable to those with lower exposure. While there is a slight positive trend in earnings for more exposed jobs, the differences aren't statistically significant. This suggests that AI isn't the job-killing force some fear; it isn't even close.

Yet the narrative of AI as a revolutionary tool for artists is also overstated. The study found that while artists are using AI for idea generation and creative exploration, they are less likely to use it for operational tasks like customer interaction or equipment management. This limited application means AI is mostly aiding the early stages of creative work, helping artists experiment, iterate quickly, and organize their workflow. It's a useful tool, but not a game-changer.

The broader impact on employment patterns is mixed too. Some highly exposed artistic occupations saw weaker job growth in 2023 than less exposed ones, but the differences are modest, far from the widespread displacement often assumed in AI-versus-jobs debates. Total hours worked by artists actually increased starting in 2022 and remained elevated through 2024, indicating that while the nature of the work is changing, employment isn't collapsing.

The Gallup Workplace Panel found that artists are more likely than other workers to report using AI for creative tasks: about one in four artists use AI frequently, compared with one in five in the broader workforce. This suggests artists are embracing AI as a productivity tool, not as a replacement for the core skills, such as live performance and interpretation, that remain irreplaceable by machines.

The truth about AI's impact on artists is more subtle than either side of the debate admits. It isn't the job-killing ogre some fear, nor the revolutionary creative force others claim. Instead, AI helps with certain aspects of artistic work without fundamentally altering the demand for human creativity and skill.

Looking ahead, the real story isn't AI replacing artists but how artists are adapting to, and sometimes resisting, this new technology. The future will likely bring more nuanced integration, with AI enhancing certain creative tasks while leaving others untouched. For now, the evidence shows that artists can continue their work without the existential threat many predicted, though they should remain vigilant as AI's role in creative industries continues to evolve. In short, AI's impact on artists' earnings is negligible: neither a salvation nor a disaster. It's time to move beyond hyperbolic claims and focus on the practical ways AI can enhance creativity without undermining the human touch that makes art meaningful.
Bridging the Trust Gap: How Agentic AI is Transforming Financial Operations
The rise of artificial intelligence in finance has brought unprecedented opportunities but also significant challenges. Among these, the lack of trust and governance frameworks for AI systems has been a major hurdle for CFOs and financial leaders. However, recent advancements in agentic AI are beginning to address this gap, offering solutions that prioritize transparency, control, and accountability.

In traditional financial operations, manual processes and black-box AI solutions have left organizations vulnerable to errors, compliance issues, and audit challenges. This is where agentic AI comes into play. By integrating a "glass box" architecture, these systems provide end-to-end visibility into AI-driven decisions and actions. For instance, BlackLine's Agentic Financial Operations model allows finance leaders to independently validate AI outputs, ensuring that the technology aligns with financial accuracy and compliance standards. This shift is critical for CFOs who must balance innovation with the responsibility of maintaining financial integrity.

One key aspect of agentic AI is its ability to unify complex workflows through a governed intelligence layer. BlackLine's Verity™ AI, for example, features a digital workforce of specialized agents designed to execute financial tasks with precision and deliver actionable insights. This level of automation not only reduces manual intervention but also enhances accuracy by embedding auditing capabilities directly into the system. Early adopters have already seen significant improvements, such as a 90% reduction in reconciliation creation time. These advancements demonstrate how agentic AI can bridge the gap between innovation and trust.

The future of finance lies in leveraging AI that is both powerful and trustworthy. By prioritizing transparency and governance, agentic systems like BlackLine's Agentic Financial Operations are setting a new standard for AI adoption. As organizations continue to embrace these technologies, they will not only gain operational efficiency but also strengthen their ability to navigate an increasingly complex financial landscape. The integration of agentic AI represents a step forward in building a future where finance leaders can confidently scale AI without compromising on trust or compliance.
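"Glass box" is a vendor term, and BlackLine's actual architecture isn't documented here, but the underlying idea, that every agent action leaves a record a reviewer can independently re-check, can be sketched in a few lines. The class, field names, and hash-chained log below are hypothetical illustrations, not BlackLine's API.

```python
# Minimal sketch of "glass box" style auditability: each agent action is
# recorded with its inputs, output, and a hash chain so a reviewer can
# independently re-verify the sequence. Hypothetical names; not BlackLine's API.
import hashlib
import json
import time

class AuditedAgent:
    def __init__(self, name):
        self.name = name
        self.log = []                 # append-only audit trail
        self._prev_hash = "0" * 64

    def act(self, task, fn, **inputs):
        """Run one agent step and record an auditable entry for it."""
        output = fn(**inputs)
        entry = {
            "agent": self.name,
            "task": task,
            "inputs": inputs,
            "output": output,
            "timestamp": time.time(),
            "prev_hash": self._prev_hash,
        }
        self._prev_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True, default=str).encode()
        ).hexdigest()
        entry["hash"] = self._prev_hash
        self.log.append(entry)
        return output

# Usage: a reconciliation step whose result a human reviewer can re-check.
agent = AuditedAgent("reconciliation-bot")
diff = agent.act("match_ledger_totals",
                 lambda gl, bank: round(gl - bank, 2),
                 gl=10500.00, bank=10450.25)
print(diff, agent.log[-1]["hash"])
```

The design point is that validation does not depend on trusting the agent: the trail is append-only and each entry is chained to the previous one, so an independent reviewer can recompute and confirm it.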
AI Is About to Make Mental Health Care Fairer - And Clinicians Will Love It
The integration of artificial intelligence (AI) into healthcare has long been met with both excitement and apprehension. While AI offers the promise of more efficient, accurate, and personalized care, there are valid concerns about bias, transparency, and equity, especially in mental health care. But a new framework called SAFE AI is poised to change this narrative by ensuring that AI systems are developed and deployed ethically, transparently, and with patient equity at their core.

For years, the potential of AI in mental health has been clear: from crisis triage to treatment recommendations, AI tools could revolutionize how clinicians deliver care. However, this potential comes with significant risks. Without proper oversight, these systems can inadvertently reflect or amplify biases present in training data, potentially harming underserved populations. This is where SAFE AI steps in.

SAFE AI, a groundbreaking framework developed by the Huntsman Mental Health Institute and published in the Journal of Medical Internet Research, directly addresses these challenges. It integrates ethical checkpoints into standard development workflows, helping organizations proactively identify and mitigate biases before they impact patient care. The framework emphasizes ongoing monitoring for "bias drift," subgroup performance evaluations, and clear communication strategies to ensure AI tools are fair and transparent.

What makes SAFE AI particularly exciting is its focus on clinician-friendliness. Unlike many technical frameworks that require extensive training or specialized knowledge, SAFE AI is designed to be accessible to healthcare professionals. It provides practical guidance for small and medium-sized enterprises building medical AI technologies, ensuring that ethical considerations are woven into the fabric of AI development from the start.

The impact of such a framework can't be overstated. By ensuring AI systems are not only effective but also fair and transparent, SAFE AI lays the groundwork for a future where technology enhances rather than undermines mental health care. This is especially crucial given the growing demand for precision medicine and the need to address healthcare disparities.

As we look ahead, the implications of SAFE AI extend beyond mental health care. The principles it establishes (ethical development, transparency, and patient equity) are foundational for any AI system in healthcare. They challenge the industry to prioritize not just technological advancement but also social responsibility. In an era where AI's role in healthcare is expanding rapidly, frameworks like SAFE AI offer a much-needed beacon of hope. They remind us that technology can be a force for good, provided we approach its development and deployment with intention, care, and a commitment to fairness. The future of mental health care is bright, and with tools like SAFE AI leading the way, it's a future where every patient gets the equitable, ethical care they deserve.
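The framework's concrete mechanics aren't spelled out in this piece, but a subgroup performance evaluation of the kind it calls for can be sketched simply. The field names, the recall metric, and the 5-point drift tolerance below are illustrative assumptions, not SAFE AI's own specification.

```python
# Illustrative subgroup evaluation and "bias drift" check for a crisis-triage
# classifier. Field names, metric, and threshold are assumptions, not SAFE AI.
from collections import defaultdict

def subgroup_recall(records, group_key="ethnicity"):
    """Recall of the escalation flag, computed per demographic subgroup."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        if r["true_label"] == 1:                 # patient actually needed escalation
            totals[r[group_key]] += 1
            hits[r[group_key]] += int(r["predicted"] == 1)
    return {g: hits[g] / totals[g] for g in totals}

def flag_bias_drift(baseline, current, tolerance=0.05):
    """Return subgroups whose recall has slipped more than `tolerance`
    below the audited baseline (a simple bias-drift alarm)."""
    return {g: (baseline[g], current[g])
            for g in baseline
            if g in current and baseline[g] - current[g] > tolerance}
```

The point is less the specific metric than the workflow it implies: the comparison against an audited baseline happens continuously after deployment, not once at launch.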
The Rise of AI in Identity Verification: A Double-Edged Sword for Financial Security
The rapid adoption of artificial intelligence (AI) in identity verification is reshaping the financial security landscape. While AI-driven solutions offer unprecedented accuracy and efficiency, they also introduce new risks and challenges that must be carefully managed. This editorial explores how AI is being leveraged to combat fraud while simultaneously creating vulnerabilities that could undermine trust in digital systems.

The use of AI in identity verification has become essential for financial institutions as they face increasingly sophisticated fraudulent activity. Companies like Vouched and Socure are leading the charge with AI-powered tools that analyze vast amounts of data to detect anomalies and verify identities in real time. These solutions not only enhance security but also streamline the onboarding process, making it faster and more user-friendly for legitimate customers. For instance, Vouched’s IDV platform combines multiple AI models and biometric checks to ensure compliance with KYC and AML regulations, blocking fraudulent activity before it occurs.

However, the reliance on AI introduces potential risks. Synthetic identities created using deepfake technology can bypass traditional verification systems, creating a new frontier of fraud that even the most advanced AI tools struggle to detect. This is where companies like Microblink come in, upgrading their IDV software suites to fight AI-driven fraud. While these advancements are crucial, they also raise ethical concerns about data privacy and bias in AI algorithms.

Looking ahead, the future of identity verification will require a balanced approach that leverages AI’s strengths while mitigating its risks. Financial institutions must invest in robust cybersecurity frameworks and collaborate with regulators to establish standardized protocols for AI-driven systems. Additionally, educating consumers about the potential dangers of synthetic identities and deepfakes is vital to maintaining trust in digital platforms.

In conclusion, AI holds immense promise for enhancing financial security through advanced identity verification solutions. However, its unchecked adoption could lead to significant vulnerabilities if not properly managed. By fostering collaboration between technology providers, regulators, and financial institutions, the industry can harness the benefits of AI while safeguarding against its potential pitfalls. The stakes are high, but with careful planning and execution, AI can remain a powerful tool in the fight against fraud, ensuring a secure and trustworthy digital future.