Editorial · AI Safety
AI Agents Are Costly and Risky. Here’s Why Most Will Fail.
The promise of AI agents, intelligent systems designed to perform tasks with minimal human oversight, is undeniable. They’re supposed to revolutionize industries, boost efficiency, and deliver tenfold growth. But as the hype around agentic AI reaches new heights, a troubling reality emerges: most projects are failing. According to Gartner, over 40% of agentic AI initiatives will be canceled by the end of 2027 due to escalating costs, unclear business value, or inadequate risk controls.
The allure of agentic AI is clear. Vendors and consultants promise transformative results, painting a picture of autonomous systems that operate seamlessly across organizations. But this vision often clashes with reality. Many so-called agentic AI tools are nothing more than repackaged chatbots or robotic process automation scripts. These systems lack true agency, relying on rigid prompts and predefined rules that limit their ability to adapt or learn from new situations.
The risks extend beyond technical limitations. Implementing agentic AI requires significant investment, not just in technology but also in retraining workforces and restructuring processes. Companies often underestimate these costs, leading to budget overruns and strained resources. For instance, a recent study revealed that 60% of businesses pursuing agentic AI projects face unexpected expenses, with some seeing their costs balloon by over 50%.
Moreover, the potential for failure is high when expectations are mismatched with actual capabilities. A company might deploy an AI agent to streamline customer service only to discover it struggles with nuanced conversations. This disconnect between promise and performance can erode trust in agentic AI systems and delay broader adoption.
The key to success lies in a balanced approach. While tenfold growth is a tempting target, focusing on incremental improvements, such as reducing overhead by 10%, can provide more immediate and manageable returns. It’s essential to start small, test thoroughly, and ensure that any deployment aligns with specific business needs rather than chasing hype.
As the race to embrace agentic AI accelerates, it’s crucial for organizations to maintain a critical perspective. The future of AI agents is bright, but only for those willing to invest wisely, manage risks carefully, and stay grounded in practical realities.
Editorial perspective — synthesised analysis, not factual reporting.
Terms in this editorial
- AI agents: intelligent systems designed to perform tasks with minimal human oversight. They aim to revolutionize industries by boosting efficiency and driving growth, but many projects fail due to high costs and technical limitations.
- Agentic AI: AI that operates autonomously, making decisions and performing tasks without direct human intervention. It is often overhyped; many such systems are little more than repackaged chatbots or rigid scripts.
More editorials
The Future of AI Safety is Conversational
The rapid advancement of large language models (LLMs) has brought about a wave of innovation and concern. While these models hold immense potential across industries, their safety has become a critical issue. Recent studies highlight vulnerabilities in LLMs when subjected to adversarial prompts, which can lead to harmful outputs. A new framework called C3LLM is emerging as a promising way to assess these risks more accurately.

Catastrophic failures in LLMs often occur during conversations rather than isolated interactions. Traditional red-teaming approaches rely on human evaluators designing specific prompts, but this method fails to capture the full spectrum of possible conversational threats. The C3LLM framework addresses this limitation by modeling conversations as multi-turn dialogues using a graph where nodes represent prompts and edges represent semantic relationships between them.

By constructing this graph, researchers can define probability distributions over query sequences and determine the likelihood of harmful responses. This approach provides high-confidence probabilistic bounds on attack success rates, offering a more comprehensive picture of conversational risk. The framework uses Clopper-Pearson confidence intervals to calculate lower and upper bounds, ensuring reliable statistical certification.

The implications for AI safety are significant. By focusing on conversational threats, the C3LLM framework enables researchers to develop more robust safeguards against malicious use. This shift from empirical spot-checking to statistical certification represents a major step forward in understanding and mitigating catastrophic risks in LLMs.

Looking ahead, integrating such frameworks into standard AI development pipelines will be crucial. As models grow larger and more powerful, the need for rigorous safety testing becomes even more pressing. The C3LLM framework sets a new benchmark for evaluating conversational threats, paving the way for safer and more reliable AI systems.
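The editorial doesn’t reproduce C3LLM’s code, but the statistical core it describes is small enough to sketch. The snippet below is a minimal illustration under stated assumptions, not the framework itself: dialogue sequences are sampled with a plain uniform random walk over the prompt graph (the real framework may define a richer distribution), and the certified bound is an exact Clopper-Pearson interval on the observed rate of harmful responses.

```python
import random
from scipy.stats import beta

def sample_dialogue(graph: dict[str, list[str]], start: str, turns: int) -> list[str]:
    """Sample a query sequence via a uniform random walk on the prompt
    graph (nodes = prompts, edges = semantic relatedness). A stand-in
    for whatever sequence distribution the real framework defines."""
    seq = [start]
    for _ in range(turns - 1):
        neighbors = graph.get(seq[-1], [])
        if not neighbors:
            break
        seq.append(random.choice(neighbors))
    return seq

def clopper_pearson(successes: int, trials: int, alpha: float = 0.05) -> tuple[float, float]:
    """Exact two-sided Clopper-Pearson interval for a binomial proportion,
    here the rate of harmful responses over sampled multi-turn dialogues."""
    lower = 0.0 if successes == 0 else beta.ppf(alpha / 2, successes, trials - successes + 1)
    upper = 1.0 if successes == trials else beta.ppf(1 - alpha / 2, successes + 1, trials - successes)
    return lower, upper

# e.g. 7 harmful responses observed across 500 sampled dialogues
print(clopper_pearson(7, 500))  # roughly (0.006, 0.029)
```

Read the upper bound as the certificate: with 95% confidence, the attack success rate under this sampling distribution stays below about 3%, which is exactly the kind of statement empirical spot-checking cannot make.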
AI's Ethical Evolution: How New Benchmarks Are Redefining Model Behavior
The rapid advancement of AI models has brought about a wave of innovation, but it has also introduced complexities in understanding their ethical dimensions. Recent developments in benchmarking techniques are paving the way for more transparent and reliable evaluations of AI systems, particularly in their ability to navigate moral dilemmas. By focusing on core capabilities like reasoning, domain knowledge, and attention, researchers are creating frameworks that go beyond surface-level performance metrics. These tools not only predict how models will behave in new scenarios but also highlight their strengths and weaknesses, offering a clearer picture of ethical decision-making.

One notable breakthrough is ADeLe (AI Evaluation with Demand Levels), developed by Microsoft in collaboration with Princeton University and Universitat Politècnica de València. This method scores tasks across 18 core abilities, enabling direct comparison between task demands and model capabilities. For instance, while basic arithmetic problems score low on quantitative reasoning, tasks like Olympiad proofs demand a much higher level of analytical skill. By constructing an ability profile for each model, ADeLe reveals where AI systems excel and where they struggle.

The application of such benchmarks extends beyond theory. GroundedPlanBench, another framework, evaluates whether vision-language models (VLMs) can plan actions and determine locations in real-world scenarios. It addresses the ambiguity of natural-language plans by grounding decisions in specific spatial contexts: a task like "tidy up the table" is broken down into explicit actions (grasp, place, open, and close), each tied to a specific location in an image. This method not only improves task success rates but also enhances action accuracy.

Looking ahead, these advances set the stage for a new era of AI evaluation. By isolating core abilities and predicting model behavior across diverse scenarios, researchers can identify gaps in current benchmarks and design better ones. That forward-looking perspective matters as AI models continue to evolve, offering opportunities to refine ethical decision-making and to ensure greater transparency and accountability.

In conclusion, tools like ADeLe and GroundedPlanBench move evaluation beyond surface-level metrics toward the true capabilities of AI systems. As these frameworks mature, they will play a pivotal role in shaping the future of ethical AI, with insights that extend from technical performance into moral reasoning. The road ahead is challenging, but the promise of more transparent and reliable AI systems makes it a journey worth pursuing.
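To make the demand-versus-capability comparison concrete, here is a deliberately simplified sketch. It is not ADeLe’s actual method (which rates tasks on 18 abilities and fits per-ability success curves); the three abilities, the 0-to-5 scale, and the hard thresholding below are illustrative assumptions.

```python
from dataclasses import dataclass

# Illustrative subset of abilities; ADeLe's rubric covers 18.
ABILITIES = ["quantitative_reasoning", "domain_knowledge", "attention"]

@dataclass
class Profile:
    levels: dict[str, float]  # 0-5: demand (for a task) or capability (for a model)

def predicted_success(task: Profile, model: Profile) -> bool:
    """Crude stand-in for ADeLe's fitted curves: predict success iff the
    model meets or exceeds the task's demand on every ability."""
    return all(model.levels[a] >= task.levels[a] for a in ABILITIES)

olympiad_proof = Profile({"quantitative_reasoning": 4.5, "domain_knowledge": 3.0, "attention": 3.5})
basic_arithmetic = Profile({"quantitative_reasoning": 1.0, "domain_knowledge": 0.5, "attention": 1.0})
model = Profile({"quantitative_reasoning": 3.0, "domain_knowledge": 4.0, "attention": 4.0})

print(predicted_success(basic_arithmetic, model))  # True
print(predicted_success(olympiad_proof, model))    # False
```

The payoff of this shape of evaluation is the failure explanation: the second prediction fails specifically on quantitative reasoning, which is the kind of diagnostic a single leaderboard score cannot give.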
The Hidden Cost of AI's Black Box in Search: Why Google Struggles to Trust Its Own Tools
The rise of AI in search engines like Google has been nothing short of transformative. Yet, as we delve deeper into how these systems operate, a troubling truth emerges: the “black box” problem is far more pervasive, and costly, than most users realize. While AI-powered features like AI Overviews and AI Mode promise to enhance the search experience, they are built on top of traditional search infrastructure rather than replacing it. This hybrid approach highlights a critical issue: engineers at Google cannot fully trust their own AI tools because of the opacity of machine learning models.

Nikola Todorovic, Director of Software Engineering at Google Search, revealed in an interview that deploying machine learning broadly across Search is fraught with challenges. These complex models often function as “black boxes,” where even the engineers who build them struggle to understand what happens beneath the surface. That lack of transparency makes debugging difficult, especially when systems change over time or models need to be replaced. SafeSearch, for instance, was one of the first areas where AI could be isolated and tested because it operates outside the main search ranking flow. Even there, issues in the AI models required careful iteration without disrupting the broader system.

The reliance on traditional search fundamentals beneath AI features underscores just how much faith Google still places in older, more predictable systems. While AI Overviews layer summarization and fan-out queries on top of existing retrieval and ranking processes, these tools are not standalone solutions; they depend on the same infrastructure that has been refined over decades. This hybrid approach ensures reliability but also exposes a vulnerability: if the AI models fail or behave unexpectedly, engineers lack the visibility to quickly identify and fix problems.

The tension between innovation and trust is further evident in Google’s decision-making around AI deployment. While the company has embraced AI for specific use cases like SafeSearch, broader adoption remains slow because of these transparency issues. Todorovic emphasized that AI Overviews and AI Mode are still built on top of traditional search systems, not replacing them. This duality, cutting-edge AI resting on long-established infrastructure, creates a fragile balance.

Looking ahead, the challenge for Google will be striking a better balance between innovation and control. As AI becomes more integral to Search, the company must address the opacity issue head-on. One path is developing more interpretable models that give engineers actionable insight into how decisions are made; another is investing in tools for easier debugging and oversight of AI systems, bridging the gap between black-box models and traditional search reliability.

In conclusion, while AI offers immense promise for search, its black-box nature introduces hidden costs that cannot be ignored. Google’s struggles with trust highlight a broader industry issue: rushing to adopt AI without ensuring transparency and control invites unintended consequences. The focus must shift to building AI systems that are not only powerful but also trustworthy, so that engineers, and ultimately users, can rely on them with confidence.
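None of Google’s internal code is public, so the sketch below is only one way to express the layering the article describes: a model-driven fan-out and summarization layer sitting on top of a conventional retrieval-and-ranking stack, with a fallback to plain results when the AI layer misbehaves. The function names and the fallback policy are assumptions for illustration.

```python
from typing import Callable

def answer(query: str,
           fan_out: Callable[[str], list[str]],
           retrieve: Callable[[str], list[str]],
           summarize: Callable[[list[str]], str]) -> dict:
    """Hybrid pipeline: the AI layer adds queries and a summary, but the
    classic retrieval layer always produces the underlying results."""
    sub_queries = [query] + fan_out(query)                # model-generated fan-out
    docs = [d for q in sub_queries for d in retrieve(q)]  # traditional retrieval/ranking
    try:
        overview = summarize(docs)                        # opaque AI layer on top
    except Exception:
        overview = ""                                     # fall back to links only
    return {"overview": overview, "results": docs}

# Toy usage with stand-in components:
res = answer("best hiking boots",
             fan_out=lambda q: [q + " waterproof", q + " reviews"],
             retrieve=lambda q: [f"doc-for:{q}"],
             summarize=lambda docs: f"summary of {len(docs)} sources")
print(res["overview"])  # summary of 3 sources
```

The design choice the article attributes to Google falls out of this structure: even if `summarize` is a black box that fails, `results` still comes from the decades-old, debuggable stack.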
What Nobody Is Saying About Microsoft's Co-Author Feature
Microsoft's new co-authored-by Copilot feature in VS Code has sparked concerns about privacy. The tool accesses data from Microsoft products like Bing and Edge to personalize your interactions with Copilot, but this comes at a cost to user control. The feature, designed to enhance personalization, automatically pulls data from other Microsoft services. This includes browsing history and past interactions. While the intention is to make the AI more helpful by understanding your context, it raises questions about consent and oversight. Some users worry that the constant data collection could lead to unintended consequences. For instance, if Copilot learns too much about you, it might inadvertently share sensitive information or use it in ways not intended by Microsoft's privacy policies. To address these concerns, Microsoft has provided options to disable certain features. However, many users are unaware of these settings, and the default opt-in model may leave them exposed without their knowledge. Moving forward, the key question is whether the benefits of a more personalized AI outweigh the risks to privacy. As Copilot becomes more integrated into our workflows, we must demand transparency and control over how our data is used. Balancing innovation with user autonomy will be crucial for Microsoft's success in this space.
The Hidden Cost of AI Models: Why Their Struggles with Systematic Reasoning Matter More Than You Think
Despite the hype surrounding AI models like Google’s Gemma 4 and Amazon’s customized LLMs, there’s a critical issue that few are discussing: their persistent struggles with systematic reasoning. While these models excel in specific tasks, such as code generation or molecular-property prediction, they fall short when it comes to multi-step planning and long-term decision-making. This limitation isn’t just a technical hitch; it has real-world consequences for industries relying on AI to make complex decisions.

The promise of AI in drug discovery, for instance, is immense. Amazon’s work with Nimbus Therapeutics shows how fine-tuned LLMs can predict molecular properties more efficiently than traditional GNNs. Yet these models still lack the ability to reason through ambiguous scenarios or handle the spatial grounding required for robot tasks. A recent study found that most VLM-based planners fail when faced with long, complex instructions because natural-language plans are ambiguous. This isn’t just a theoretical problem; it means robots and AI systems can’t reliably execute tasks in real-world environments.

The limitations of AI extend beyond technical failures. They reveal a deeper issue: the overreliance on models that prioritize speed over accuracy. Gemma 4, despite its advancements, still struggles with visual tasks like OCR and chart understanding when tested against specialized GNNs. These shortcomings highlight the hidden cost of AI’s rapid development; models are being deployed before they’re truly ready for prime time.

The future of AI isn’t just about raw capability; it’s about building systems that can reason systematically and handle uncertainty. Until we address these fundamental flaws, the full potential of AI will remain out of reach.
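To see what spatial grounding buys, here is a small, hypothetical encoding of a grounded plan in the spirit of the VLM-planner work mentioned above. The step schema, the four-action vocabulary, and the pixel coordinates are illustrative assumptions, not a published benchmark format.

```python
from dataclasses import dataclass
from typing import Literal

Action = Literal["grasp", "place", "open", "close"]

@dataclass
class Step:
    action: Action
    target: str                      # object name, e.g. "mug"
    bbox: tuple[int, int, int, int]  # image region grounding the step

# "Tidy up the table" as explicit, grounded steps instead of free text:
plan = [
    Step("grasp", "mug", (412, 230, 470, 305)),
    Step("open", "cabinet", (80, 120, 260, 400)),
    Step("place", "mug", (120, 160, 200, 240)),
    Step("close", "cabinet", (80, 120, 260, 400)),
]

def validate(plan: list[Step]) -> bool:
    """Reject plans missing an explicit action or grounding: the ambiguity
    that trips up free-form natural-language planners."""
    return all(s.action and s.target and len(s.bbox) == 4 for s in plan)

print(validate(plan))  # True
```

A planner forced to emit this structure cannot hide behind a vague instruction; every step either grounds to a region of the image or fails validation.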