Agentic AI: Autonomy Must Be Balanced with Strong Guardrails

Gavin Jackson, SVP Data & AI at Endava, discusses how Agentic AI is uniquely positioned to solve complex, multi-step problems in various sectors, from optimising smart buildings and personalising patient care to automating trading strategies and bolstering cybersecurity.
How do you define Agentic AI, and how does it fundamentally differ from traditional chatbots or rule-based automation?
However impressive traditional chatbots have become, they still follow scripts or fixed workflows. They respond to commands but rarely understand context.
Agentic AI, by contrast, acts more like a trusted partner than a tool. Its agents proactively seek out information, learn from multiple sources (including other agents), and adapt their behaviour over time to improve decision-making and execution.
Crucially, agentic AI also introduces more robust checks and balances. The results or recommendations these systems generate can be validated and reviewed by different stakeholders or personas to ensure accuracy, compliance and alignment with business objectives. This adds a vital layer of governance and trust to the system, helping organisations adopt AI with greater confidence.
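To make that persona-based review concrete, here is a minimal sketch, assuming a hypothetical pipeline in which a drafting agent's output must be approved by independent compliance and accuracy reviewers before it proceeds. Every name and check below is illustrative, not any specific framework's API.

```python
# Minimal sketch: independent reviewer personas validate an agent's draft
# before it is accepted. All names and checks are invented for illustration.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ReviewOutcome:
    persona: str
    approved: bool
    notes: str

def compliance_review(draft: str) -> ReviewOutcome:
    # Stand-in rule; a real reviewer might call its own model or rule engine.
    flagged = "guarantee" in draft.lower()
    return ReviewOutcome("compliance", not flagged,
                         "absolute claim flagged" if flagged else "ok")

def accuracy_review(draft: str) -> ReviewOutcome:
    cited = "source:" in draft.lower()
    return ReviewOutcome("accuracy", cited,
                         "ok" if cited else "no supporting source cited")

def run_with_governance(draft: str,
                        reviewers: list[Callable[[str], ReviewOutcome]]) -> bool:
    outcomes = [review(draft) for review in reviewers]
    for o in outcomes:
        print(f"[{o.persona}] approved={o.approved} ({o.notes})")
    # The draft only proceeds if every persona signs off.
    return all(o.approved for o in outcomes)

draft = "Recommend rebalancing toward bonds. Source: Q3 portfolio report."
if run_with_governance(draft, [compliance_review, accuracy_review]):
    print("Draft accepted for execution.")
else:
    print("Draft returned to the agent for revision.")
```

Splitting approval across small, single-purpose personas keeps each check simple to audit, which is exactly the governance layer described above.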
This shift is powered by advancements in Gen AI, machine learning and cognitive computing, enabling agents to evolve, collaborate, and optimise workflows with minimal human input. Whether it’s streamlining compliance in finance or managing patient care plans in healthcare, agentic AI delivers dynamic, self-improving solutions that are far more flexible than their predecessors.
Beyond task automation, what complex, multi-step problems is Agentic AI uniquely positioned to solve that current AI models cannot?
Agentic AI stands apart from traditional models by its ability to operate autonomously across multi-step processes, not just automate isolated tasks. It’s not just about execution — it’s about goal-setting, decision-making and adaptive coordination in dynamic environments.
Take smart buildings, for instance. A single agent might adjust lighting based on occupancy, but agentic AI systems can optimise heating, cooling and energy consumption holistically — factoring in weather patterns, user preferences, and energy tariffs in real time. It’s proactive, not reactive.
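A toy version of that holistic optimisation might look like the sketch below: rather than a fixed occupancy rule, the agent scores candidate setpoints against comfort, energy cost and the weather forecast, and re-runs whenever any input changes. The cost model and weights are invented for the example.

```python
# Toy holistic optimiser: one objective weighs comfort, tariff cost and
# forecast weather together. Weights and candidate range are illustrative.
def score(setpoint: float, occupancy: float, forecast_temp: float,
          tariff: float, preferred: float = 21.0) -> float:
    comfort_penalty = occupancy * abs(setpoint - preferred) ** 2
    # Energy use grows with the gap between setpoint and outdoor forecast.
    energy_cost = tariff * abs(setpoint - forecast_temp)
    return comfort_penalty + energy_cost

def choose_setpoint(occupancy: float, forecast_temp: float, tariff: float) -> float:
    candidates = [18.0 + 0.5 * i for i in range(13)]  # 18.0 to 24.0 degrees C
    return min(candidates, key=lambda s: score(s, occupancy, forecast_temp, tariff))

# Re-evaluated continuously as occupancy, forecasts and tariffs change:
print(choose_setpoint(occupancy=0.8, forecast_temp=30.0, tariff=0.40))
```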
In healthcare, agents are already being used to monitor patient vitals, personalise treatment plans, and allocate resources — all while responding to evolving patient data. This multi-agent orchestration can reduce strain on overburdened systems and improve patient outcomes.
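As a simplified sketch of what that orchestration can look like, the example below pairs a monitoring agent that flags abnormal vitals with a planning agent that revises the care plan, always routing escalations back to a clinician. Thresholds and the plan format are invented purely for illustration.

```python
# Two cooperating agents: one monitors vitals, one updates the care plan.
# Thresholds and plan fields are invented; a clinician reviews escalations.
def monitor(vitals: dict) -> list[str]:
    alerts = []
    if vitals["heart_rate"] > 120:
        alerts.append("tachycardia")
    if vitals["spo2"] < 92:
        alerts.append("low oxygen saturation")
    return alerts

def plan(alerts: list[str], care_plan: dict) -> dict:
    # The planning agent revises the plan as new readings arrive.
    if "low oxygen saturation" in alerts:
        care_plan["oxygen_therapy"] = True
        care_plan["clinician_review"] = True  # a human stays in the loop
    return care_plan

care_plan = {"oxygen_therapy": False, "clinician_review": False}
for reading in [{"heart_rate": 88, "spo2": 97}, {"heart_rate": 125, "spo2": 90}]:
    care_plan = plan(monitor(reading), care_plan)
print(care_plan)
```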
In finance, agentic AI enables continuous, context-aware decision-making — automating trading strategies, identifying fraud, and dynamically managing risk across portfolios. Unlike traditional models, agents can adjust strategies as new data comes in, without needing to be re-prompted.
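A minimal sketch of that "adjust without re-prompting" behaviour, assuming a deliberately simplified volatility-targeting rule with invented thresholds:

```python
# A trivial streaming agent that re-sizes exposure from a rolling volatility
# estimate as each new observation arrives. All numbers are illustrative.
from collections import deque
import statistics

class RiskAgent:
    def __init__(self, target_risk: float = 0.02, window: int = 20):
        self.returns = deque(maxlen=window)
        self.target_risk = target_risk

    def observe(self, daily_return: float) -> float:
        """Ingest one observation, return the updated position size."""
        self.returns.append(daily_return)
        if len(self.returns) < 2:
            return 1.0
        vol = statistics.stdev(self.returns)
        # Scale exposure down as realised volatility rises.
        return min(1.0, self.target_risk / vol) if vol > 0 else 1.0

agent = RiskAgent()
for r in [0.001, -0.002, 0.015, -0.030, 0.025]:  # streaming market data
    print(f"return={r:+.3f} -> position size {agent.observe(r):.2f}")
```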
Cybersecurity also benefits. Instead of waiting for a threat signature, agentic systems detect anomalous behaviours, isolate threats, analyse malware, and trigger defensive actions — often before human analysts are even aware of an issue.
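That detect-isolate-respond loop can be sketched in a few lines; the baseline maths and the quarantine call below are placeholders, not any vendor's API.

```python
# Flag behaviour far outside a host's own baseline, then quarantine it
# before an analyst is paged. Baseline maths and actions are placeholders.
import statistics

def is_anomalous(history: list[float], latest: float,
                 z_threshold: float = 3.0) -> bool:
    mean = statistics.mean(history)
    spread = statistics.stdev(history) or 1e-9  # avoid division by zero
    return abs(latest - mean) / spread > z_threshold

def quarantine(host: str) -> None:
    # Placeholder: a real agent would call the EDR or firewall API here.
    print(f"Isolating {host} and snapshotting it for malware analysis...")

baseline = [12.0, 15.0, 11.0, 14.0, 13.0]  # e.g. outbound MB per minute
latest = 480.0                             # exfiltration-like spike
if is_anomalous(baseline, latest):
    quarantine("host-042")
    print("Alert raised for human analysts to review.")
```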
What makes these use cases possible isn’t just smarter algorithms — it’s AI with agency. By navigating complex systems with a sense of intent and adaptability, agentic AI isn’t replacing humans. It’s solving the kind of compound, high-stakes problems that traditional automation simply can’t handle.
How much autonomy should Agentic AI have in decision-making, and where should humans remain in the loop?
The short answer is that it depends on the context. But for critical decision-making in regulated industries such as banking, finance and insurance, the human must remain in the loop.
Agentic AI systems are producing remarkably fast, accurate results, and it’s tempting to trust them implicitly. But in sectors governed by strict compliance and auditability, that trust must be earned and verified. A bank, for instance, must be able to explain what happened to customer data at every stage of a workflow. While large language models offer a compelling interface for insights, they don’t always offer clear traceability. That opacity can quickly erode trust — and when trust is lost, innovation slows and risk aversion takes over.
This is why autonomy must be balanced with strong guardrails. Regulation isn’t an obstacle to innovation; it’s a framework that enables it responsibly. Transparency, traceability, and human oversight should all be built into agentic AI systems from day one.
There are three principles we believe can guide this balance, illustrated in the short sketch after this list:
- Data transparency: Governance frameworks should log, tag, and surface every data transaction. This enables better auditability — even when inputs are imperfect.
- Division of AI labour: Agentic AI performs best when broken into small, task-specific agents that contribute to a broader workflow. This makes the system easier to monitor, test, and explain.
- Dynamic scaling with oversight: Agentic systems can accelerate processes — but escalation points, accountability, and human decision gates must remain embedded.
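A compact sketch tying the three principles together might look like this, with an invented audit format and function names: every step is logged (transparency), the workflow is split into small named agents (division of labour), and a human approval gate sits before the final action (oversight).

```python
# Illustrative only: logged micro-agents plus a human decision gate.
import json
import time

AUDIT_LOG = []

def logged(agent_name: str):
    """Decorator that records every input and output an agent handles."""
    def wrap(fn):
        def inner(payload):
            result = fn(payload)
            AUDIT_LOG.append({"ts": time.time(), "agent": agent_name,
                              "input": str(payload), "output": str(result)})
            return result
        return inner
    return wrap

@logged("extractor")
def extract(doc: str) -> dict:
    return {"amount": 1200, "counterparty": "ACME"}  # stand-in for real parsing

@logged("risk_checker")
def check_risk(record: dict) -> dict:
    record["needs_review"] = record["amount"] > 1000  # escalation point
    return record

def human_gate(record: dict) -> bool:
    # A person remains in the loop for anything the checker escalates.
    if record["needs_review"]:
        return input(f"Approve {record}? [y/n] ").strip().lower() == "y"
    return True

record = check_risk(extract("invoice.pdf"))
if human_gate(record):
    print("Executed. Audit trail:")
    print(json.dumps(AUDIT_LOG, indent=2))
```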
Ultimately, it’s critical for decision-makers to recognise AI as a tool that can augment human capabilities. It’s also worth noting that Agentic AI doesn’t just automate tasks. It can surface a much broader and richer range of information to human decision-makers. That enhanced visibility can dramatically improve decision quality, as humans are empowered with deeper context and more relevant inputs than ever before.
What’s the next frontier for Agentic AI? Will we see AI “agents” collaborating like human teams?
Absolutely. The future of Agentic AI lies in the coordination of multi-agent teams that mirror the way human teams collaborate to solve complex challenges. At Endava, we’ve already begun to realise this vision with the launch of our agentic AI industry accelerator, known internally as ‘Morpheus’.
Morpheus marks a new era in the evolution of AI — one where multiple autonomous agents can work together, drawing on shared context, exchanging information, and adapting dynamically to meet real-world business goals. From healthcare to insurance and financial services to private equity, these intelligent agent teams can be deployed to manage intricate workflows, optimise resource use, and deliver faster, more accurate outcomes.
Our approach puts people at the heart of AI design. We see a future where AI doesn’t replace human teams, but enhances them — supporting better collaboration, faster innovation, and greater agility in tackling complex problems. And as multi-agent systems continue to mature, we’ll see them become integral, trusted collaborators in the enterprise, unlocking value we’ve only begun to imagine.
Will open-source models keep pace with proprietary Agentic AI systems, or will there be a widening gap?
Open-source models are evolving rapidly and will almost certainly play a critical role in the Agentic AI space. In many cases, they’ve already proven their ability to keep pace with, and even out-innovate, proprietary systems, particularly when it comes to transparency, customisability, and cost-efficiency. However, their success depends on how they’re deployed and integrated.
At Endava, we advise against a one-cloud-fits-all approach. In the more mature proprietary space, we see that each major provider brings its own strengths — Google excels in multi-modal AI, Microsoft leads in enterprise productivity, and AWS dominates scalable ML deployment. Proprietary agents like Agentforce or Azure Copilot offer deep integrations, but may come with trade-offs around flexibility and lock-in.
That’s why a strategic, multi-model approach is so important. Businesses should focus on solving specific challenges with the best tools available, regardless of whether they’re open-source or proprietary.