Scaling Agentic AI: Balancing Autonomy and Overcoming Challenges

According to Ravi Shankar, SVP & Chief Marketing Officer at Denodo, unlike conventional chatbots or rule-based systems, Agentic AI models are designed to act proactively, adapt to new situations, and optimize outcomes based on real-time data. This shift enables them to tackle complex, multi-step problems, from self-driving vehicles to personalized healthcare, that current AI models struggle to address.
How do you define Agentic AI, and how does it fundamentally differ from traditional chatbots or rule-based automation?
Agentic AI refers to artificial intelligence systems capable of operating autonomously, making decisions, and performing tasks with minimal human intervention. This fundamentally differs from traditional chatbots or rule-based automation because Agentic AI models, or “AI agents,” act proactively, adapt to new situations, and optimize outcomes based on real-time data.
Unlike its predecessors, which rely on predefined inputs and deliver predefined outputs, Agentic AI exhibits goal-oriented behavior, situational awareness, and self-optimization. These agents learn from new data and experiences, continuously improving their performance through feedback loops and enabling a more dynamic, intelligent form of automation that goes beyond simple task execution.
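The feedback loop described above can be sketched in a few lines. This is a deliberately minimal illustration, not any vendor's implementation: the agent holds a goal, acts, observes feedback, and adjusts its internal state. The class name, learning rate, and toy "environment" are all assumptions for illustration.

```python
# Minimal sketch of the agentic loop: goal-oriented behavior plus
# self-optimization via a feedback loop. Illustrative only.

class SimpleAgent:
    def __init__(self, goal: float, learning_rate: float = 0.5):
        self.goal = goal          # goal-oriented behavior: a target value
        self.estimate = 0.0       # the agent's current belief/action
        self.learning_rate = learning_rate

    def act(self) -> float:
        """Take an action based on the current internal state."""
        return self.estimate

    def observe(self, feedback: float) -> None:
        """Self-optimize: adjust the estimate from real-time feedback."""
        error = self.goal - feedback
        self.estimate += self.learning_rate * error

def run(agent: SimpleAgent, steps: int = 20) -> float:
    for _ in range(steps):
        action = agent.act()
        feedback = action  # in a real system, the environment responds here
        agent.observe(feedback)
    return agent.estimate

agent = SimpleAgent(goal=10.0)
final = run(agent)  # converges toward the goal over repeated feedback
```

The point of the sketch is the loop structure itself: act, observe, adjust. A real agent replaces the scalar estimate with a learned policy and the toy feedback line with an actual environment or data source.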
Beyond task automation, what complex, multi-step problems are agentic AIs uniquely positioned to solve that current AI models cannot?
Beyond basic task automation, Agentic AI is uniquely positioned to solve complex, multi-step problems that current AI models struggle with, thanks to its autonomous and adaptive nature. Examples include self-driving cars and drones that navigate without human control, applications that demand continuous, real-time decision-making in dynamic environments.
In healthcare, Agentic AI can provide personalized treatment recommendations and assist in robot-assisted surgeries, requiring complex analysis and adaptive responses. Algorithmic trading in finance, supply chain optimization, and autonomous cybersecurity systems also exemplify multi-step problems where Agentic AI’s ability to analyze data, make informed choices, and self-optimize in real-time offers solutions beyond the capabilities of traditional, less autonomous AI models.
How much autonomy should Agentic AI have in decision-making, and where should humans remain in the loop?
Agentic AI should have significant autonomy in decision-making to realize its benefits, such as faster decisions and increased efficiency. However, human-AI collaboration remains paramount: AI should enhance decision-making by complementing human expertise. Humans should remain in the loop for ethical oversight, accountability, and continuous monitoring.
Best practices include defining clear objectives, adopting ethical AI practices, monitoring AI decision-making for transparency, and optimizing for explainability. This ensures that while AI agents operate independently, human intervention is possible to prevent bias, protect data privacy, and address ethical and regulatory concerns, maintaining a balance between AI autonomy and human control.
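One common way to keep humans in the loop, sketched below under assumed names and an assumed threshold, is confidence-based escalation: the agent executes autonomously only when its confidence clears a bar, and routes everything else to human review.

```python
# Hedged sketch of human-in-the-loop gating. The threshold value,
# function name, and action strings are illustrative assumptions.

def decide(confidence: float, action: str, threshold: float = 0.9):
    """Return (who_decides, action) based on model confidence."""
    if confidence >= threshold:
        return ("agent", action)           # autonomous execution
    return ("human", f"review: {action}")  # escalate for human oversight

# Usage: high confidence runs autonomously, low confidence escalates.
assert decide(0.95, "approve_refund") == ("agent", "approve_refund")
assert decide(0.60, "approve_refund") == ("human", "review: approve_refund")
```

Logging every decision, whichever branch it takes, is what makes the transparency and explainability practices mentioned above auditable after the fact.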
What infrastructure challenges (such as power, cooling, latency) arise when deploying Agentic AI at scale?
Deploying Agentic AI at scale indeed introduces serious infrastructure challenges, particularly around power, cooling, and latency. Beyond infrastructure, however, Agentic AI also faces considerable data-centric hurdles. According to Gartner, 30% of GenAI projects will fail or be canceled by the end of 2025, primarily because many companies' data is fragmented, outdated, or poorly understood.
In large organizations, data often comes from many different systems, making it hard for AI to get what it needs. When data quality is poor, the AI can make mistakes or even fabricate answers. So, to work well, Agentic AI needs both strong infrastructure and good, reliable data.
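The "good, reliable data" requirement can be enforced with a simple quality gate before records ever reach an agent. The sketch below is illustrative (the field names, freshness window, and record format are assumptions, not any product's schema): stale or incomplete records are rejected rather than fed to the model.

```python
# Illustrative data-quality gate: reject records that are stale or
# incomplete before an agent consumes them, reducing the chance of
# mistaken or fabricated answers. All names are assumptions.

from datetime import datetime, timedelta, timezone

REQUIRED_FIELDS = {"customer_id", "balance", "updated_at"}
MAX_AGE = timedelta(days=30)

def is_usable(record: dict, now: datetime) -> bool:
    if not REQUIRED_FIELDS <= record.keys():
        return False                   # incomplete: required fields missing
    return now - record["updated_at"] <= MAX_AGE  # stale data rejected

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
fresh = {"customer_id": 1, "balance": 42.0,
         "updated_at": datetime(2025, 5, 20, tzinfo=timezone.utc)}
stale = {"customer_id": 2, "balance": 7.0,
         "updated_at": datetime(2024, 1, 1, tzinfo=timezone.utc)}
```

In practice such checks live in the data pipeline or virtualization layer, so every agent downstream inherits the same guarantees.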
What’s the next frontier for Agentic AI—will we see AI “agents” collaborating like human teams?
Multi-agent collaboration is widely identified as a future trend in Agentic AI: we will see AI agents working in teams to solve complex real-world challenges, much as human teams do. Other anticipated developments, such as AI-powered digital employees and autonomous AI research assistants, further support this vision of increasingly sophisticated and collaborative AI entities.
The emphasis on multi-agent systems as a core component of how Agentic AI works also underscores the foundational capability for such collaboration. This suggests a future where AI agents will not only operate autonomously but also interact and cooperate to tackle problems that require collective intelligence and coordinated efforts.
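A toy sketch of such coordination, with roles and message format chosen purely for illustration: specialist agents hand a shared task along a pipeline, each contributing one step, the way a human team divides research, drafting, and review.

```python
# Toy multi-agent pipeline: each "agent" is a specialist that enriches
# a shared task dict and hands it off. Roles are illustrative only.

def researcher(task: dict) -> dict:
    task["findings"] = f"notes on {task['topic']}"
    return task

def writer(task: dict) -> dict:
    task["draft"] = f"Draft based on {task['findings']}"
    return task

def reviewer(task: dict) -> dict:
    task["approved"] = "Draft" in task["draft"]  # trivial review check
    return task

def run_team(topic: str) -> dict:
    task = {"topic": topic}
    for agent in (researcher, writer, reviewer):  # coordinated hand-offs
        task = agent(task)
    return task

result = run_team("supply chain risk")
```

Real multi-agent frameworks replace these functions with LLM-backed agents and add negotiation, retries, and shared memory, but the hand-off structure is the same.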
Will open-source models keep pace with proprietary Agentic AI systems, or will there be a widening gap?
Open-source models have continued to thrive on community ingenuity, especially in areas like Retrieval-Augmented Generation (RAG), which offers dynamic, secure access to enterprise information without costly model retraining. This flexibility levels the playing field by minimizing cost and risk, two barriers that open-source solutions are likely to lower faster than closed systems.
However, enterprise-wide roll-out of RAG still faces obstacles, particularly around data silos, integration complexity, and real-time performance. Proprietary solutions may enjoy temporary advantages due to better support infrastructure, but open-source communities have proven resilient in taking up the slack, often leading the way in transparency, flexibility, and access.
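The RAG pattern mentioned above reduces to two steps: retrieve the most relevant enterprise snippet, then augment the prompt with it instead of retraining the model. The sketch below uses word-overlap scoring as a deliberate simplification (production systems use vector embeddings), and the documents and prompt format are invented for illustration.

```python
# Minimal RAG sketch: retrieve, then augment the prompt. Word-overlap
# scoring stands in for embedding similarity; all data is illustrative.

DOCS = [
    "Refunds are processed within 5 business days.",
    "Enterprise support is available 24/7 via the portal.",
]

def retrieve(query: str, docs: list[str]) -> str:
    """Pick the document sharing the most words with the query."""
    def overlap(doc: str) -> int:
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return max(docs, key=overlap)

def build_prompt(query: str) -> str:
    context = retrieve(query, DOCS)
    return f"Context: {context}\nQuestion: {query}"

prompt = build_prompt("How fast are refunds processed?")
```

Because the model only sees retrieved context at query time, updating the knowledge base is a data operation rather than a retraining run, which is exactly the cost advantage the answer describes.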
Even if proprietary systems move ahead in certain areas, the open-source community tends to catch up, especially as businesses prioritize speed, cost savings, and socially responsible innovation in their AI projects.