From Automation to Autonomy: Reshma Naik on Scaling Agentic AI

In this interview, Reshma Naik, Director, Systems Engineering, Emerging EMEA at Nutanix, breaks down the intricacies of Agentic AI, its unique problem-solving capabilities, and the critical infrastructure considerations for its large-scale deployment. She also shares her vision for the future, including the exciting prospect of AI agents collaborating like human teams.
How do you define Agentic AI, and how does it fundamentally differ from traditional chatbots or rule-based automation?
Agentic AI refers to autonomous systems capable of perceiving their environment, making context-driven decisions, and executing complex tasks with minimal human input. Unlike traditional chatbots or rule-based automation, which operate based on predefined scripts, Agentic AI dynamically adapts, learns from outcomes, and orchestrates multi-step processes. It functions more like a proactive digital collaborator than a reactive tool.
Beyond task automation, what complex, multi-step problems is Agentic AI uniquely positioned to solve that current AI models cannot?
Agentic AI excels in scenarios that require goal-setting, prioritization, and adaptive iteration. In enterprise IT optimization, for example, it can automatically right-size infrastructure across hybrid environments using real-time workload analysis. In supply chain orchestration, it can autonomously manage disruptions by rerouting logistics and reallocating resources. In customer operations, it can monitor support trends, escalate critical issues, and proactively suggest workflow improvements. While traditional AI typically requires human or system-level orchestration, Agentic AI performs this orchestration independently.
How much autonomy should Agentic AI have in decision-making, and where should humans remain in the loop?
The goal is not to replace humans but to enhance decision-making. The level of autonomy should correspond with the risk and criticality of the task. For low-risk, repetitive activities such as patch management or report generation, full autonomy is appropriate. In contrast, high-risk or regulated areas like financial approvals or healthcare decisions should involve AI-generated recommendations that humans ultimately validate or approve. Establishing governance frameworks, auditability, and explainability is essential to maintain trust and accountability.
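The risk-tiered autonomy described above can be sketched in a few lines of code. This is a minimal illustration only, with hypothetical task names and a made-up policy table, not any specific product's governance framework: low-risk tasks execute autonomously, while high-risk tasks always route through a human approver.

```python
from dataclasses import dataclass

# Hypothetical risk policy illustrating the principle above: autonomy
# scales with the risk and criticality of the task.
RISK_POLICY = {
    "patch_management": "autonomous",
    "report_generation": "autonomous",
    "financial_approval": "human_approval",
    "healthcare_decision": "human_approval",
}

@dataclass
class Action:
    task: str
    recommendation: str

def execute(action: Action, human_approve=None) -> str:
    """Gate execution on the task's risk tier.

    Unknown tasks default to the safe, human-reviewed path, and every
    branch returns an explicit status string for auditability.
    """
    policy = RISK_POLICY.get(action.task, "human_approval")
    if policy == "autonomous":
        return f"executed: {action.recommendation}"
    # High-risk: the AI recommends, a human validates before execution.
    if human_approve and human_approve(action):
        return f"executed with approval: {action.recommendation}"
    return f"pending human review: {action.recommendation}"
```

Defaulting unknown tasks to human review is the key design choice here: new task types start in the supervised tier and are promoted to autonomy only after deliberate classification.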
What infrastructure challenges (such as power, cooling, latency) arise when deploying Agentic AI at scale?
Scaling Agentic AI introduces several infrastructure challenges. High-performance computing, especially using GPU-optimized environments, is essential. Nutanix NAI (Nutanix Enterprise AI) addresses this by providing a scalable, GPU-accelerated platform that integrates seamlessly with NVIDIA NIM and leading LLM frameworks. To reduce latency in real-time use cases such as retail or manufacturing, edge-ready architecture is critical.
Nutanix Edge Clusters are designed for this purpose, offering low-latency processing, data sovereignty, and unified management across distributed sites. Optimizing power and cooling through intelligent workload placement across hybrid environments is another key requirement—something platforms like Nutanix NC2 and GPT-in-a-Box are already equipped to manage. Additionally, sustainability goals and compliance requirements, particularly in regions like MENA, necessitate thoughtful infrastructure strategies to ensure data localization and operational efficiency.
What’s the next frontier for Agentic AI? Will we see AI “agents” collaborating like human teams?
Absolutely. We are already seeing early signs of this through multi-agent systems that specialize in distinct domains such as planning, execution, or compliance. These agents collaborate through shared memory and common goals, and they can negotiate roles in real time. The future of Agentic AI will not be defined by a single system replacing a human, but by a network of AI agents working together alongside human teams, each contributing to more rapid and effective outcomes.
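The multi-agent pattern described above can be illustrated with a toy sketch. This is not any particular framework; the roles, memory keys, and round-robin loop are all invented for illustration. Specialised agents (planner, executor, compliance) coordinate purely through a shared memory, each acting when the state it depends on appears.

```python
# Toy multi-agent sketch: agents specialise in distinct roles and
# collaborate through a shared memory dictionary (their common goal state).
class Agent:
    def __init__(self, name: str, role: str):
        self.name, self.role = name, role

    def act(self, memory: dict) -> None:
        """Contribute to the shared memory when this agent's turn is useful."""
        if self.role == "planner" and "plan" not in memory:
            memory["plan"] = ["gather_data", "apply_fix"]
        elif self.role == "executor" and memory.get("plan") and "done" not in memory:
            memory["done"] = list(memory["plan"])  # execute each planned step
        elif self.role == "compliance" and memory.get("done"):
            memory["audit"] = f"verified {len(memory['done'])} steps"

def run_team(agents: list, rounds: int = 3) -> dict:
    """Round-robin scheduling: each round, every agent gets a turn."""
    memory: dict = {}
    for _ in range(rounds):
        for agent in agents:
            agent.act(memory)
    return memory
```

Running a team of one planner, one executor, and one compliance agent fills the shared memory in stages, with the compliance agent signing off only once execution has happened; real systems replace the dictionary with durable shared state and the fixed roles with negotiated ones.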
Will open-source models keep pace with proprietary Agentic AI systems, or will there be a widening gap?
Open-source models will remain highly relevant, particularly in privacy-sensitive or regulated sectors such as government, and in cost-sensitive markets where vendor lock-in poses concerns. However, proprietary platforms are likely to advance faster in areas such as performance tuning, seamless integration, and enterprise-grade support. The key differentiator will be interoperability.
Platforms capable of flexibly integrating both open and proprietary models will lead the way. Nutanix NAI exemplifies this approach—it supports open frameworks like Hugging Face, integrates with NVIDIA NIM, and runs AI workloads across on-premises, public cloud, and edge environments. This allows enterprises to mix and match models, tools, and infrastructure without compromise.