
Autonomous Future: Agentic AI’s Impact and Challenges

Maryna Bautina, Senior AI Consultant at SoftServe, explores the evolving landscape of Agentic AI. She explains the crucial balance between AI autonomy and human oversight, the infrastructure challenges of deploying at scale, including power, cooling, and latency, and the exciting prospect of AI “agents” collaborating like human teams in the future.

How do you define Agentic AI, and how does it fundamentally differ from traditional chatbots or rule-based automation?
Agentic AI is an autonomous software entity that takes an overarching goal or objective, breaks it down into tasks, decides which tools or external services to use, monitors its own progress, and iteratively adjusts its plan until the goal or objective is met. In short, it “owns” the outcome instead of just producing a single reply.

Traditional chatbots merely generate one-off responses to prompts, while rule-based systems follow a set of instructions when something specific happens. Neither can break a task into parts, learn from what’s working and what’s not working, or fix mistakes as it goes. Agentic AI can do all of this because it’s built to keep working, adapting, and thinking over time—capabilities that legacy bots and scripted workflows simply lack.

Agentic AI is like asking a smart travel assistant to book a five-day beach holiday next month within a specific budget; the AI will hunt for deals, reserve flights and hotels, monitor weather and prices, and reshuffle plans automatically if something goes wrong, all while sticking to the goal you set. A regular chatbot would just list options when asked, and a rule-based alert system would ping you if prices dropped, but neither would keep working on the whole trip end to end.
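To make that loop concrete, here is a minimal, illustrative Python sketch of the plan-act-observe cycle described above. The planner, tool picker, and tools are trivial stand-ins invented for this example, not part of any specific framework.

```python
# Illustrative agent loop: plan, act, observe, replan. All helpers are stand-ins.

def plan_tasks(goal):
    # A real agent would ask an LLM to decompose the goal; here it is hard-coded.
    return ["search flights", "search hotels", "check weather", "book trip"]

def pick_tool(task, tools):
    # Choose the first registered tool whose name appears in the task text.
    return next(tool for name, tool in tools.items() if name in task)

def run_agent(goal, tools, max_steps=10):
    plan, history = plan_tasks(goal), []
    for _ in range(max_steps):
        if not plan:                               # goal met once every task is done
            break
        task = plan.pop(0)                         # take the next task
        result = pick_tool(task, tools)(task)      # act through an external tool
        history.append((task, result))             # observe and remember the outcome
        if "failed" in result:                     # crude replanning: retry on failure
            plan.insert(0, task)
    return history

tools = {
    "search": lambda t: f"{t}: 3 options found",
    "check":  lambda t: f"{t}: sunny all week",
    "book":   lambda t: f"{t}: confirmed",
}
print(run_agent("book a 5-day beach holiday under budget", tools))
```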

Beyond task automation, what complex, multi-step problems are agentic AIs uniquely positioned to solve that current AI models cannot?
Agentic AIs excel at complex, long-term problems, especially those that require constant adjustment. A good example is end-to-end drug discovery: the process involves proposing potential compounds, running tests, reviewing lab results, redesigning molecules, and managing paperwork until a viable drug emerges. Real-time supply chain orchestration is another opportunity for agentic AI, spanning forecasting demand, reserving factory time, modifying shipping routes, and adjusting store prices.

Other examples include delivering personalized K-12 tutoring that adapts weekly to a student’s needs, balancing an energy grid as weather shifts, or coordinating disaster relief by reading satellite images, assigning rescue teams, ordering supplies, and continuously updating routes as roads reopen.

Regular chatbots and generative AI can describe or recommend steps in those domains, but only an agentic AI system can autonomously carry them out in sequence, respond to real-world feedback, and keep iterating until the mission is truly finished.

How much autonomy should Agentic AI have in decision-making, and where should humans remain in the loop?
Agentic AI should be granted autonomy in proportion to the risk, reversibility, and judgment involved in each decision.

For routine, low-risk tasks, such as updating a team meeting schedule or reassigning internal IT tickets, full autonomy makes sense; mistakes in these cases are easy to detect and fix. Where decisions carry financial, legal, safety, or ethical weight, such as approving an applicant for a mortgage loan, adjusting payroll across departments, or determining surgical procedures as part of a treatment plan, humans must stay in the loop, reviewing the AI’s suggested plan before execution and monitoring the results in real time, with the ability to step in if needed.

For the most sensitive or high-impact decisions, the AI should act only as an advisor: it can suggest options, but the final decision must rest with a human.
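As an illustration of how that risk-proportional autonomy might be wired into an agent, here is a small, hypothetical Python sketch; the risk tiers, example actions, and approval hook are assumptions made for this example, not a prescribed design.

```python
# Illustrative risk-tiered gate: low-risk actions run autonomously,
# higher-risk actions wait for human review or a human decision.
from enum import Enum

class Risk(Enum):
    LOW = 1      # reversible, low impact: execute autonomously
    MEDIUM = 2   # financial/legal/safety weight: human reviews the plan first
    HIGH = 3     # most sensitive: AI only suggests, a human decides

def execute(action, risk, human_review):
    if risk is Risk.LOW:
        return action()                                   # full autonomy
    if risk is Risk.MEDIUM:
        if human_review(f"Approve plan: {action.__name__}?"):
            return action()                               # run only after sign-off
        return "rejected by reviewer"
    return f"options suggested; awaiting human decision on {action.__name__}"

def reschedule_meeting(): return "meeting moved"
def approve_mortgage():   return "loan approved"

print(execute(reschedule_meeting, Risk.LOW, lambda q: True))
print(execute(approve_mortgage, Risk.MEDIUM, lambda q: False))
print(execute(approve_mortgage, Risk.HIGH, lambda q: True))
```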

What infrastructure challenges (such as power, cooling, latency) arise when deploying Agentic AI at scale?
Deploying agentic AI at scale introduces serious infrastructure challenges, including power consumption, cooling demands, network latency, storage limits, and sustainability compliance. Unlike traditional AI models that run in short bursts, agentic workloads are continuous, stateful, and coordination-heavy, putting pressure on every layer of the data center.

Graphics processing units (GPUs) now run constantly, pulling steady megawatts of power. Just to keep up and avoid utility overage penalties, operators are adding extra power lines, solar panels, and batteries. The chips also generate heat that air cooling alone can no longer manage, making liquid or “immersion” cooling the new standard.

Since these AI systems hold memory over time and are constantly processing and generating new information, they must stay on the same machines, save their progress often, and be backed up regularly across multiple locations. Networks are flooded with a constant stream of messages, so higher-speed connections and local caching are required to keep everything running smoothly. On top of all that, stricter energy and water rules mean compliance with environmental limits is now a must, not a nice-to-have.
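A rough sketch of the “save progress often and back it up elsewhere” requirement might look like the following; the file paths and replication target are placeholders for this illustration, not a reference to any particular storage product.

```python
# Illustrative checkpointing for a long-running, stateful agent: its memory is
# saved after every step so it can resume elsewhere. Paths are placeholders.
import json, shutil, pathlib

CHECKPOINT = pathlib.Path("agent_state.json")
REPLICA    = pathlib.Path("replica/agent_state.json")   # e.g., a second site

def save_state(state):
    CHECKPOINT.write_text(json.dumps(state))    # persist progress locally
    REPLICA.parent.mkdir(exist_ok=True)
    shutil.copy(CHECKPOINT, REPLICA)            # replicate to another location

def load_state():
    if CHECKPOINT.exists():                     # resume where we left off
        return json.loads(CHECKPOINT.read_text())
    return {"step": 0, "history": []}

state = load_state()
for step in range(state["step"], state["step"] + 3):
    state["history"].append(f"completed step {step}")
    state["step"] = step + 1
    save_state(state)                           # checkpoint after every step
```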

What’s the next frontier for Agentic AI—will we see AI “agents” collaborating like human teams?
The next leap is multi-agent orchestration: instead of relying on a single, general-purpose assistant, you spin up a team of specialized AI agents, say a sourcing agent, a pricing agent, and a logistics agent. These agents would message each other through frameworks like Microsoft AutoGen or CrewAI, dividing up the work, weighing trade-offs, and only involving human managers in major decisions.

Early pilot programs already show these “swarms” of AI agents successfully rerouting freight during port closures and renegotiating contracts in minutes. New tools released in 2025, like OpenAI’s Agents Software Development Kit (SDK) and LangGraph, bake in shared memory, role hierarchies, and conflict-resolution protocols, allowing AI teammates to collaborate much like a real project team while keeping humans in the approval loop.
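The pattern can be sketched without any particular framework: each specialist agent hands its output to the next, and only decisions above a threshold are escalated to a human manager. The agents, threshold, and deal data below are invented for illustration and do not represent the AutoGen or CrewAI APIs.

```python
# Dependency-free sketch of multi-agent collaboration: specialist agents pass
# messages to each other and escalate only major decisions to a human manager.

ESCALATION_THRESHOLD = 100_000   # deals above this value need human sign-off

def sourcing_agent(request):
    return {"supplier": "Supplier A", "unit_cost": 12.0, "units": 10_000}

def pricing_agent(offer):
    total = offer["unit_cost"] * offer["units"]
    return {**offer, "total": total, "needs_human": total > ESCALATION_THRESHOLD}

def logistics_agent(deal):
    return {**deal, "route": "sea freight, 18 days"}

def human_manager(deal):
    print(f"Escalated for approval: {deal['total']:.0f} via {deal['route']}")
    return True    # stand-in for a real human approval step

# The "team" at work: each agent consumes the previous agent's message.
deal = logistics_agent(pricing_agent(sourcing_agent("restock beach umbrellas")))
if not deal["needs_human"] or human_manager(deal):
    print("Order placed:", deal)
```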

Will open-source models keep pace with proprietary Agentic AI systems, or will there be a widening gap?
Open-source AI stacks are advancing quickly. For most agentic workloads, models like Meta’s Llama 4, paired with orchestration tools like CrewAI and LangGraph, now deliver near GPT-4-level quality on many tasks at a fraction of the cost. Proprietary models like GPT-4 and Gemini still lead on frontier tasks, offering larger context windows, fresher data, and deeper integration, but that edge comes with closed access and high training costs.

The gap at the cutting edge may grow, but for everyday use, it shrinks. Enterprises needing top-tier accuracy or compliance will pay for proprietary agents. Everyone else will run open models that iterate in public and catch up within a release or two.
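One reason the everyday gap shrinks is that open-weight models can be dropped into the same agent stacks through OpenAI-compatible endpoints. The sketch below assumes a locally hosted open model served at a placeholder URL (for example via vLLM or Ollama); the base URL and model name are illustrative, not specific products or defaults.

```python
# Sketch of pointing an agent stack at an open-weight model instead of a
# proprietary one, via an OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",   # assumed local open-model server
    api_key="not-needed-locally",          # local servers often ignore the key
)

response = client.chat.completions.create(
    model="llama-4",                        # placeholder open-weight model name
    messages=[{"role": "user",
               "content": "Plan a 5-day beach trip under $1500."}],
)
print(response.choices[0].message.content)
```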


Chris Fernando

Chris N. Fernando is an experienced media professional with over two decades of journalistic experience. He is the Editor of Arabian Reseller magazine, the authoritative guide to the regional IT industry. Follow him on Twitter (@chris508) and Instagram (@chris2508).
