Taking a Level-Headed Approach to AI

Ramprakash Ramamoorthy, Director of AI Research at Zoho & ManageEngine, says not every problem requires an LLM; sometimes a smaller machine learning model can solve the problem, with an LLM as a wrapper for natural language interaction.
Let’s start with the trends you foresee in AI this year. How is AI helping companies?
I think this year will be the year of agents. Last year, we were talking a lot about large language models (LLMs), but despite business processes being digital and running on cloud tools, LLMs weren’t able to make a significant impact in the enterprise. The reason is that LLMs are really useful for semi-structured or unstructured information. For example, they can look at a call transcript and generate insights, or analyze images and understand their content. However, enterprises hold a lot of structured information, and LLMs didn’t work very well with it.
Now, agents are becoming the bridge between LLMs and structured information, making it easier for LLMs to integrate into the enterprise. For instance, if you ask an LLM to name the President of the United States, multiply the president’s age by 80, and give you the result, a standalone LLM might not be able to do that accurately, especially if its training data is outdated. But with agents, it can look up the current president and their age, then use a calculator function to perform the multiplication.
In the enterprise, agents can interact with systems like ticketing systems to retrieve real-time data, such as the number of open tickets. This year, we will see a lot of agentic platforms evolve, and enterprises will start building use cases using LLMs on top of agents.
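To make the pattern concrete, here is a minimal sketch of the tool-calling loop described above, written in Python. Everything in it is hypothetical: the tool names, the stubbed data sources, and the hard-coded plan standing in for an LLM’s tool-selection step; a real agent would wire these to a live LLM and to enterprise APIs such as a ticketing system.

```python
# Minimal, hypothetical sketch of the agent pattern: the LLM decides
# which tools to call; the tools supply current facts and exact math
# that a standalone LLM might get wrong or have outdated.

def lookup_president() -> dict:
    # Stand-in for a live lookup; a real agent would call a search or
    # knowledge API here instead of returning a placeholder.
    return {"name": "<current president>", "age": 78}

def get_open_tickets() -> int:
    # Stand-in for a query against a real ticketing system's API.
    return 42

def calculator(a: float, b: float) -> float:
    # Deterministic arithmetic, delegated out of the LLM.
    return a * b

TOOLS = {
    "lookup_president": lookup_president,
    "get_open_tickets": get_open_tickets,
    "calculator": calculator,
}

def run_agent(question: str) -> str:
    # In a real agent, an LLM would read the question and emit this
    # sequence of tool calls; it is hard-coded here to stay runnable.
    president = TOOLS["lookup_president"]()
    product = TOOLS["calculator"](president["age"], 80)
    return (f"{president['name']} is {president['age']} years old; "
            f"their age times 80 is {product}.")

print(run_agent("Who is the US president? Multiply their age by 80."))
print("Open tickets right now:", TOOLS["get_open_tickets"]())
```

The design point is that the model never answers from memory alone: fresh facts and arithmetic come from tools, which is what makes the pattern viable on top of structured enterprise systems.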
So, the AI industry and its technology are changing. How can companies adapt their strategies to these changes?
At ManageEngine, we’ve seen a 50% increase in AI usage on our cloud products from 2023 to 2024. That indicates companies are taking AI seriously: it has moved from marketing copy into sales conversations, and it is being integrated into workflows. However, the basics remain important. It’s crucial to streamline data and processes so that AI can analyze cohesive information across departments. AI doesn’t work well with data silos or fragmented access controls, but privacy is also super important: you can’t train agents on data from other companies, or even across departments within the same company.
My general advice is to focus AI on big problems where 80% accuracy is good enough. Deploying AI in those areas will yield better returns than chasing problems that demand 100% accuracy, where AI often falls short. So, streamline your data and processes, and find the right problems to deploy AI into.
What sort of AI roadmap do you have, and how does it align with the changes in the industry?
We recently announced the Agentic Studio under our umbrella company, Zoho Corp, where we will pre-build a lot of agents and customers can build their own agents and deploy them through a marketplace. We are adding more AI agents to our roadmap and refining our model strategy by right-sizing AI models. Not every problem requires an LLM; sometimes, a smaller machine learning model can solve the problem, with an LLM as a wrapper for natural language interaction.
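As an illustration of that right-sizing idea, here is a minimal sketch (not Zoho’s implementation) in which a small scikit-learn classifier does the actual prediction, and the LLM, stubbed out as a hypothetical `llm_rephrase` function, only wraps the result in natural language:

```python
# Sketch of "right-sizing": a small classical model makes the decision;
# the LLM is only a natural-language wrapper around its output.
# Requires scikit-learn; the LLM call is a hypothetical stub.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set for ticket-priority triage.
tickets = [
    "production server is down",
    "please update my email address",
    "database unreachable, customers affected",
    "requesting a new mouse",
]
labels = ["urgent", "routine", "urgent", "routine"]

# The "right-sized" model: cheap to train, cheap to run.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(tickets, labels)

def llm_rephrase(prediction: str, ticket: str) -> str:
    # Stand-in for an LLM call that turns the model's label into a
    # conversational answer; a template keeps the sketch self-contained.
    return f'Ticket "{ticket}" looks {prediction}; routing it accordingly.'

new_ticket = "payment gateway is throwing errors"
print(llm_rephrase(model.predict([new_ticket])[0], new_ticket))
```

The division of labor is what saves cost: the small classifier handles every request, while the expensive LLM is invoked only at the conversational boundary.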
We’ve deployed open-source LLMs in our cloud so that customer data doesn’t leave the ecosystem. We’re also building our own foundational model, expected to launch later this year. Even with many open-source models available, we want to build our own because we play the long game and want to gain the know-how in this critical technology.
Can you tell us about adaptable AI and the role it plays in dynamic environments?
Traditional machine learning models don’t work well beyond the data they’ve been trained on. However, LLMs exhibit emergent behavior, meaning they can understand and adapt to new scenarios they weren’t explicitly trained on. This was initially seen only in larger models, but now even smaller models, like those with 50 million parameters, are showing this behavior.
This makes AI more adaptable to unexpected data. When new information flows in, the model can adapt to it, which is crucial for companies looking to deploy AI in dynamic environments.
How are you helping companies overcome challenges in deploying AI?
Many modern SaaS companies act as glorified resellers of compute power and AI, passing the cost on to customers. We take a different approach. Over a decade ago, we invested in building our own data centers, and that has become a key differentiator for us. Owning the infrastructure helps us reduce costs and offer better value to customers.
Similarly, we don’t want to be just resellers of AI. We have our homegrown AI stack and continue to invest in it. We’re not chasing the hype; we didn’t rebrand ourselves as an AI company overnight. We strike a balance between what benefits our customers immediately and what will be relevant in the long term. We take a level-headed approach to AI, ensuring we don’t miss the train while keeping the hype in check.