
“Shadow AI” Risks Soar as GenAI Adoption Skyrockets in Enterprises

Enterprise use of generative AI (genAI) platforms surged by 50% in the three months ending May 2025, according to new research released today by Netskope, a leader in modern security and networking. Despite efforts to safely enable SaaS genAI applications and AI agents, the proliferation of “shadow AI” (unsanctioned AI applications used by employees) continues to amplify security risks, with more than half of all current app adoption estimated to be shadow AI.

The findings are detailed in Netskope Threat Labs’ latest Cloud and Threat Report, which examines the evolving landscape of employee adoption of genAI platforms, whether cloud-delivered or on-premises. This trend, coupled with widespread interest in developing AI apps and autonomous agents, presents a new array of cybersecurity challenges for enterprises.

GenAI platforms, serving as foundational infrastructure for building custom AI apps and agents, are identified as the fastest-growing category of shadow AI due to their user-friendliness and flexibility. In the three months leading up to May 2025, the number of users on these platforms increased by 50%.

This surge in popularity accelerates direct connections between enterprise data stores and AI applications, creating new data security risks that underscore the importance of data loss prevention (DLP) and continuous monitoring. Network traffic related to genAI platform usage also jumped by 73% over the preceding three-month period. By May, 41% of organizations were using at least one genAI platform, with Microsoft Azure OpenAI leading at approximately 29%, followed by Amazon Bedrock (22%) and Google Vertex AI (7.2%).

“The rapid growth of shadow AI places the onus on organizations to identify who is creating new AI apps and AI agents using genAI platforms and where they are building and deploying them,” said Ray Canzanese, Director of Netskope Threat Labs. “Security teams don’t want to hamper employee end users’ innovation aspirations, but AI usage is only going to increase. To safeguard this innovation, organizations need to overhaul their AI app controls and evolve their DLP policies to incorporate real-time user coaching elements.”

Organizations are exploring diverse avenues for rapid AI innovation, including deploying genAI locally on on-premises GPU resources and building on-premises tools that interact with SaaS genAI applications or platforms. Large Language Model (LLM) interfaces are an increasingly popular choice, with 34% of organizations currently using them. Ollama is the clear leader in this space with 33% adoption, while others such as LM Studio (0.9%) and Ramalama (0.6%) are only beginning to emerge.
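For readers unfamiliar with these local LLM interfaces, the sketch below shows what interacting with one typically looks like. It is a minimal illustration, assuming an Ollama service running on its default local port (11434) with a model already pulled; the model name and prompt are illustrative choices, not details from the report.

```python
# Minimal sketch: querying a locally hosted model through Ollama's REST API.
# Assumes the Ollama service is running on its default port (11434) and that
# a model (here "llama3", an assumed name) has already been pulled locally.
import json
import urllib.request

def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_local_llm("Summarize our data-handling policy in one sentence."))
```

Because everything stays on the local network, this kind of usage never crosses a cloud gateway, which is part of why on-premises genAI is harder for security teams to see.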

Employee engagement with AI tools and marketplaces is also expanding rapidly, with 67% of organizations seeing users download resources from Hugging Face. The allure of AI agents is a significant driver: a critical mass of users across organizations is now building AI agents and using agentic AI features in SaaS solutions. GitHub Copilot is in use in 39% of organizations, and 5.5% have users running on-premises agents built with popular AI agent frameworks. Furthermore, on-premises agents are retrieving more data from SaaS services by calling a broader range of API endpoints directly rather than going through browsers; two-thirds (66%) of organizations have users making API calls to api.openai.com, and 13% to api.anthropic.com.
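To make the distinction concrete, here is a minimal sketch of the kind of direct, non-browser call to api.openai.com that the report counts. The model name and the OPENAI_API_KEY environment variable are illustrative assumptions; the point is that such traffic bypasses browser-based controls entirely, which is why it matters for shadow AI visibility.

```python
# Minimal sketch of a direct (non-browser) call to api.openai.com -- the kind
# of agent traffic the report measures. The model name and environment
# variable below are illustrative assumptions, not details from the report.
import json
import os
import urllib.request

API_KEY = os.environ["OPENAI_API_KEY"]  # assumed to be set by the user

payload = json.dumps({
    "model": "gpt-4o-mini",
    "messages": [
        {"role": "user", "content": "List yesterday's open support tickets."}
    ],
}).encode("utf-8")

req = urllib.request.Request(
    "https://api.openai.com/v1/chat/completions",
    data=payload,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
)
with urllib.request.urlopen(req) as resp:
    reply = json.loads(resp.read())

print(reply["choices"][0]["message"]["content"])
```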

Netskope is now tracking over 1,550 distinct genAI SaaS applications, a significant leap from just 317 in February, highlighting the rapid pace of new app releases and enterprise adoption. Organizations now use approximately 15 genAI apps each, up from 13 in February, and the monthly volume of data uploaded to genAI apps has risen from 7.7 GB to 8.2 GB quarter over quarter.

Enterprise users are consolidating around purpose-built tools such as Gemini and Copilot, which integrate more tightly into productivity suites and which security teams are increasingly working to enable safely within their businesses. Notably, the general-purpose chatbot ChatGPT has seen its first decline in enterprise popularity since Netskope began tracking it in 2023.

Among the 10 most popular genAI apps in enterprises, only ChatGPT saw a decline since February, while Anthropic Claude, Perplexity AI, Grammarly, and Gamma all gained adoption. Grok has also entered the top 10 most-used applications for the first time, and while it remains on the most-blocked apps list, its block rates are trending downward as more organizations evaluate granular controls and monitoring.

To ensure safe and responsible adoption amid the accelerated usage of various genAI technologies, Netskope recommends that CISOs and other security leaders take the following steps:

  • Assess the genAI landscape: Identify which genAI tools are in use, who is using them, and how they are being leveraged within the organization.
  • Bolster genAI app controls: Establish and enforce policies that only permit the use of company-approved genAI applications, implement robust blocking mechanisms, and deploy real-time user coaching.
  • Inventory local controls: For organizations running local genAI infrastructure, review and apply relevant security frameworks, such as the OWASP Top 10 for Large Language Model Applications, to ensure adequate protection for the data, users, and networks that interact with it.
  • Continuous monitoring and awareness: Implement continuous monitoring of genAI use to detect new shadow AI instances, and stay current on developments in AI ethics, regulatory changes, and adversarial attacks; a minimal log-scanning sketch illustrating this kind of monitoring follows this list.
  • Assess the emerging risks of agentic shadow AI: Identify key adopters of agentic AI and collaborate with them to develop actionable and realistic policies to limit shadow AI.
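As a concrete illustration of the monitoring recommendation above, the following sketch flags outbound requests to known genAI endpoints in a web-proxy log. The CSV log format, the column names, and the domain watchlist are illustrative assumptions; in practice, teams would feed in their own proxy or DNS telemetry and a maintained catalog of genAI applications.

```python
# Minimal sketch of the continuous-monitoring recommendation: flag outbound
# requests to known genAI endpoints in a web-proxy log. The log format
# (CSV with "user" and "host" columns) and the domain watchlist are
# illustrative assumptions, not a definitive implementation.
import csv
from collections import defaultdict

GENAI_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
    "bedrock-runtime.us-east-1.amazonaws.com",
}

def find_shadow_ai(proxy_log_path: str) -> dict[str, set[str]]:
    """Return a mapping of user -> genAI hosts they contacted."""
    hits: dict[str, set[str]] = defaultdict(set)
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["host"].lower()
            # Match the domain itself or any subdomain of it.
            if any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS):
                hits[row["user"]].add(host)
    return hits

if __name__ == "__main__":
    for user, hosts in find_shadow_ai("proxy.csv").items():
        print(f"{user}: {', '.join(sorted(hosts))}")
```

A report like this gives security teams the starting point the recommendations describe: a list of who is using which genAI services, which can then drive policy, coaching, and DLP decisions.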