EnterpriseNewsSecurity

Outpost24 Launches AI Pentesting Amid Rapid Enterprise AI Adoption

As AI models become increasingly capable of autonomously identifying software vulnerabilities, their rapid integration into enterprise systems is outpacing the development of security measures tailored to AI-specific risks. Outpost24 has introduced AI Pentesting, an expert-led adversarial testing service designed to help mid-to-large enterprises identify and address security gaps in AI-powered environments before they can be exploited.

The service builds on the company’s existing penetration testing capabilities, supported by more than five years of CREST-certified expertise and aligned with OffSec’s AI-300 Advanced AI Red Teaming methodology. Its launch coincides with growing regulatory scrutiny, including the EU AI Act entering its implementation phase in 2026 and frameworks such as the NIST AI Risk Management Framework raising expectations around AI security compliance.

The emergence of frameworks like the OWASP Top 10 for LLM Applications has highlighted new categories of vulnerabilities, including prompt injection, data leakage, unsafe outputs and risks within agent-based workflows. Traditional security tools—such as static code analysis, dynamic scanning and API testing—are not designed to assess how large language models behave, reason or interact with external systems and sensitive data, creating a significant security gap for organizations adopting AI at scale.

AI Pentesting addresses this challenge by applying human-led validation techniques adapted to the complexity of AI systems. The approach covers the full AI attack surface, including models, prompts, retrieval-augmented generation pipelines, agent workflows, and connected APIs. The process begins with system mapping, followed by adversarial testing and contextual validation, before concluding with detailed analysis and reporting.

Unlike automated tools that focus on known vulnerabilities, the service evaluates how AI systems respond under real-world adversarial conditions. Findings are prioritised by severity and accompanied by tailored remediation guidance for AI and LLM environments, delivered through a unified platform alongside other security testing services.

“As organizations embed AI systems and LLMs into customer journeys and internal workflows, they create a new attack surface that traditional application testing was not built to measure. Prompt injection, sensitive data exposure, and unsafe agent behavior are only part of the picture. Closing that gap requires adversarial testing that treats AI behavior as part of the security boundary,” said Omri Kletter, Chief Product Officer at Outpost24.

“We are seeing a pattern that should concern every security leader: AI systems deployed with implicit trust in their inputs, minimal access controls between models and internal infrastructure, and zero adversarial testing before production. Twenty years ago, we learned these lessons the hard way with web applications. The difference now is that the barrier to exploitation is dramatically lower, because an LLM can be manipulated through natural language rather than crafted code,” said Martin Jartelius, AI Product Director at Outpost24.

Show More

Chris Fernando

Chris N. Fernando is an experienced media professional with over two decades of journalistic experience. He is the Editor of Arabian Reseller magazine, the authoritative guide to the regional IT industry. Follow him on Twitter (@chris508) and Instagram (@chris2508).

Related Articles

Back to top button