Wallarm Unveils Agentic AI Protection to Secure AI Agents From Attacks
Wallarm, a leading provider of API security, today announced the release of Agentic AI Protection, a breakthrough capability designed to secure AI agents from emerging attack vectors such as prompt injection, jailbreaks, system prompt retrieval, and agent logic abuse. The new feature extends Wallarm's API Security Platform to actively monitor, analyze, and block attacks against AI agents.

AI agents, increasingly integrated into customer service, development workflows, and business automation, bring new capabilities but also introduce new risks. In Wallarm's research, 25% of the security issues reported in agentic AI GitHub repositories remain unfixed, while others take years to resolve. These agents interact via APIs and are susceptible to attacks embedded in seemingly benign user input. Wallarm's Agentic AI Protection inspects both incoming queries and outgoing responses, applying behavioral and semantic analysis to identify suspicious patterns before they can compromise the agents themselves or the systems to which they connect.

"AI agents have quickly become essential to modern digital infrastructure, but their attack surface is poorly understood and rapidly evolving," said Ivan Novikov, CEO and Co-founder of Wallarm. "Agentic AI Protection is our answer to this new security frontier. It provides an always-on defense layer that detects and stops attacks before they impact your business."

Key capabilities of Agentic AI Protection include:

Automated discovery of AI APIs

AI-powered analysis of interactions with AI agents

Detection of multiple attacks, such as prompt injection and jailbreak attempts

Blocking of system prompt leaks and agent manipulation

Native integration with existing Wallarm deployments

Wallarm will showcase Agentic AI Protection at the RSA Conference 2025 in San Francisco, booth S-3125 at the Moscone Center, where attendees can see live demonstrations of the feature protecting AI agents from adversarial input and logic exploitation.