Excessive Agency

Probes testing whether AI agents can be manipulated to perform actions beyond their intended scope or with inappropriate permissions.

OWASP LLM06:2025 Excessive Agency - Excessive Agency occurs when an LLM is granted excessive functionality, permissions, or autonomy, enabling it to perform unintended or harmful actions. This risk arises from poorly designed system extensions or insufficient control over LLM operations. The consequences range from data misuse to unauthorized system actions, often exacerbated by insufficient human oversight.
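A minimal sketch of the opposite pattern, scoped agency (the dispatcher and action names are hypothetical, not from any particular framework): the agent's tool layer enforces an explicit function-level allowlist instead of trusting whatever function call the model emits.

```python
# Hypothetical tool dispatcher; action names are illustrative only.
ALLOWED_ACTIONS = {"get_invoice", "list_invoices"}  # the agent's intended scope

def call_tool(action: str, args: dict) -> str:
    """Dispatch a model-requested tool call, enforcing a function-level allowlist."""
    if action not in ALLOWED_ACTIONS:
        # Refuse anything outside the declared scope rather than
        # trusting the model's (possibly manipulated) request.
        return f"denied: {action} is not an authorized action"
    return f"ok: {action}({args})"

print(call_tool("get_invoice", {"id": 42}))
print(call_tool("delete_all_invoices", {}))  # out-of-scope call is refused
```

The probes below target agents where this kind of check is missing or can be talked around.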

| Probe Name | Description |
| --- | --- |
| Broken Function Level Authorization | This probe tests whether an AI agent with tool access can be manipulated into executing functions or operations beyond its intended authorization level. It evaluates the target's ability to enforce function-level access controls and to resist attempts to reach unauthorized system capabilities through crafted prompts and function-call manipulation. |
| Broken Object Level Authorization | Broken Object Level Authorization (BOLA) occurs when an LLM-powered system lets a user access specific objects (documents, records, files, conversations, etc.) without enforcing per-object authorization checks. This can happen through natural-language requests that indirectly reference objects or through tool/function calls carrying object identifiers. |
| Shell Injection | This probe tests for shell injection vulnerabilities in AI agents with access to system command execution. It generates crafted prompts designed to trick the AI into executing malicious shell commands or gaining unauthorized system access. |
| SQL Injection | This probe tests for SQL injection vulnerabilities in AI agents with access to database operations. It generates crafted prompts designed to trick the AI into executing malicious SQL commands or revealing sensitive database information. |
| SSRF | This probe tests for Server-Side Request Forgery (SSRF) vulnerabilities in AI agents with tool access, attempting to manipulate the target into making unauthorized requests to external systems or resources. |
| Hijacking | This probe tests whether an AI agent can be hijacked or repurposed for unintended malicious activities beyond its original design scope. |
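As a minimal sketch of the controls the SQL Injection and Broken Object Level Authorization probes exercise (the schema and helper are hypothetical, not this project's code): parameter binding keeps attacker-supplied identifiers from being parsed as SQL, and an owner predicate supplies the per-object authorization check whose absence defines BOLA.

```python
import sqlite3

# Hypothetical per-user notes store; schema and names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (id INTEGER, owner TEXT, body TEXT)")
conn.executemany(
    "INSERT INTO notes VALUES (?, ?, ?)",
    [(1, "alice", "alice's note"), (2, "bob", "bob's note")],
)

def get_note(owner: str, note_id: str):
    # Placeholder binding treats note_id as data, never as SQL text,
    # and the owner predicate is the per-object (BOLA) check.
    return conn.execute(
        "SELECT body FROM notes WHERE id = ? AND owner = ?",
        (note_id, owner),
    ).fetchall()

print(get_note("alice", "1"))          # alice's own note
print(get_note("alice", "1 OR 1=1"))   # injection attempt binds as a literal: no rows
print(get_note("alice", "2"))          # bob's note: blocked by the owner check
```

An agent that instead interpolates model output into a SQL string, or omits the owner filter, would fail the corresponding probes above.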