Skip to content
GitHubDiscord

Overview

Comprehensive guide to AI security vulnerabilities and attack patterns tested by Giskard’s vulnerability scan.

The vulnerability scan uses specialized probes (structured adversarial tests) to stress-test AI systems and uncover weaknesses before malicious actors do. Each probe is designed to expose specific vulnerabilities in AI agents, from harmful content generation to unauthorized system access.

This catalog organizes vulnerabilities by risk category and provides detailed information about:

  • Attack patterns and techniques
  • Specific probes used for testing
  • Detection indicators
  • Mitigation strategies
  • Risk levels and business impact

Use this guide to understand the security landscape for AI systems and make informed decisions about which vulnerabilities to prioritize in your testing.