Skip to content
Logo LogoGiskard documentation
Getting Started Hub UI Hub SDK Open Source
⌘ K
Logo LogoGiskard documentation
Getting Started Hub UI Hub SDK Open Source

Getting Started

  • Welcome to Giskard
  • Open Source vs Hub
  • Request your enterprise trial
  • Knowledge Glossary
    • Agent evaluation and testing methodologies
    • Business Failures
      • Addition of Information
      • Business Out of Scope
      • Denial of Answers
      • Hallucination & Misinformation
      • Moderation Issues
      • Omission
    • Security Vulnerabilities
      • Output Formatting Issues
      • Harmful Content Generation
      • Information Disclosure
      • Prompt Injection
      • Robustness Issues
      • Stereotypes & Discrimination
    • LLM Benchmarks
      • Reasoning and Language Understanding Benchmarks
      • Mathematical Reasoning Benchmarks
      • Programming Benchmarks
      • Conversation and Chatbot Benchmarks
      • Safety Benchmarks
      • Domain-Specific Benchmarks
  • Contact us
  • Blogs

Giskard Hub UI

  • Quickstart & setup
  • AI Vulnerability Scan
    • Launch a red teaming scan
    • Review scan results
    • Vulnerability Categories
      • Prompt Injection (OWASP LLM 01)
      • Harmful Content Generation
      • Hallucination & Misinformation (OWASP LLM 08)
      • Excessive Agency (OWASP LLM 06)
      • Data Privacy Exfiltration (OWASP LLM 05)
      • Internal Information Exposure (OWASP LLM 01-07)
      • Training Data Extraction (OWASP LLM 02)
      • Denial of Service (OWASP LLM 10)
      • Brand Damage & Reputation
      • Legal & Financial Risk
      • Misguidance & Unauthorized Advice
  • Create test datasets
    • Manual test creation for fine-grained control
    • Import Existing Datasets
    • Detect security vulnerabilities by generating synthetic tests
    • Detect business failures by generating synthetic tests
  • Review tests with human feedback
  • Run and schedule evaluations
  • Compare evaluation results
  • Continuous red teaming
  • Set access rights

Giskard Hub SDK

  • Quickstart & setup
  • Manage your projects, agents and knowledge bases
  • Create test datasets
    • Detect security vulnerabilities by generating synthetic tests
    • Detect business failures by generating synthetic tests
    • Import existing datasets
  • Manage your checks
  • Run and schedule evaluations
  • API reference
    • Client reference
    • Entities reference
    • Resources reference
  • GitHub

Giskard Open Source

  • Quickstart & setup
  • Detect security vulnerabilities in LLMs using LLM Scan
  • Detect business failures in LLMs using RAGET
  • Tabular ML, Predictive NLP, and Computer Vision
  • Example notebooks
    • RAG Evaluation Toolkit on an IPCC Climate Agent
    • RAG Evaluation Toolkit on a Banking Supervisory Process Agent
    • LLM Question Answering over the IPCC Climate Change Report
    • LLM Question Answering with Langchain, Qdrant and OpenAI
    • LLM Question Answering over the 2022 Winter Olympics Wikipedia articles
    • LLM product description from keywords
    • LLM Newspaper Comments Generation with LangChain and OpenAI
    • LLM Question Answering over the documentation with Langchain, FAISS and OpenAI
  • API reference
    • Models reference
    • Datasets reference
    • LLM scan reference
      • Scan report reference
      • LLM detectors reference
    • LLM tests
    • Test suite reference
    • RAG Evaluation Toolkit reference
      • Testset generation reference
      • Knowledge base reference
      • Evaluation reference
      • Question generation reference
      • Evaluation metrics reference
  • GitHub
Giskard documentation
/
AI Vulnerability Scan
/
Vulnerability Categories
/
Legal & Financial Risk

Legal & Financial Risk

Probes targeting potential legal and financial liabilities.

Liability & Commitment Tests

Probe Name

Description

Liability Engagement

This probe tests whether an AI agent can be influenced to make commitments, agreements, or statements that could engage the company’s legal or financial liability beyond its intended scope of operation

Brand Damage & Reputation
Misguidance & Unauthorized Advice

On this page

  • Liability & Commitment Tests

© 2025, Giskard Built with Sphinx 8.1.3