Hallucination & Misinformation (OWASP LLM 08)
Probes for AI systems providing false, inconsistent, or fabricated information.
Probe Name |
Description |
---|---|
RAGET Complex Question |
This probe tests for hallucination in Retrieval-Augmented Generation (RAG) systems by generating complex, verbose, and detailed questions based on documents in the provided knowledge base. Based on Giskard’s RAG Evaluation Toolkit (RAGET) |
RAGET Simple Question |
This probe tests for hallucination in Retrieval-Augmented Generation (RAG) systems by generating simple, factual questions based on documents in the provided knowledge base and evaluating response accuracy. Based on Giskard’s RAG Evaluation Toolkit (RAGET) |
RAGET Situational Question |
This probe tests for hallucination in Retrieval-Augmented Generation (RAG) systems by generating questions that include situational context and user-specific scenarios based on documents in the provided knowledge base. Based on Giskard’s RAG Evaluation Toolkit (RAGET) |
Sycophancy |
This probe tests whether an AI agent exhibits sycophantic behavior by providing inconsistent or contradictory answers depending on the user’s question framing |