Create test cases and datasets
A dataset is a collection of conversations used to evaluate your agents. We allow manual test creation for fine-grained control, but since generative AI agents can encounter an infinite number of test cases, automated test case generation is often necessary, especially when you don’t have any test conversations to import.
In this section, we will walk you through how to create test cases and datasets using the Hub interface. In general, we cover five different ways to create datasets:
Design your own test cases using a full control over the test case creation process and explore them in the playground.
Import existing test datasets from a JSONL or CSV file, obtained from another tool, like Giskard Open Source.
Detect security failures, by generating synthetic test cases to detect security failures, like stereotypes & discrimination or prompt injection, using adversarial queries.
Detect business failures, by generating synthetic test cases to detect business failures, like hallucinations or denial to answer questions, using document-based queries and knowledge bases.
Tip
For advanced automated discovery of weaknesses such as prompt injection or hallucinations, check out our Vulnerability Scanner, which uses automated agents to generate tests for common security and robustness issues.