Create test cases and datasets

A dataset is a collection of conversations used to evaluate your agents. We allow manual test creation for fine-grained control, but since generative AI agents can encounter an infinite number of test cases, automated test case generation is often necessary, especially when you don’t have any test conversations to import.

In this section, we will walk you through how to create test cases and datasets using the Hub interface. In general, we cover five different ways to create datasets:

Create manual tests

Design your own test cases using a full control over the test case creation process and explore them in the playground.

Create manual tests
Import tests

Import existing test datasets from a JSONL or CSV file, obtained from another tool, like Giskard Open Source.

Import tests
Generate security tests

Detect security failures, by generating synthetic test cases to detect security failures, like stereotypes & discrimination or prompt injection, using adversarial queries.

Generate security tests
Generate business tests

Detect business failures, by generating synthetic test cases to detect business failures, like hallucinations or denial to answer questions, using document-based queries and knowledge bases.

Generate business tests

Tip

For advanced automated discovery of weaknesses such as prompt injection or hallucinations, check out our Vulnerability Scanner, which uses automated agents to generate tests for common security and robustness issues.