Create test cases and datasets

This section will guide you through creating your own test datasets programmatically.

A dataset is a collection of chat test cases (conversations) used to evaluate your agents. We allow manual test creation for fine-grained control, but since generative AI agents can encounter an infinite number of scenarios, automated test case generation is often necessary, especially when you don’t have any chat transcripts to import.

Create manual tests

Create manual test cases using the hub.datasets.create() and hub.chat_test_cases.create() methods.

Create manual tests
Generate security tests

Detect security failures, by generating synthetic test cases to detect security failures, like stereotypes & discrimination or prompt injection, using adversarial queries.

Generate security tests
Generate business tests

Detect business failures, by generating synthetic test cases to detect business failures, like hallucinations or denial to answer questions, using document-based queries and knowledge bases.

Generate business tests
Import tests

Import existing test datasets from a JSONL or CSV file, obtained from another tool, like Giskard Open Source.

Import tests

Tip

For advanced automated discovery of weaknesses such as prompt injection or hallucinations, check out our Vulnerability Scanner, which uses automated agents to generate tests for common security and robustness issues.