Static test inputs cover known cases; dynamic inputs let your scenarios adapt
to what the system actually says. This tutorial shows you how to make both
inputs and outputs context-aware using callables that read from the trace.
The most common reason to switch from a static string to a callable is that you
want the scenario to exercise your real model instead of a pre-written response.
Pass a callable to outputs to call your function at run time:
The lambda receives the current interaction's inputs string and must return
the output value. At run time the framework evaluates it and stores the return
value in the trace, exactly as it would a hard-coded string.
def my_model(user_message: str) -> str:
    # Your chatbot, agent, or any callable
    return f"Echo: {user_message}"
scenario = (
    Scenario("dynamic_output")
    .interact(
        inputs="Tell me your name.",
        outputs=lambda inputs: my_model(inputs),
    )
    .check(
        FnCheck(
            fn=lambda trace: len(trace.last.outputs) > 0,
            name="non_empty_response",
        )
    )
)
result = await scenario.run()
result.print_report()
Output
──────────────────────── ✓ PASSED ────────────────────────
non_empty_response                                    PASS
────────────────────────── Trace ─────────────────────────
────────────────────── Interaction 1 ─────────────────────
Inputs: 'Tell me your name.'
Outputs: 'Echo: Tell me your name.'
────────────────────── 1 step in 1ms ─────────────────────
Next, we'll tackle the second common need: making the input to turn 2 depend on
what the system said in turn 1. Pass a callable to inputs to build the second
turn's input from the first turn's output:
The inputs callable receives the Trace object accumulated so far, so you can
read any previous interaction via trace.interactions[i] or the shorthand
trace.last.
scenario = (
    Scenario("echo_followup")
    .interact(
        inputs="My favourite colour is blue.",
        outputs=lambda inputs: my_model(inputs),
    )
    .interact(
        # inputs callable receives the full trace
        inputs=lambda trace: f"You said: {trace.last.outputs}. Is that right?",
        outputs=lambda inputs: my_model(inputs),
    )
)
With dynamic outputs and dynamic inputs covered separately, you can now combine
both in a single scenario. Here is a two-turn conversation where turn 2's input
depends on turn 1's output, and both turns call a live function:
def chatbot(message: str) -> str:
    responses = {
        "start": "I have opened ticket #42 for you.",
        "default": "Got it. I will look into that.",
    }
    if "start" in message.lower():
        return responses["start"]
    return responses["default"]
scenario = (
    Scenario("ticket_followup")
    # Turn 1: fixed input, live output
    .interact(
        inputs="Please start a new support ticket.",
        outputs=lambda inputs: chatbot(inputs),
    )
    .check(
        FnCheck(
            fn=lambda trace: "#42" in trace.last.outputs,
            name="ticket_id_present",
            success_message="Ticket ID returned",
            failure_message="No ticket ID in response",
        )
    )
    # Turn 2: input built from turn 1's output
    .interact(
        inputs=lambda trace: (
            f"I got '{trace.last.outputs}'. "
            "Can you add a note to that ticket?"
        ),
        outputs=lambda inputs: chatbot(inputs),
    )
)
result = await scenario.run()
result.print_report()
Output
──────────────────────── ✓ PASSED ────────────────────────
ticket_id_present                                     PASS
────────────────────────── Trace ─────────────────────────
────────────────────── Interaction 1 ─────────────────────
Inputs: 'Please start a new support ticket.'
Outputs: 'I have opened ticket #42 for you.'
────────────────────── Interaction 2 ─────────────────────
Inputs: "I got 'I have opened ticket #42 for you.'. Can you add a note to that ticket?"
Outputs: 'Got it. I will look into that.'
────────────────────── 2 steps in 0ms ────────────────────
At run time the framework:

1. Calls chatbot("Please start a new support ticket.") and stores the output
   in the trace.
2. Evaluates the inputs lambda with the current trace; the string it returns
   becomes turn 2's input.
3. Calls chatbot(...) with that input and stores the second output.
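The three steps above can be sketched in plain Python. Note that MiniTrace, Turn, and run_turn below are simplified stand-ins to show the resolution order, not the framework's actual classes:

```python
from dataclasses import dataclass, field

@dataclass
class Turn:
    inputs: str
    outputs: str

@dataclass
class MiniTrace:
    interactions: list = field(default_factory=list)

    @property
    def last(self):
        return self.interactions[-1]

def chatbot(message: str) -> str:
    if "start" in message.lower():
        return "I have opened ticket #42 for you."
    return "Got it. I will look into that."

def run_turn(trace, inputs, outputs):
    # Steps 1-2: a callable input is resolved against the trace so far
    text = inputs(trace) if callable(inputs) else inputs
    # Step 3: a callable output receives the resolved input string
    reply = outputs(text) if callable(outputs) else outputs
    trace.interactions.append(Turn(text, reply))

trace = MiniTrace()
run_turn(trace, "Please start a new support ticket.", chatbot)
run_turn(trace, lambda t: f"I got '{t.last.outputs}'. Can you add a note?", chatbot)
```

The key point is the ordering: turn 1's output is already stored in the trace before turn 2's inputs callable runs, which is why the lambda can safely read trace.last.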
The scenario above runs both turns but does not assert anything about turn 2's
output. Add a .check() after the dynamic turn to validate the context-aware
output:
The check runs after all interactions complete, so trace.last always refers to
the final turn. If you need to assert on an earlier turn, use
trace.interactions[0] to address it directly.
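A check fn targeting an earlier turn is just a lambda that indexes into the trace instead of using trace.last. Sketched here against a minimal stand-in trace (the SimpleNamespace scaffolding is illustrative only, not the framework's trace class):

```python
from types import SimpleNamespace

# Minimal stand-in for a completed two-turn trace
turns = [
    SimpleNamespace(outputs="I have opened ticket #42 for you."),
    SimpleNamespace(outputs="Got it. I will look into that."),
]
trace = SimpleNamespace(interactions=turns, last=turns[-1])

# A check fn that addresses turn 1 explicitly instead of the final turn
first_turn_has_ticket = lambda trace: "#42" in trace.interactions[0].outputs

first_turn_has_ticket(trace)  # True, even though trace.last is turn 2
```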
from giskard.checks import StringMatching

scenario = (
    Scenario("ticket_followup_with_check")
    .interact(
        inputs="Please start a new support ticket.",
        outputs=lambda inputs: chatbot(inputs),
    )
    .interact(
        inputs=lambda trace: (
            f"I got '{trace.last.outputs}'. "
            "Can you add a note to that ticket?"
        ),
        outputs=lambda inputs: chatbot(inputs),
    )
    .check(
        StringMatching(
            name="acknowledgement",
            keyword="Got it",
            text_key="trace.last.outputs",
        )
    )
)
result = await scenario.run()
result.print_report()
Output
──────────────────────── ✓ PASSED ────────────────────────
acknowledgement                                       PASS
────────────────────────── Trace ─────────────────────────
────────────────────── Interaction 1 ─────────────────────
Inputs: 'Please start a new support ticket.'
Outputs: 'I have opened ticket #42 for you.'
────────────────────── Interaction 2 ─────────────────────
Inputs: "I got 'I have opened ticket #42 for you.'. Can you add a note to that ticket?"
Outputs: 'Got it. I will look into that.'
────────────────────── 1 step in 4ms ─────────────────────
So far every inputs value has been either a fixed string or a lambda that
reads the current trace. For more advanced use cases, such as generating many
varied user messages automatically, you can pass an input generator
instead.
An input generator is any object that implements the generator protocol: it
receives the trace and returns the next input string. UserSimulator is the
built-in generator that produces LLM-powered user messages from a persona
description:
from giskard.checks.generators.user import UserSimulator

curious_user = UserSimulator(
    persona="A curious user who asks detailed follow-up questions.",
)

scenario = (
    Scenario("persona_driven")
    .interact(
        inputs=curious_user,
        outputs=lambda inputs: my_model(inputs),
    )
)
Each run of the scenario replaces the fixed input string with a message generated by the persona. This is the foundation for simulating users automatically.
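Because the protocol only requires "receive the trace, return the next input string", any callable object qualifies. A minimal hand-rolled generator might replay a script of messages; the class name ScriptedUser is hypothetical, not part of the library:

```python
class ScriptedUser:
    """A minimal input generator: ignores the trace and replays a script."""

    def __init__(self, messages):
        self.messages = list(messages)
        self._next = 0

    def __call__(self, trace) -> str:
        # Cycle through the scripted messages, one per turn
        message = self.messages[self._next % len(self.messages)]
        self._next += 1
        return message

scripted = ScriptedUser(["Hi!", "What is your name?"])
```

Assuming the framework treats any callable the same way, you would pass inputs=scripted to .interact() exactly as with curious_user above.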
You now know how to build scenarios that adapt to previous outputs. Once you
have a collection of scenarios, see how to group them into a reusable suite: