To get started, you need to provide the LLM that will power the simulator. UserSimulator uses a generator to produce each user turn, so the same model you use for your checks can also drive realistic user behavior. Set a default generator once, or pass one inline.
```python
def support_agent(message: str) -> str:
    """Stub support agent for demonstration."""
    return (
        "I have located your order #98765. It is currently in transit and "
        "will arrive tomorrow. Is there anything else I can help you with?"
    )
```
With the generator configured, we can now define who the simulated user is. The `persona` field acts as a system prompt for the simulator: it describes the user’s role, goal, and stopping condition. The more specific you are, the more deterministic and useful the generated conversation will be.
```python
from giskard.checks.generators.user import UserSimulator

customer = UserSimulator(
    persona="""
    You are a customer trying to track a delayed order.
    - Start by asking about order #98765
    - Provide your name (Alex) when asked
    - Accept any resolution the support agent offers
    - Stop when the agent confirms a solution
    """,
    max_steps=8,
)
```
`max_steps` limits how many turns the simulator will generate before stopping.
Now we’ll wire the simulator into the scenario. Passing the UserSimulator instance as the `inputs` argument tells the scenario to call it on each turn rather than using a fixed string; the scenario handles the loop automatically, up to `max_steps`.
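What "handles the loop automatically" means can be sketched in plain Python. This is a conceptual sketch, not giskard's actual implementation; `user_turn_fn` and `agent_fn` are hypothetical names for the simulator and the agent under test:

```python
def run_interaction(user_turn_fn, agent_fn, max_steps):
    """Drive a simulated user against an agent, one turn at a time."""
    transcript = []
    for _ in range(max_steps):
        user_msg = user_turn_fn(transcript)
        if user_msg is None:  # the simulated user decided to stop
            break
        transcript.append((user_msg, agent_fn(user_msg)))
    return transcript

# Usage: a scripted "user" that asks twice, then stops.
script = iter(["Where is order #98765?", "Thanks, when will it arrive?", None])
transcript = run_interaction(
    user_turn_fn=lambda _history: next(script),
    agent_fn=lambda msg: f"Answering: {msg}",
    max_steps=8,
)
```

The loop ends either when the user function signals it is done or when `max_steps` is exhausted, whichever comes first.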
```python
from giskard.checks import Scenario, FnCheck

scenario = (
    Scenario("order_tracking")
    .interact(
        inputs=customer,
        outputs=lambda inputs: support_agent(inputs),
    )
    .check(
        FnCheck(
            fn=lambda trace: any(
                word in trace.last.outputs.lower()
                for word in ["resolved", "refund", "replacement", "shipped"]
            )
        )
    )
)
```
With the scenario built, run it and iterate over the trace to see the full
conversation the simulator generated. This is especially useful when debugging a
failing check — you can see exactly what the simulated user said at each step.
```python
import asyncio

result = asyncio.run(scenario.run())

# Print every turn
for turn in result.final_trace.interactions:
    print(f"User: {turn.inputs}")
    print(f"Agent: {turn.outputs}")
    print()
```
Output

```
User: Hi — I’m checking on order #98765. It shows delayed; can you tell me the current status and when I can expect it to arrive?
Agent: I have located your order #98765. It is currently in transit and will arrive tomorrow. Is there anything else I can help you with?
```
After the scenario finishes, the simulator writes a UserSimulatorOutput into
the last interaction’s metadata. This tells you whether the user’s stated goal
was achieved — a stronger signal than just checking whether the scenario passed
its checks, because it reflects the simulator’s own evaluation of the
conversation outcome.
```python
from giskard.checks.generators.user import UserSimulatorOutput
```
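One way to pull that verdict out is to scan the last interaction's metadata for a value of the right type. This is a self-contained sketch under stated assumptions: the metadata is assumed to be a dict-like mapping (the exact key is not documented above, so we match by type), and `SimOutput` is a stand-in for the real `UserSimulatorOutput` class:

```python
from dataclasses import dataclass


# Stand-in for UserSimulatorOutput, so this sketch runs on its own.
@dataclass
class SimOutput:
    goal_achieved: bool


def find_simulator_output(metadata, cls=SimOutput):
    """Scan an interaction's metadata for the simulator's verdict.

    Assumes metadata is a dict-like mapping; matches by value type
    rather than by key name.
    """
    return next((v for v in metadata.values() if isinstance(v, cls)), None)


# Usage against a fake metadata mapping:
meta = {"latency_s": 1.2, "sim": SimOutput(goal_achieved=True)}
verdict = find_simulator_output(meta)
```

Matching by type keeps the lookup working even if the metadata key changes between versions.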
With a single persona working, we can now run the same agent against multiple user types to surface persona-specific failures. Each persona exercises a different interaction style, and running them concurrently with `asyncio.gather` means you get results for all three in roughly the time it takes to complete one.
```python
import asyncio

personas = [
    (
        "impatient",
        "You are impatient. Keep messages short. Escalate quickly if not helped.",
    ),
    (
        "detailed",
        "You are thorough. Ask many follow-up questions before accepting any solution.",
    ),
    (
        "confused",
        "You are unsure what you need. Describe symptoms, not the actual problem.",
    ),
]


async def run_all():
    # Build one scenario per persona and run them concurrently.
    scenarios = [
        Scenario(name).interact(
            inputs=UserSimulator(persona=persona, max_steps=8),
            outputs=lambda inputs: support_agent(inputs),
        )
        for name, persona in personas
    ]
    return await asyncio.gather(*(s.run() for s in scenarios))


results = asyncio.run(run_all())
```
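The timing claim, that three concurrent runs take roughly as long as one, can be checked with a self-contained sketch. Here `run_scenario` is a stand-in for `scenario.run()` that just sleeps:

```python
import asyncio
import time


async def run_scenario(name: str, seconds: float) -> str:
    # Stand-in for scenario.run(): each "scenario" just sleeps.
    await asyncio.sleep(seconds)
    return name


async def main():
    start = time.perf_counter()
    results = await asyncio.gather(
        run_scenario("impatient", 0.1),
        run_scenario("detailed", 0.1),
        run_scenario("confused", 0.1),
    )
    elapsed = time.perf_counter() - start
    return results, elapsed


results, elapsed = asyncio.run(main())
# All three finish in roughly 0.1s, not 0.3s, because they run concurrently.
```

`asyncio.gather` also preserves argument order in its result list, so results line up with the persona list regardless of which scenario finishes first.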
By default the trace prints interactions as raw inputs and outputs. You can write a simple formatting function to produce a human-readable transcript, for example to log a simulated conversation or include it in a test failure message. For `Trace` subclasses, Rich rendering, and how that interacts with `print_report()`, see Custom trace types.
```python
def format_transcript(trace) -> str:
    """Format a trace as a human-readable chat transcript."""
    lines = []
    for turn in trace.interactions:
        lines.append(f"User: {turn.inputs}")
        lines.append(f"Agent: {turn.outputs}")
    return "\n".join(lines)


result = await (
    Scenario("chat_trace_demo")
    .interact(
        inputs=customer,
        outputs=lambda inputs: support_agent(inputs),
    )
    .run()
)
print(format_transcript(result.final_trace))
```
Output

```
User: Hi, I’m checking on order #98765. It’s delayed and I haven’t received any updates — can you tell me the current status?
Agent: I have located your order #98765. It is currently in transit and will arrive tomorrow. Is there anything else I can help you with?
```