dreadnode.agents
API reference for the dreadnode.agents module.
Agent abstraction for applying tools, event logic, and message state to LLM generation.
Now extends Executor for consistent streaming/tracing patterns.
Args:
name: The name of the agent.
description: A brief description of the agent.
tags: Tags associated with the agent.
label: An optional label for the agent.
agent_id: The unique identifier for this agent instance.
model: Inference model (generator or identifier).
instructions: The agent's core instructions.
cache: How to handle cache_control entries on inference messages.
tools: Tools the agent can use.
tool_mode: The tool calling mode to use.
stop_conditions: The logical condition for successfully stopping a run.
hooks: Hooks to apply during agent execution.
trajectory: Stateful trajectory for this agent.

backoff_base_factor
backoff_base_factor: float = Config(default=1.0, ge=0)
Base factor for exponential backoff: wait = base_factor * 2 ** (attempt - 1).

backoff_jitter
backoff_jitter: bool = Config(default=True)
Whether to add up to backoff_base_factor seconds of random jitter to each wait.

backoff_max_time
backoff_max_time: float = Config(default=300.0, ge=0)
Maximum total seconds to spend retrying transient LLM API errors per step.

backoff_max_tries
backoff_max_tries: int = Config(default=8, ge=0)
Maximum retries on transient LLM API errors per step. 0 disables retry.

generate_params_extra
generate_params_extra: dict[str, Any] = Config(default_factory=dict)
Extra parameters merged into GenerateParams for every generation (e.g. thinking config).

generation_timeout
generation_timeout: int | None = Config(default=None)
Timeout in seconds for each LLM generation call. None = no timeout.

history
history: list[Message]
Get the conversation history.

max_steps
max_steps: int = Config(default=1000, ge=1)
Maximum number of generation/tool steps before the agent stops.
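The backoff fields above combine into a simple exponential schedule. The sketch below is illustrative only: the function name and the way backoff_max_time truncates the schedule are assumptions, not the library's implementation.

```python
import random


def backoff_schedule(
    base_factor: float = 1.0,
    max_tries: int = 8,
    max_time: float = 300.0,
    jitter: bool = False,
) -> list[float]:
    """Illustrative sketch of the documented wait rule:
    wait = base_factor * 2 ** (attempt - 1), with optional jitter,
    stopping once the cumulative wait would exceed max_time (assumed)."""
    waits: list[float] = []
    total = 0.0
    for attempt in range(1, max_tries + 1):
        wait = base_factor * 2 ** (attempt - 1)
        if jitter:
            # Up to base_factor seconds of random jitter per wait.
            wait += random.uniform(0, base_factor)
        if total + wait > max_time:
            break
        waits.append(wait)
        total += wait
    return waits


print(backoff_schedule(base_factor=1.0, max_tries=5))  # [1.0, 2.0, 4.0, 8.0, 16.0]
```

With the defaults (base_factor=1.0, max_tries=8) the waits grow 1, 2, 4, … up to 128 seconds, which is why backoff_max_time exists as a second, absolute cap.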
reset() -> Trajectory
Reset the agent's internal state.

run(
    goal: str,
    *,
    reset: bool = True,
    trajectory: Trajectory | None = None,
) -> Trajectory
Execute the agent and return the trajectory.
stream
stream(
    goal: str,
    *,
    reset: bool = True,
    trajectory: Trajectory | None = None,
) -> t.AsyncIterator[t.AsyncGenerator[AgentEvent, None]]
Stream agent execution.
Parameters:
goal (str) – Input message for the agent.
reset (bool, default: True) – If True, start a new conversation. If False, continue the existing one. Ignored when trajectory is provided.
trajectory (Trajectory | None, default: None) – External trajectory to operate on. When provided, the agent's internal trajectory is left untouched and all events accumulate on the supplied object instead.
task(*, name: str | None = None) -> Task[[str], Trajectory]
Convert this agent to a Task for use with Evaluation or Study.
The resulting Task takes a goal string and returns a Trajectory. This is the bridge between Agent and the evaluation/optimization systems.
Parameters:
name (str | None, default: None) – Optional name for the task. Defaults to the agent name.
Returns:
Task[[str], Trajectory] – A Task that wraps agent.run().
Example
agent = Agent(name="my_agent", ...)
# Use with Evaluation
evaluation = Evaluation(
    task=agent.as_task(),
    dataset=[{"goal": "..."}],
    scorers=[my_scorer],
)
result = await evaluation.run()

# Use with Study
study = Study(
    task_factory=lambda params: agent.with_(**params).as_task(),
    ...
)

AgentWarning
Warning raised when an agent is used in a way that may not be safe or intended.

ToolMode
ToolMode = Literal[
    "auto", "api", "xml", "json", "json-in-xml", "json-with-tag", "pythonic",
]
How tool calls are handled.
auto: The method is chosen based on support (api with fallback to json-in-xml).
api: Tool calls are delegated to API-provided function calling.
xml: Tool calls are parsed in a nested XML format native to Rigging.
json: Tool calls are parsed as raw name/arg JSON anywhere in assistant message content.
json-in-xml: Tool calls are parsed using JSON for arguments and XML for everything else.
json-with-tag: Tool calls are parsed as name/arg JSON structures inside an identifying XML tag.
pythonic: Tool calls are parsed as Pythonic function call syntax.
ToolSource
ToolSource = Literal["builtin", "python", "mcp", "synthetic", "bundled"]
The origin of a tool. See CAP-IDENT-001 in specs/capabilities/runtime.md.

Tool
Base class for representing a tool to a generator.

catch
catch: bool | Iterable[type[Exception]] = set(DEFAULT_CATCH_EXCEPTIONS)
Whether to catch exceptions and return them as messages.
False: Do not catch exceptions.
True: Catch all exceptions.
set[type[Exception]]: Catch only the specified exceptions.
By default, catches json.JSONDecodeError and ValidationError.
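The catch semantics can be pictured with a small stand-in wrapper; run_with_catch and parse below are hypothetical helpers for illustration, not part of the module.

```python
import json


def run_with_catch(fn, catch, *args, **kwargs):
    """Hypothetical helper showing the three catch settings:
    False -> propagate, True -> catch everything,
    a set of exception types -> catch only those."""
    try:
        return str(fn(*args, **kwargs))
    except Exception as exc:
        caught = catch is True or (
            catch is not False and isinstance(exc, tuple(catch))
        )
        if not caught:
            raise
        # The error becomes message content instead of propagating.
        return f"{type(exc).__name__}: {exc}"


def parse(payload: str) -> dict:
    return json.loads(payload)


# A matching exception type is converted into a string "message".
print(run_with_catch(parse, {json.JSONDecodeError}, "not json"))
```

The real Tool performs the equivalent check when building the tool-response message, so the model sees the failure as text it can react to.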
definition
definition: ToolDefinition
Returns the tool definition for this tool. This is used for API calls and should be used to construct the tool call in the generator.

description
description: str
A description of the tool.

fn
fn: Callable[P, R] = Field(
    default_factory=lambda: lambda *args, **kwargs: None,
    exclude=True,
)
The function to call.

name
name: str
The bare tool name. Canonical; never rewritten after construction. See CAP-IDENT-002.

namespace
namespace: tuple[str, ...] = ()
Structural namespace path. Empty for built-in and bundled tools;
(cap,) for capability Python tools and synthetic agent-link tools;
(cap, server) for MCP tools. See CAP-IDENT-001.
offload
offload: bool = True
Whether large tool outputs should be offloaded to disk.

parameters_schema
parameters_schema: dict[str, Any]
The JSON schema for the tool's parameters.

source
source: ToolSource = 'builtin'
The tool's origin. Paired with namespace to determine wire projection.
See CAP-IDENT-001.

truncate
truncate: int | None = None
If set, the maximum number of characters to truncate any tool output to.

wire_name
wire_name: str
Wire name as emitted to the LLM function-calling API.
Projects structural identity (namespace + name) through the
__ separator rule. Computed fresh on access so post-construction
changes to namespace are respected (see CAP-IDENT-002).
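The projection above amounts to joining the namespace path and the bare name. A minimal sketch, assuming a plain `__` join with no escaping (the function name is illustrative):

```python
def project_wire_name(namespace: tuple[str, ...], name: str) -> str:
    # Sketch of the separator rule described above: structural identity
    # (namespace + name) flattens into a single identifier joined by "__".
    return "__".join((*namespace, name))


# Built-in/bundled tools have an empty namespace, so wire name == bare name.
print(project_wire_name((), "bash"))                      # bash
# Capability Python tool: (cap,)
print(project_wire_name(("recon",), "port_scan"))         # recon__port_scan
# MCP tool: (cap, server)
print(project_wire_name(("recon", "nmap"), "port_scan"))  # recon__nmap__port_scan
```

Because wire_name is computed from namespace on every access, moving a tool into a different namespace changes what the LLM sees without touching the canonical name.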
clone() -> Tool[P, R]
Create a clone of this tool with the same parameters. Useful for creating tools with the same signature but different names.

handle_tool_call
handle_tool_call(tool_call: ToolCall) -> tuple[Message, bool]
Handle an incoming tool call from a generator.
Parameters:
tool_call (ToolCall) – The tool call to handle.
Returns:
tuple[Message, bool] – The message to send back to the generator and a boolean indicating whether tool calling should stop.
with_(
    *,
    name: str | None = None,
    description: str | None = None,
    catch: bool | Iterable[type[Exception]] | None = None,
    truncate: int | None = None,
    offload: bool | None = None,
) -> Tool[P, R]
Create a new tool with updated parameters. Useful for creating tools with the same signature but different names or descriptions.
Parameters:
name (str | None, default: None) – The name of the tool.
description (str | None, default: None) – The description of the tool.
catch (bool | Iterable[type[Exception]] | None, default: None) – Whether to catch exceptions and return them as messages. False: do not catch exceptions. True: catch all exceptions. list[type[Exception]]: catch only the specified exceptions. None: by default, catches json.JSONDecodeError and ValidationError.
truncate (int | None, default: None) – If set, the maximum number of characters to truncate any tool output to.
offload (bool | None, default: None) – Whether large tool outputs should be offloaded to disk.
Returns:
Tool[P, R] – A new tool with the updated parameters.
ToolMethod
ToolMethod(
    fget: Callable[..., Any],
    name: str,
    description: str,
    *,
    catch: bool | Iterable[type[Exception]] | None,
    parameters_schema: dict[str, Any],
    truncate: int | None,
    signature: Signature,
    type_adapter: TypeAdapter[Any],
)
A descriptor that acts as a factory for creating bound Tool instances.
It inherits from property so that pydantic's ModelMetaclass ignores it during field inspection. This prevents validation errors that would otherwise treat the descriptor as a field and stop tool_method decorators from being applied in BaseModel classes.
Toolset
A Pydantic-based class for creating a collection of related, stateful tools.
Inheriting from this class provides:
- Pydantic's declarative syntax for defining state (fields).
- Automatic application of the @configurable decorator.
- A get_tools method for discovering methods decorated with @dreadnode.tool_method.
- Support for async context management, with automatic re-entrancy handling.

name
name: str
The name of the toolset, derived from the class name.

variant
variant: str | None = None
The variant for filtering tools available in this toolset.
offload_tool_output
offload_tool_output(
    content: str, tool_call_id: str, tool_name: str
) -> tuple[str, Path]
Write tool output to disk and return a middle-out summary plus the file path.
Output lands at <cache>/tool-output/<YYYYMMDD-HHMMSS>-<tool_call_id>.txt,
where <cache> is the active Dreadnode instance’s cache directory
(~/.dreadnode by default; honors configure(cache=...)).
tool(
    func: None = None,
    /,
    *,
    name: str | None = None,
    description: str | None = None,
    catch: bool | Iterable[type[Exception]] | None = None,
    truncate: int | None = None,
) -> t.Callable[[t.Callable[P, R]], Tool[P, R]]
tool(func: Callable[P, R]) -> Tool[P, R]
tool(
    func: Callable[P, R] | None = None,
    /,
    *,
    name: str | None = None,
    description: str | None = None,
    catch: bool | Iterable[type[Exception]] | None = None,
    truncate: int | None = None,
) -> t.Callable[[t.Callable[P, R]], Tool[P, R]] | Tool[P, R]
Decorator for creating a Tool, useful for overriding a name or description.
Parameters:
func (Callable[P, R] | None, default: None) – The function to wrap.
name (str | None, default: None) – The name of the tool.
description (str | None, default: None) – The description of the tool.
catch (bool | Iterable[type[Exception]] | None, default: None) – Whether to catch exceptions and return them as messages. False: do not catch exceptions. True: catch all exceptions. list[type[Exception]]: catch only the specified exceptions. None: by default, catches json.JSONDecodeError and ValidationError.
truncate (int | None, default: None) – If set, the maximum number of characters to truncate any tool output to.
Returns:
Callable[[Callable[P, R]], Tool[P, R]] | Tool[P, R] – The decorated Tool object.
Example
@tool(name="add_numbers", description="This is my tool")
def add(x: int, y: int) -> int:
    return x + y

tool_method
tool_method(
    func: None = None,
    /,
    *,
    variants: list[str] | None = None,
    name: str | None = None,
    description: str | None = None,
    catch: bool | Iterable[type[Exception]] | None = None,
    truncate: int | None = None,
) -> t.Callable[[t.Callable[t.Concatenate[t.Any, P], R]], ToolMethod[P, R]]
tool_method(func: Callable[Concatenate[Any, P], R]) -> ToolMethod[P, R]
tool_method(
    func: Callable[Concatenate[Any, P], R] | None = None,
    /,
    *,
    variants: list[str] | None = None,
    name: str | None = None,
    description: str | None = None,
    catch: bool | Iterable[type[Exception]] | None = None,
    truncate: int | None = None,
) -> (
    t.Callable[[t.Callable[t.Concatenate[t.Any, P], R]], ToolMethod[P, R]]
    | ToolMethod[P, R]
)
Marks a method on a Toolset as a tool, adding it to the specified variants.
Use this for any method inside a class that inherits from dreadnode.Toolset to ensure it's discoverable.
Parameters:
variants (list[str] | None, default: None) – A list of variants this tool should be part of. If None, it is added to an "all" variant.
name (str | None, default: None) – Override the tool's name. Defaults to the function name.
description (str | None, default: None) – Override the tool's description. Defaults to the docstring.
catch (bool | Iterable[type[Exception]] | None, default: None) – Whether to catch exceptions and return them as messages. False: do not catch exceptions. True: catch all exceptions. list[type[Exception]]: catch only the specified exceptions. None: by default, catches json.JSONDecodeError and ValidationError.
truncate (int | None, default: None) – The maximum number of characters for the tool's output.

Continue
Continue execution, optionally with feedback to guide the agent.
log_metrics
log_metrics(*, step: int) -> None
Record continuation metrics for tracing and analytics.
log_metrics
log_metrics(*, step: int) -> None
Record retry metrics for tracing and analytics.

Agent-specific stopping hooks.
This module provides hooks that return Finish() to stop agent execution. Each factory function returns a Hook instance that can be passed to Agent(hooks=[…]).
any_tool_use
any_tool_use(*, count: int = 1, name: str | None = None) -> Hook
Stop after any tool has been used a specified number of times.
Parameters:
count (int, default: 1) – The total number of tool uses to trigger stopping.
name (str | None, default: None) – Optional name for the hook.
Returns:
Hook – A Hook that returns Finish after any tools are used the specified number of times.
consecutive_errors
consecutive_errors(count: int, *, name: str | None = None) -> Hook
Stop if there are consecutive tool errors.
Parameters:
count (int) – The number of consecutive errors before stopping.
name (str | None, default: None) – Optional name for the hook.
Returns:
Hook – A Hook that returns Finish after consecutive errors.
elapsed_time
elapsed_time(max_seconds: float, *, name: str | None = None) -> Hook
Stop if the total execution time exceeds a given duration.
Parameters:
max_seconds (float) – The maximum number of seconds the agent is allowed to run.
name (str | None, default: None) – Optional name for the hook.
Returns:
Hook – A Hook that returns Finish when elapsed time exceeds the limit.
estimated_cost
estimated_cost(limit: float, *, name: str | None = None) -> Hook
Stop if the estimated cost of LLM generations exceeds a limit.
Parameters:
limit (float) – The maximum cost allowed (USD).
name (str | None, default: None) – Optional name for the hook.
Returns:
Hook – A Hook that returns Finish when estimated cost exceeds the limit.
generation_count
generation_count(max_generations: int, *, name: str | None = None) -> Hook
Stop after a maximum number of LLM generations (inference calls).
This is slightly more robust than using step_count, as retry calls to the LLM also count toward this limit.
Parameters:
max_generations (int) – The maximum number of LLM generations to allow.
name (str | None, default: None) – Optional name for the hook.
Returns:
Hook – A Hook that returns Finish after the specified number of generations.
no_new_tool_used
no_new_tool_used(for_steps: int, *, name: str | None = None) -> Hook
Stop if the agent goes for a number of consecutive steps without using a new tool.
A “new tool” is one that hasn’t been used in any prior step. This detects stagnation where the agent keeps calling the same tools repeatedly.
Parameters:
for_steps (int) – The number of consecutive steps without a new tool use before the agent should stop.
name (str | None, default: None) – Optional name for the hook.
Returns:
Hook – A Hook that returns Finish when no new tools are used for the specified steps.
no_tool_calls
no_tool_calls(for_steps: int = 1, *, name: str | None = None) -> Hook
Stop if the agent goes for a number of steps without making any tool calls.
Parameters:
for_steps (int, default: 1) – The number of consecutive steps without any tool calls.
name (str | None, default: None) – Optional name for the hook.
Returns:
Hook – A Hook that returns Finish when no tool calls are made for the specified steps.
output
output(
    pattern: str | Pattern[str],
    *,
    case_sensitive: bool = False,
    exact: bool = False,
    regex: bool = False,
    name: str | None = None,
) -> Hook
Stop if a specific string or pattern is mentioned in the last generated message.
Parameters:
pattern (str | Pattern[str]) – The string or compiled regex pattern to search for.
case_sensitive (bool, default: False) – If True, the match is case-sensitive.
exact (bool, default: False) – If True, performs an exact string match instead of containment.
regex (bool, default: False) – If True, treats the pattern string as a regular expression.
name (str | None, default: None) – Optional name for the hook.
Returns:
Hook – A Hook that returns Finish when the pattern is found in the output.
step_count
step_count(max_steps: int, *, name: str | None = None) -> Hook
Stop after a maximum number of agent steps.
Parameters:
max_steps (int) – The maximum number of steps to allow.
name (str | None, default: None) – Optional name for the hook.
Returns:
Hook – A Hook that returns Finish after the specified number of steps.
token_usage
token_usage(
    limit: int,
    *,
    mode: Literal["total", "in", "out"] = "total",
    name: str | None = None,
) -> Hook
Stop if the token usage exceeds a specified limit.
Parameters:
limit (int) – The maximum number of tokens allowed.
mode (Literal['total', 'in', 'out'], default: 'total') – Which token count to consider: "total", "in", or "out".
name (str | None, default: None) – Optional name for the hook.
Returns:
Hook – A Hook that returns Finish when token usage exceeds the limit.
tool_error
tool_error(tool_name: str | None = None, *, name: str | None = None) -> Hook
Stop if any tool call results in an error.
Parameters:
tool_name (str | None, default: None) – If specified, only considers errors from this tool.
name (str | None, default: None) – Optional name for the hook.
Returns:
Hook – A Hook that returns Finish when a tool error occurs.
tool_output
tool_output(
    pattern: str | Pattern[str],
    *,
    tool_name: str | None = None,
    case_sensitive: bool = False,
    exact: bool = False,
    regex: bool = False,
    name: str | None = None,
) -> Hook
Stop if a specific string or pattern is found in the output of a tool call.
Parameters:
pattern (str | Pattern[str]) – The string or compiled regex pattern to search for.
tool_name (str | None, default: None) – If specified, only considers outputs from this tool.
case_sensitive (bool, default: False) – If True, the match is case-sensitive.
exact (bool, default: False) – If True, performs an exact string match instead of containment.
regex (bool, default: False) – If True, treats the pattern string as a regular expression.
name (str | None, default: None) – Optional name for the hook.
Returns:
Hook – A Hook that returns Finish when the pattern is found in tool output.
tool_use
tool_use(tool_name: str, *, count: int = 1, name: str | None = None) -> Hook
Stop after a specific tool has been successfully used.
Parameters:
tool_name (str) – The name of the tool to monitor.
count (int, default: 1) – The number of times the tool must be used to trigger stopping.
name (str | None, default: None) – Optional name for the hook.
Returns:
Hook – A Hook that returns Finish after the tool is used the specified number of times.

Optional agent hooks: tool metrics and conversation summarization.
These hooks are opt-in — users register them explicitly on an Agent via
the hooks= constructor argument. Transient-error backoff is handled inline
by the agent loop (see Agent._try_backoff) and is not a hook.
find_summarization_boundary
find_summarization_boundary(
    messages: list[Message],
    min_messages_to_keep: int = 10,
    max_summarize_chars: int | None = None,
) -> int
Find a clean message boundary for summarization.
Walks messages from the start and enumerates every safe split point that
leaves at least min_messages_to_keep messages in the “keep” portion.
A boundary is safe when both sides of the cut are API-valid chat
sequences — no orphaned tool_calls and no orphaned tool responses.
Two kinds of positions qualify:
- After a simple assistant message (no tool_calls) — the natural end of a complete conversational turn.
- After a complete tool-call group — every tool_call.id from a preceding assistant message has a matching tool response. The cut falls after the last matching tool response, so neither side has a dangling tool call or result.
When max_summarize_chars is provided, returns the largest safe split
whose cumulative len(str(message)) stays within the cap. This keeps
the summarizer call from overflowing the same provider context that
triggered recovery. str(message) is exactly what the summarizer
receives (see Agent._try_overflow_recovery) so the cap and the actual
serialized input measure the same string — including elision of image
URLs (ContentImageUrl.__str__) and tool-call arguments
(ToolCall.__str__).
Returns:
int – Index splitting messages[:boundary] (to summarize) from messages[boundary:] (to keep). Returns 0 when no valid boundary exists.
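The safe-split rule can be sketched over plain role dicts. This simplification omits real Message objects and the max_summarize_chars cap, and find_boundary is an illustrative helper, not the library function:

```python
def find_boundary(messages: list[dict], min_keep: int = 10) -> int:
    """Sketch of the safe-split rule: a cut is valid after an assistant
    message with no tool_calls, or after a tool-call group in which every
    requested id has received its tool response."""
    best = 0
    pending: set[str] = set()
    for i, msg in enumerate(messages):
        if msg["role"] == "assistant":
            pending = set(msg.get("tool_calls", []))
        elif msg["role"] == "tool":
            pending.discard(msg.get("tool_call_id"))
        safe = msg["role"] in ("assistant", "tool") and not pending
        if safe and len(messages) - (i + 1) >= min_keep:
            best = i + 1  # largest safe split that keeps enough messages
    return best


convo = [
    {"role": "user", "content": "scan the host"},
    {"role": "assistant", "content": "ok", "tool_calls": ["c1"]},
    {"role": "tool", "tool_call_id": "c1", "content": "done"},
    {"role": "assistant", "content": "all clear"},
    {"role": "user", "content": "thanks"},
    {"role": "assistant", "content": "anytime"},
]
# The split lands after the simple assistant turn at index 3, never
# between the tool call at index 1 and its response at index 2.
print(find_boundary(convo, min_keep=2))  # 4
```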
summarize_conversation
summarize_conversation(
    generator: str | Generator,
    conversation: str,
    *,
    guidance: str = "",
) -> Summary
Run the summarization prompt against the given generator and return a Summary.
tool_metrics
tool_metrics(*, detailed: bool = False) -> Hook
Creates an agent hook to log metrics about tool usage, execution time, and success rates.
Parameters:
detailed (bool, default: False) – If True, logs metrics for each specific tool in addition to general stats. If False, only logs aggregate statistics across all tools.
Returns:
Hook – A Hook instance that can be registered with an agent.

AgentEnd
Event: The agent’s execution process has finished.
Attributes:
stop_reason (AgentStopReason) – The reason why the agent stopped, if applicable.
error (SerializableException | str | None) – The error that caused the agent to stop, if applicable.
AgentError
Event: An error occurred, functionally halting the agent.
Attributes:
error (SerializableError) – The error that occurred during the agent's execution.
AgentEvent
A log event in the agent's lifecycle.
Attributes:
timestamp (datetime) – The timestamp of when the event occurred (UTC).
agent_id (UUID) – The unique identifier of the agent that generated this event.
agent_name (str | None) – The name of the agent that generated this event.
status (AgentStatus | None) – The status of the agent at the time of this event.
metrics (dict[str, MetricSeries]) – Metrics attached to this event by scoring conditions.
as_dict
as_dict() -> dict[str, t.Any]
Serialize the event for frontend transport.

emit
emit(span: TaskSpan) -> None
Emit this event's telemetry to the span.
Events own their telemetry - this method defines what attributes, metrics, inputs, and outputs each event type logs.
Override in subclasses to add event-specific telemetry.
AgentStalled
Event: The agent is stalled (no tool calls and no stop condition).
AgentStart
Event: The agent's execution process has started.
Attributes:
inputs (dict[str, Any]) – The inputs provided to the agent at the start of execution.
params (dict[str, Any]) – The parameters used to configure the agent at the start of execution.
AgentStep
A discrete unit of work that advances the agent's state.
A Step is an Event that contains messages that will be part of the ongoing chat history.
Additionally, tracks step count, token usage, etc.
Attributes:
generator (Generator | None) – The model or generator used by the agent during this step.
step (int) – The step number in the agent's execution when this event occurred.
messages (list[Message]) – The messages generated or processed during this step.
usage (Usage) – The token usage associated with this step, if applicable.
error (SerializableException | None) – An optional error that occurred during this step's execution.
stop (bool | None) – Indicates if this step signals a stop condition for the agent.
estimated_cost (float | None) – The estimated cost of the agent run based on total token usage and model pricing.
CompactionEvent
Lifecycle event for session compaction (CMP-LIFE-001).
This is a lifecycle signal, not a trajectory step — it extends AgentEvent, not AgentStep, so it does not carry messages or get added to the trajectory.
GenerationContent
Event: The LLM produced content, emitted before tool execution.
This is a TUI rendering signal — it carries the generation text so it can be displayed immediately, before tools run. GenerationEnd/GenerationStep still fire after tools for trajectory, hooks, and telemetry.
Attributes:
step (int) – The step number.
content (str | None) – The generated text content.
tool_calls (list[dict[str, Any]]) – Tool calls requested by the generation.
extra (dict[str, Any]) – Additional metadata (reasoning_content, etc.).
GenerationEnd
Event: The agent has completed a generation step.
Attributes:
generator (Generator | None) – The model or generator used by the agent during this step.
stop_reason (str | None) – Why the generation stopped (end_turn, tool_use, max_tokens, etc.).
GenerationError
Event: An error occurred during a generation step.
Attributes:
generator (Generator | None) – The model or generator used by the agent during this step.
error (SerializableError) – The error that occurred during the generation step.
step (int) – The step number in the agent's execution.
messages (list[Message]) – The conversation messages at the time of failure (for recovery hooks).
GenerationRetry
Lifecycle event: the agent is about to sleep and retry a failed generation.
Emitted by the agent loop when a transient LLM API error (rate limit, etc.)
is recovered in place via Agent._try_backoff. This is a lifecycle signal
only — it does not consume a step or land in the trajectory.
GenerationStart
Event: The agent is starting a generation step.
Attributes:
generator (Generator | None) – The model or generator used by the agent during this step.
step (int) – The step number in the agent's execution.
messages (list[Message]) – The input messages being sent to the model.
GenerationStep
A step representing a call to the generator.
Attributes:
generator (Generator | None) – The model or generator used by the agent during this step.
stop_reason (str | None) – Why the generation stopped (end_turn, tool_use, max_tokens, etc.).
extra (dict[str, Any]) – Additional metadata from the generator/chat.
generation_failed (bool) – Whether the generation failed.
Heartbeat
Event: Keepalive signal emitted during long-running operations.
Used to indicate that the agent is still processing when no other events have been emitted for a period of time. This helps frontends detect whether the stream is still active vs. stalled.
Attributes:
message (str) – Optional status message describing current activity.
ReactStep
A step representing a reaction from a hook.
ReactStep is an AgentStep because reactions can provide feedback to the LLM through messages (e.g., Continue with modified messages, RetryWithFeedback).
Note: The hook dispatch system filters out ReactStep when calling hooks that listen to AgentStep, preventing hooks from reacting to their own reactions.
Attributes:
hook_name (str | None) – The name of the hook that generated this event.
reaction (Reaction | None) – The reaction taken by the hook.
ToolEnd
Event: A tool call has completed.
A non-empty error means the tool ran to completion but reported a failure (e.g. bash non-zero exit, @tool(catch=True) swallowing an exception, or an MCP server returning isError=true). Uncaught exceptions go through ToolError instead.
Attributes:
tool_call (ToolCall) – The tool call that was completed.
result (str | None) – The result returned by the tool, if applicable.
stop (bool) – Whether this tool requested the agent to stop.
error (str | None) – A failure message lifted from message.metadata['error'].
error_type (str | None) – Exception class name when the error was sourced from an ErrorModel carrying that metadata.
ToolError
Event: An error occurred during a tool call.
Attributes:
tool_call (ToolCall) – The tool call that caused the error.
error (SerializableError) – The error that occurred during the tool call.
ToolStart
Event: A tool call is about to be executed.
Attributes:
tool_call (ToolCall) – The tool call that is being started.
ToolStep
A step representing the completion of a tool call by the agent.
Attributes:
tool_call (ToolCall) – The tool call that was completed.
UserInputRequired
Event: The agent needs human input to continue.
Emitted when a tool (like ask_the_user) requests input from the user. The agent execution is suspended until the input is provided.
Attributes:
request_id (str) – Unique identifier for this input request.
question (str) – The question to ask the user.
options (list[str] | None) – Optional list of choices to present to the user.
event_from_dict
event_from_dict(data: dict[str, Any]) -> AgentEvent
Deserialize a dict back to the appropriate AgentEvent subclass.
Uses the ‘_type’ field to determine the correct class.
event_to_dict
event_to_dict(event: AgentEvent) -> dict[str, t.Any]
Serialize an AgentEvent to a JSON-compatible dict for persistence.
Includes a ‘_type’ discriminator for deserialization.

Trajectory
The Trajectory holds the ordered sequence of all events and steps for a single agent run.
agent_id
agent_id: UUID | None = None
The unique identifier for the agent associated with this trajectory.

events
events: list[AgentEvent] = Field(default_factory=list)
The ordered list of events and steps in this trajectory.

messages
messages: list[Message]
Return the conversation history in logical chat order.

session_id
session_id: UUID = Field(default_factory=uuid4)
The unique identifier for this agent session.

steps
steps: list[AgentStep]
Returns only the AgentStep instances from the event history.

system_prompt
system_prompt: str | None = None
The system prompt/instructions used for this trajectory.

usage
usage: Usage
Calculates the total usage from all steps in the trajectory.
add_event
add_event(event: AgentEvent) -> None
Adds a new event or step to the trajectory.
from_dict
from_dict(data: dict[str, Any]) -> Trajectory
Deserialize a trajectory from a dict.
Parameters:
data (dict[str, Any]) – Dict previously created by to_dict().
Returns:
Trajectory – Reconstructed Trajectory instance.
to_dict
to_dict() -> dict[str, t.Any]
Serialize the trajectory to a JSON-compatible dict for persistence.
Returns:
dict[str, Any] – Dict with session_id, agent_id, system_prompt, and serialized events.
trajectories_to_hf_dataset
Section titled “trajectories_to_hf_dataset”trajectories_to_hf_dataset( trajectories: list[dict[str, Any]], format: str = "messages",) -> DatasetConvert trajectories to a Hugging Face Dataset.
Parameters:
trajectories (list[dict[str, Any]]) – List of trajectory dicts
format (str, default: 'messages') – Output format: "messages" (OpenAI), "chat" (TRL), or "turns"
Returns:
Dataset–HF Dataset ready for training
Example
from services.training import load_trajectory_jsonl, trajectories_to_hf_dataset

trajectories = load_trajectory_jsonl("./training.jsonl")
dataset = trajectories_to_hf_dataset(trajectories, format="chat")
dataset.push_to_hub("my-org/agent-trajectories")
trajectory_from_openai_format
trajectory_from_openai_format(
    messages: list[dict[str, Any]],
    message_class: type | None = None,
) -> Trajectory
Create a Trajectory from OpenAI-format messages.
Parameters:
messages (list[dict[str, Any]]) – List of OpenAI-format message dicts
message_class (type | None, default: None) – Optional Message class to use (defaults to importing from dreadnode)
Returns:
Trajectory–Trajectory instance
Example
trajectory = trajectory_from_openai_format([
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi there!"},
])
trajectory_to_jsonl_record
trajectory_to_jsonl_record(
    trajectory: Trajectory,
    system_prompt: str | None = None,
    tools: list[dict] | None = None,
    metadata: dict[str, Any] | None = None,
) -> dict[str, t.Any]
Convert trajectory to a JSONL record for training data export.
This produces a record compatible with NeMo RL, OpenAI fine-tuning, and other frameworks that accept OpenAI-format training data.
Parameters:
trajectory (Trajectory) – The trajectory to convert
system_prompt (str | None, default: None) – Optional system prompt to prepend (uses trajectory.system_prompt if not provided)
tools (list[dict] | None, default: None) – Optional tool definitions used by the agent
metadata (dict[str, Any] | None, default: None) – Optional metadata to include (agent_name, task_type, etc.)
Returns:
dict[str, Any]–Dict ready for JSON serialization
Example
record = trajectory_to_jsonl_record(
    agent.trajectory,
    metadata={"agent_name": "MyAgent", "success": True},
)
with open("training.jsonl", "a") as f:
    f.write(json.dumps(record) + "\n")
trajectory_to_openai_format
trajectory_to_openai_format(trajectory: Trajectory) -> list[dict[str, t.Any]]
Convert a DN Agent Trajectory to OpenAI-compatible message format.
This format is compatible with NeMo RL’s OpenAIFormatDataset.
Parameters:
trajectory(Trajectory) –DN Agent Trajectory object
Returns:
list[dict[str, Any]] – List of OpenAI-format messages with role, content, tool_calls, tool_call_id
MCP (Model Context Protocol) client and server utilities.
Provides:
- MCPClient: Connect to MCP servers (stdio, streamable-http)
- mcp(): Factory function for creating clients
- as_mcp(): Serve tools as an MCP server
- FileTokenStorage: Persistent OAuth token storage
- Server config types aligned with the capability spec
DEFAULT_INIT_TIMEOUT
DEFAULT_INIT_TIMEOUT = 30
Timeout (seconds) for MCP session init + tool discovery.
INITIALIZE_TIMEOUT
INITIALIZE_TIMEOUT = DEFAULT_INIT_TIMEOUT
Deprecated: use DEFAULT_INIT_TIMEOUT.
HttpServerConfig
HttpServerConfig(
    url: str,
    headers: dict[str, str] | None = None,
    oauth: OAuthConfig | None = None,
    timeout: float = DEFAULT_HTTP_TIMEOUT,
    sse_read_timeout: float = DEFAULT_SSE_READ_TIMEOUT,
    init_timeout: float = DEFAULT_INIT_TIMEOUT,
)
Config for remote MCP servers (capability spec: url → streamable-http).
MCPClient
MCPClient(
    transport: Transport | Literal["sse"],
    connection: StdioConnection | SSEConnection | dict[str, Any],
    *,
    oauth: Any = None,
    init_timeout: float = DEFAULT_INIT_TIMEOUT,
    log_path: Path | None = None,
)
A client for communicating with MCP servers.
Supports stdio and streamable-http transports. For streamable-http, SSE is used as an automatic fallback if the server doesn’t support the streamable HTTP protocol.
Can be used as an async context manager or via explicit connect/disconnect:
# Context manager (existing pattern)
async with mcp("stdio", command="uv", args=["run", "server"]) as client:
    agent = Agent(tools=list(client.tools))

# Explicit lifecycle (for managed servers)
client = MCPClient.from_config(StdioServerConfig(command="uv"))
await client.connect()
try:
    ...
finally:
    await client.disconnect()
connection
connection: StdioConnection | SSEConnection | dict[str, Any] = connection
Connection configuration
error: str | None
Error message if status is FAILED or NEEDS_AUTH.
log_path
log_path: Path | None
Path that stderr is tee'd to, or None if capture is in-memory only.
Only populated for stdio transports; HTTP transports don’t spawn a subprocess and have nothing to capture.
recent_stderr
recent_stderr: list[str]
Captured stderr lines from the subprocess, bounded by the ring buffer.
Mirrors SubprocessWorkerRunner.recent_output so the TUI can render the same progressive-disclosure block for MCP servers and workers. Empty for HTTP transports or before connect() runs.
tools: list[Tool[..., Any]] = []
Tools discovered from the server
transport
transport: Transport = transport
The transport type
connect
connect() -> None
Connect to the MCP server and discover tools.
Sets status to CONNECTED on success, FAILED or NEEDS_AUTH on error.
disconnect
disconnect() -> None
Disconnect from the MCP server.
from_config
from_config(config: ServerConfig, *, log_path: Path | None = None) -> MCPClient
Create a client from a typed server config.
The SDK's MCP lifecycle manager passes log_path to tee stderr under ~/.dreadnode/logs/. User-code callers of dreadnode.agents.mcp don't need to supply it.
MCPStatus
Status of an MCP server connection.
OAuthConfig
OAuthConfig(client_name: str = "dreadnode", scope: str | None = None)
OAuth configuration for remote MCP servers.
Supports dynamic client registration via the MCP SDK’s OAuthClientProvider. Pre-registered client credentials (client_id/client_secret) will be added when the OAuth callback server lands (Layer 3).
SSEConnection
Deprecated: Use HttpServerConfig instead.
StdioConnection
Deprecated: Use StdioServerConfig instead.
StdioServerConfig
StdioServerConfig(
    command: str,
    args: list[str] = list(),
    env: dict[str, str] | None = None,
    cwd: str | Path | None = None,
    init_timeout: float = DEFAULT_INIT_TIMEOUT,
)
Config for stdio MCP servers (capability spec: command → stdio).
__getattr__
__getattr__(name: str) -> object
Lazy import for optional components.
as_mcp
as_mcp(*tools: Any, name: str = 'Rigging Tools') -> FastMCP
Serve a collection of tools over the Model Context Protocol (MCP).
Creates a FastMCP server instance that exposes your tools to any compliant MCP client.
Parameters:
tools (Any, default: ()) – Tool objects, raw Python functions, or class instances with @tool_method methods.
name (str, default: 'Rigging Tools') – The name of the MCP server.
Example
from dreadnode import tool
from dreadnode.agents.mcp import as_mcp

@tool
def add_numbers(a: int, b: int) -> int:
    """Adds two numbers together."""
    return a + b

if __name__ == "__main__":
    as_mcp(add_numbers).run(transport="stdio")
mcp
mcp(
    transport: Literal["stdio"],
    *,
    command: str,
    args: list[str] | None = None,
    cwd: str | Path | None = None,
    env: dict[str, str] | None = None,
    init_timeout: float = DEFAULT_INIT_TIMEOUT,
) -> MCPClient
mcp(
    transport: Literal["streamable-http"],
    *,
    url: str,
    headers: dict[str, str] | None = None,
    oauth: Any = None,
    timeout: float = DEFAULT_HTTP_TIMEOUT,
    sse_read_timeout: float = DEFAULT_SSE_READ_TIMEOUT,
    init_timeout: float = DEFAULT_INIT_TIMEOUT,
) -> MCPClient
mcp(
    transport: Literal["sse"],
    *,
    url: str,
    headers: dict[str, str] | None = None,
    timeout: float = DEFAULT_HTTP_TIMEOUT,
    sse_read_timeout: float = DEFAULT_SSE_READ_TIMEOUT,
    init_timeout: float = DEFAULT_INIT_TIMEOUT,
) -> MCPClient
mcp(transport: Transport | Literal["sse"], **kwargs: Any) -> MCPClient
Create an MCP client.
Parameters:
transport(Transport | Literal['sse']) –Transport type — “stdio” or “streamable-http”. “sse” is accepted but deprecated (routes to streamable-http with SSE fallback).
Returns:
MCPClient–An MCPClient instance (use as async context manager or call connect()).
Examples:
# stdio transport
async with mcp("stdio", command="uv", args=["run", "weather-mcp"]) as client:
    agent = Agent(tools=list(client.tools))

# streamable-http transport
async with mcp("streamable-http", url="https://api.example.com/mcp") as client:
    agent = Agent(tools=list(client.tools))

# streamable-http with OAuth
from dreadnode.agents.mcp import OAuthConfig

async with mcp("streamable-http", url="https://...", oauth=OAuthConfig()) as client:
    agent = Agent(tools=list(client.tools))
Skill loader and discovery.
Loads skills from SKILL.md files following the Agent Skills specification: https://agentskills.io/specification
SkillSource
SkillSource = Literal['builtin', 'python', 'bundled']
The origin of a skill. See CAP-IDENT-001 in specs/capabilities/runtime.md.
Skills have fewer variants than tools — there is no MCP-sourced skill or synthetic skill; skills come from SKILL.md files only.
Skill(
    name: str,
    description: str,
    instructions: str,
    allowed_tools: list[str] = list(),
    license: str | None = None,
    compatibility: str | None = None,
    metadata: dict[str, str] = dict(),
    path: Path | None = None,
    source: SkillSource = "builtin",
    namespace: tuple[str, ...] = (),
)
A skill that teaches an agent how to perform a specific task.
Follows the Agent Skills specification exactly: https://agentskills.io/specification
Attributes:
name (str) – Unique skill identifier (lowercase, numbers, hyphens; max 64 chars)
description (str) – What the skill does and when to use it (max 1024 chars)
instructions (str) – Full markdown instructions (body of SKILL.md)
allowed_tools (list[str]) – Tools the skill can use without asking permission
license (str | None) – License name or reference
compatibility (str | None) – Environment requirements
metadata (dict[str, str]) – Arbitrary key-value mapping
path (Path | None) – Path to the SKILL.md file
directory
directory: Path | None
Get the skill directory (parent of SKILL.md).
namespace
namespace: tuple[str, ...] = ()
Structural namespace path. Empty for builtin and bundled skills; (cap,) for capability-sourced skills. See CAP-IDENT-001, CAP-IDENT-009.
qualified_id
qualified_id: str
User-facing qualified identifier for this skill.
Projects structural identity (namespace + name) through the ":" separator rule (CAP-IDENT-009). Builtin and bundled skills render bare because their namespace is empty. There is no length cap: unlike tool wire names, skill identifiers are not constrained by the LLM function-calling regex.
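As a sketch of the projection rule described above: namespace segments are joined to the bare name with the ":" separator, and an empty namespace renders the name bare. This is an illustration of the rule, not the SDK's implementation:

```python
# Qualified-id projection: join namespace segments and the bare name with ":".
# An empty namespace (builtin/bundled skills) renders the bare name unchanged.
def qualified_id(namespace: tuple[str, ...], name: str) -> str:
    return ":".join([*namespace, name])

assert qualified_id((), "web-search") == "web-search"            # builtin/bundled: bare
assert qualified_id(("my-cap",), "web-search") == "my-cap:web-search"
```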
source
source: SkillSource = 'builtin'
The skill's origin. Paired with namespace to determine the qualified id. See CAP-IDENT-001. Stamped at the discovery boundary (see CapabilityRegistry.all_skills).
render_content
render_content() -> str
Render full skill content for loading into a conversation.
Produces the same output as the skill tool: instructions, allowed-tools advisory, base directory, and skill file listing. The <skill_content name> attribute uses the qualified id so the LLM sees the same identifier it invoked the skill with (CAP-IDENT-016).
to_dict
to_dict() -> dict[str, t.Any]
Convert to dictionary for serialization.
to_prompt_xml
to_prompt_xml() -> str
Generate XML for tool description (metadata only).
Emits the qualified identifier in <name> (CAP-IDENT-016) so the agent invokes the skill with the same string it sees.
attach_capability_skills
attach_capability_skills(*, agent: Any, capability: Capability) -> None
Attach capability-local skills to the reconstructed agent, if any.
create_skill_tool
create_skill_tool(skills: list[Skill]) -> t.Any
Create a single skill tool bound to a list of discovered skills.
Follows the OpenCode pattern: one tool with available skills listed in the description. When invoked, returns the full skill content and a listing of supporting files.
Skills are addressed by qualified identifier ({cap}:{name}) in <available_skills> so the LLM always sees a stable, unambiguous handle (CAP-IDENT-016). Invocation accepts either the qualified id or a bare name when that bare name is unambiguous across the effective set (CAP-IDENT-017).
Parameters:
skills (list[Skill]) – List of effective skills to make available. Callers are expected to have already stamped source/namespace on each skill (typically via CapabilityRegistry.all_skills).
Returns:
Any–A single skill tool.
discover_instructions
discover_instructions(directory: Path | None = None) -> str | None
Discover instructions.md in a directory.
Looks for an instructions.md file (with optional YAML frontmatter).
Parameters:
directory(Path | None, default:None) –Directory to search (defaults to cwd)
Returns:
str | None–Instructions string if instructions.md found, None otherwise
discover_skills
discover_skills(directory: Path | None = None) -> list[Skill]
Discover skills in a directory.
Scans the directory for subdirectories containing a SKILL.md file. Each valid skill directory is loaded.
Parameters:
directory(Path | None, default:None) –Directory to scan (defaults to cwd)
Returns:
list[Skill]–List of discovered and loaded skills
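A minimal sketch of the scan this function describes: each subdirectory containing a SKILL.md file is a skill candidate. The helper name below is hypothetical, and unlike the real loader it only collects paths rather than parsing and validating each file:

```python
import tempfile
from pathlib import Path

# Collect SKILL.md paths: one per subdirectory that contains the file.
def find_skill_files(directory: Path) -> list[Path]:
    return sorted(
        child / "SKILL.md"
        for child in directory.iterdir()
        if child.is_dir() and (child / "SKILL.md").is_file()
    )

with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "my-skill").mkdir()
    (root / "my-skill" / "SKILL.md").write_text("---\nname: my-skill\n---\nBody")
    (root / "not-a-skill").mkdir()  # no SKILL.md inside, so it is skipped
    found = find_skill_files(root)
    assert [p.parent.name for p in found] == ["my-skill"]
```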
load_instructions
load_instructions(path: Path) -> str
Load instructions from a file with YAML frontmatter.
The file should have the same format as SKILL.md:
---
name: my-instructions
description: What these instructions do
---

# Instructions

Your instructions here...
Parameters:
path(Path) –Path to the instructions file
Returns:
str–The markdown instructions (body after frontmatter)
Raises:
ValueError–If the file format is invalid
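The core of this operation is splitting the leading "---"-delimited YAML block from the markdown body. A minimal sketch under that assumption (the real loader also parses and validates the YAML, which this does not):

```python
# Split a document into (frontmatter, body). The frontmatter is the text
# between the first two "---" fence lines; the body is everything after.
def split_frontmatter(text: str) -> tuple[str, str]:
    if text.startswith("---\n"):
        _, frontmatter, body = text.split("---\n", 2)
        return frontmatter, body.lstrip("\n")
    return "", text

doc = (
    "---\n"
    "name: my-instructions\n"
    "description: What these instructions do\n"
    "---\n\n# Instructions\n\nYour instructions here..."
)
frontmatter, body = split_frontmatter(doc)
assert "name: my-instructions" in frontmatter
assert body.startswith("# Instructions")
```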
load_skill
load_skill(path: Path, *, validate: bool = True) -> Skill
Load a skill from a SKILL.md file.
The file should have YAML frontmatter followed by markdown content:
---
name: my-skill
description: What it does
allowed-tools: tool1 tool2
license: Apache-2.0
compatibility: Requires git and docker
metadata:
  author: example-org
  version: "1.0"
---

# My Skill

Instructions here...
Parameters:
path (Path) – Path to SKILL.md file
validate (bool, default: True) – Whether to validate name/description constraints (default True)
Returns:
Skill–Loaded Skill object
Raises:
ValueError–If the file format is invalid or validation fails
resolve_skill
resolve_skill(name: str, skills: Sequence[Skill]) -> Skill
Resolve a user-supplied skill identifier against a list of effective skills.
Resolution order (CAP-IDENT-017, CAP-IDENT-018):
- Exact qualified-id match ({cap}:{name}, or bare for builtin/bundled).
- Bare-name match if exactly one skill has that bare name.
- Error if bare input is ambiguous; surface qualified candidates.
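The resolution order above can be sketched over (qualified_id, bare_name) pairs. This is a hypothetical shape for illustration; the real function operates on Skill objects:

```python
def resolve(name: str, skills: list[tuple[str, str]]) -> str:
    # 1. Exact qualified-id match wins outright.
    for qid, _ in skills:
        if qid == name:
            return qid
    # 2. A bare name resolves only if exactly one skill carries it.
    matches = [qid for qid, bare in skills if bare == name]
    if len(matches) == 1:
        return matches[0]
    # 3. Ambiguous bare input: surface the qualified candidates.
    if matches:
        raise ValueError(f"ambiguous skill {name!r}; candidates: {matches}")
    raise ValueError(f"skill not found: {name!r}")

skills = [("cap-a:report", "report"), ("cap-b:report", "report"), ("notes", "notes")]
assert resolve("cap-a:report", skills) == "cap-a:report"  # qualified match
assert resolve("notes", skills) == "notes"                # unambiguous bare name
```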
Raises:
ValueError – skill not found, or bare input is ambiguous.
Sub-agent spawning tools for complex task delegation.
Similar to Claude Code’s Task tool, this allows spawning specialized agents to handle specific subtasks autonomously.
SubAgentToolset
Toolset for spawning and managing sub-agents.
Requires a parent agent to clone from.
parent_agent
parent_agent: Any = Config(default=None)
The parent agent to clone sub-agents from.
run_in_background
run_in_background: bool = Config(default=False)
Whether to run sub-agents in background (not yet implemented).
spawn_agent
spawn_agent(
    task: Annotated[str, "The task for the sub-agent to complete"],
    agent_type: Annotated[
        str,
        "Agent type: 'explore' (find code), 'plan' (design approach), 'test' (run tests), 'review' (code review), 'general' (any task)",
    ] = "general",
    *,
    custom_instructions: Annotated[
        str | None, "Optional custom instructions to override defaults"
    ] = None,
) -> str
Spawn a sub-agent to handle a specific task autonomously.
Use this to delegate complex subtasks to specialized agents:
- ‘explore’: Search and understand code
- ‘plan’: Design implementation approach
- ‘test’: Run and verify tests
- ‘review’: Review code for issues
- ‘general’: Any other task
The sub-agent runs to completion and returns its findings.
When to Use
- Complex tasks requiring focused work
- Exploration that might take many steps
- Tasks where you want isolated context
- Parallel work (with run_in_background)
Examples
Explore codebase:
spawn_agent("Find all API endpoint definitions", agent_type="explore")
Plan implementation:
spawn_agent("Plan how to add user authentication", agent_type="plan")
Parameters:
task (Annotated[str, 'The task for the sub-agent to complete']) – What the sub-agent should accomplish.
agent_type (Annotated[str, "Agent type: 'explore' (find code), 'plan' (design approach), 'test' (run tests), 'review' (code review), 'general' (any task)"], default: 'general') – Type of agent to spawn.
custom_instructions (Annotated[str | None, 'Optional custom instructions to override defaults'], default: None) – Override default instructions.
Returns:
str–The sub-agent’s final response and summary.
create_subagent_tool
create_subagent_tool(parent_agent: Agent) -> SubAgentToolset
Create a SubAgentToolset bound to a parent agent.
Usage
agent = Agent(...)
subagent_tools = create_subagent_tool(agent)
agent.tools.append(subagent_tools)