Chat models

Curate the inference models that appear in your assistant picker and manage the provider-key dependencies that gate them.

Chat models is your account-scoped shortlist of inference models — the set that shows up in the assistant picker, the TUI model switcher, and any other surface that asks “which model do you want to run this on?”. Add or remove IDs, track which ones have the provider keys they need, and fall back to Secrets when something’s missing.

Settings → Chat Models
┃ Model ID ┃ Provider ┃ Status ┃
┇ dn/claude-opus-4-6 ┇ Dreadnode ┇ ✓ Ready ┇
┇ openai/gpt-4.1-mini ┇ OpenAI ┇ ✓ Ready ┇
┇ anthropic/claude-opus-4-6 ┇ Anthropic ┇ ⚠ Needs ANTHROPIC_API_KEY ┇

The durable state is a list of enabled_model_ids. Every surface that picks a model consults this list:

  • The web assistant picker only shows enabled IDs.
  • The TUI’s Ctrl+K picker groups enabled IDs first; /models can search the broader catalog for one-offs.
  • Evaluations and runtime launches validate the --model flag against the set when the server is SaaS-gated.

If enabled_model_ids is empty, Dreadnode treats that as all available models enabled.
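
The resolution rule above can be sketched in a few lines. This is an illustrative reconstruction, not Dreadnode's actual code; the function name and catalog values are hypothetical.

```python
# Sketch of how a picker surface might resolve the effective model list:
# an empty enabled_model_ids is treated as "all available models enabled".
def effective_models(enabled_model_ids, catalog):
    if not enabled_model_ids:  # empty list -> everything in the catalog
        return list(catalog)
    # otherwise, only enabled IDs appear, in catalog order
    return [m for m in catalog if m in enabled_model_ids]

catalog = ["dn/claude-opus-4-6", "openai/gpt-4.1-mini", "anthropic/claude-opus-4-6"]
print(effective_models([], catalog))                       # all three models
print(effective_models(["openai/gpt-4.1-mini"], catalog))  # just the shortlist
```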

| Namespace | Where it runs | What you need |
| --- | --- | --- |
| dn/<model> | Dreadnode-hosted inference | Nothing extra — billed against your credits. |
| openai/<model>, anthropic/<model>, openrouter/<model>, etc. | The provider’s API, using your key (BYOK) | The provider’s API key stored in Secrets. |

Dreadnode-hosted IDs always show Ready. BYOK models show Ready only when the provider’s expected key name (OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.) is configured for your user.
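
The readiness rule can be approximated as follows. This is a hedged sketch with hypothetical names (`EXPECTED_KEY`, `status`), not Dreadnode's implementation; the key-name mapping shown is only the conventional provider naming.

```python
# Conventional provider -> expected key name mapping (illustrative subset).
EXPECTED_KEY = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "openrouter": "OPENROUTER_API_KEY",
}

def status(model_id, secrets):
    """dn/* is always Ready; BYOK models need their provider key in Secrets."""
    provider = model_id.split("/", 1)[0]
    if provider == "dn":
        return "Ready"
    key = EXPECTED_KEY.get(provider)
    if key and key in secrets:
        return "Ready"
    return f"Needs {key or 'a provider key'}"

print(status("dn/claude-opus-4-6", secrets=set()))                      # Ready
print(status("anthropic/claude-opus-4-6", secrets={"OPENAI_API_KEY"}))  # Needs ANTHROPIC_API_KEY
```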

The dn/* list is sourced from currently deployed LiteLLM model aliases. When an admin adds or removes a dn/* deployment, it appears in Chat models without a platform redeploy.

Use the model browser in the settings page to search the full catalog (hosted + provider-published BYOK IDs) and enable the ones you want. Remove an enabled model from the table when you stop using it. One constraint: your list must have at least one enabled model at all times.

Adding a model validates the ID against the catalog — typos and unrecognized IDs are rejected before they reach the preference store. Ad-hoc IDs that aren’t in the catalog can still be validated through the LiteLLM compatibility check on the browser.
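
The catalog check amounts to rejecting unknown IDs before the list is updated. A minimal sketch, with hypothetical names (`add_model`, the catalog contents):

```python
# Validate an ID against the catalog before it reaches the preference store.
def add_model(model_id, catalog, enabled):
    if model_id not in catalog:
        raise ValueError(f"unknown model id: {model_id!r}")
    if model_id not in enabled:
        enabled.append(model_id)
    return enabled

catalog = {"dn/claude-opus-4-6", "openai/gpt-4.1-mini"}
enabled = ["dn/claude-opus-4-6"]
add_model("openai/gpt-4.1-mini", catalog, enabled)   # accepted
# add_model("openai/gpt-4.1-mimi", ...) would raise ValueError (typo)
```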

When a model is missing upstream metadata, Dreadnode generates a readable name from the ID and preserves dotted version segments (for example, claude-opus-4-5 displays as Claude Opus 4.5).
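
The display-name rule can be reconstructed roughly like this. This is my own sketch of the behavior described above, not Dreadnode's code:

```python
# Title-case the word segments of the ID slug and join trailing numeric
# segments with dots to recover the version.
def display_name(model_id):
    slug = model_id.split("/")[-1]
    parts = slug.split("-")
    version = []
    while parts and parts[-1].isdigit():
        version.insert(0, parts.pop())
    name = " ".join(p.capitalize() for p in parts)
    return f"{name} {'.'.join(version)}".strip()

print(display_name("claude-opus-4-5"))  # Claude Opus 4.5
```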

When a BYOK model is missing its provider key, it stays in your enabled list but won’t resolve for new runs until the key is configured. Fix the gap in two steps:

  1. Open Secrets and add a provider key (e.g. OPENAI_API_KEY).
  2. Reload Chat Models — the status flips to Ready.

Missing keys don’t remove the model from your list; they just gate its availability. Rotating or deleting a key flips the status back to Needs X_API_KEY on the next check.

Chat models and the Models registry are different resources that share a noun:

| Surface | Scope | What it manages |
| --- | --- | --- |
| Chat models (this page) | User preference | Which inference model IDs appear in your picker and whether they’re ready. |
| Models registry | Org registry | Versioned weight artifacts published from training or curation. |

A registry push (dn model push ./support-assistant) doesn’t automatically make the artifact available as a chat model — those are stored weights, not hosted inference endpoints. Serve an artifact yourself (vLLM, Ray Serve, a managed endpoint) before it becomes a --model target.

The chat-models list sets the shortlist. Session-time picking chooses from it:

  • Agent & model covers Ctrl+K, /model, per-agent overrides, and thinking-effort tuning.
  • Evaluation and runtime launches pass --model <id> and select from the enabled set.
  • The TUI’s /models command can still search the broader catalog for one-off testing outside your shortlist.
Related pages:

  • Secrets — where BYOK provider keys live
  • Agent & model — picking a model for the session in front of you
  • Models — versioned artifact registry (distinct from inference)