Chat models
Curate the inference models that appear in your assistant picker and manage the provider-key dependencies that gate them.
Chat models is your account-scoped shortlist of inference models — the set that shows up in the assistant picker, the TUI model switcher, and any other surface that asks “which model do you want to run this on?”. Add or remove IDs, track which ones have the provider keys they need, and jump to Secrets when a key is missing.
Settings → Chat Models

| Model ID | Provider | Status |
|---|---|---|
| dn/claude-opus-4-6 | Dreadnode | ✓ Ready |
| openai/gpt-4.1-mini | OpenAI | ✓ Ready |
| anthropic/claude-opus-4-6 | Anthropic | ⚠ Needs ANTHROPIC_API_KEY |

What the preference controls

The durable state is a list of enabled_model_ids. Every surface that picks a model consults this list:
- The web assistant picker only shows enabled IDs.
- The TUI’s Ctrl+K picker groups enabled IDs first; /models can search the broader catalog for one-offs.
- Evaluations and runtime launches validate the --model flag against the set when the server is SaaS-gated.
If enabled_model_ids is empty, Dreadnode treats that as all available models enabled.
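The empty-list fallback can be sketched as follows. This is an illustrative approximation, not the platform's actual resolution code; the function and parameter names are hypothetical.

```python
def effective_models(enabled_model_ids: list[str], catalog: list[str]) -> list[str]:
    """Resolve the models a picker should offer.

    An empty preference list means every catalog model is enabled.
    """
    if not enabled_model_ids:
        return list(catalog)
    enabled = set(enabled_model_ids)
    # Preserve catalog order while filtering to the enabled subset.
    return [m for m in catalog if m in enabled]
```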
Model namespaces
| Namespace | Where it runs | What you need |
|---|---|---|
| dn/&lt;model&gt; | Dreadnode-hosted inference | Nothing extra — billed against your credits. |
| openai/&lt;model&gt;, anthropic/&lt;model&gt;, openrouter/&lt;model&gt;, etc. | The provider’s API, using your key (BYOK) | The provider’s API key stored in Secrets. |
Dreadnode-hosted IDs always show Ready. BYOK models show Ready only when the provider’s expected key name (OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.) is configured for your user.
The dn/* list is sourced from currently deployed LiteLLM model aliases. When an admin adds or removes a dn/* deployment, it appears in Chat models without a platform redeploy.
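The readiness rule above can be sketched as a small status function. The EXPECTED_KEYS mapping and function name are illustrative; the real platform checks keys stored in Secrets, and supports more providers than shown here.

```python
# Assumed namespace-to-key mapping, following the key names named in this page.
EXPECTED_KEYS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "openrouter": "OPENROUTER_API_KEY",
}

def model_status(model_id: str, configured_secrets: set[str]) -> str:
    """Return the readiness string a Chat Models table row would show."""
    namespace = model_id.split("/", 1)[0]
    if namespace == "dn":
        return "Ready"  # Dreadnode-hosted: no user key required
    expected = EXPECTED_KEYS.get(namespace)
    if expected is None:
        return "Unknown provider"
    return "Ready" if expected in configured_secrets else f"Needs {expected}"
```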
Add or remove a model
Use the model browser in the settings page to search the full catalog (hosted + provider-published BYOK IDs) and enable the ones you want. Remove an enabled model from the table when you stop using it. One constraint: your list must have at least one enabled model at all times.
Adding a model validates the ID against the catalog — typos and unrecognized IDs are rejected before they reach the preference store. Ad-hoc IDs that aren’t in the catalog can still be validated through the LiteLLM compatibility check in the model browser.
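The two rules — catalog validation on add, and the one-model floor on remove — might look like this. A minimal sketch with hypothetical helper names, not the platform's actual implementation.

```python
def enable_model(enabled: list[str], model_id: str, catalog: set[str]) -> list[str]:
    """Reject IDs the catalog doesn't recognize before touching the preference."""
    if model_id not in catalog:
        raise ValueError(f"Unrecognized model ID: {model_id!r}")
    if model_id not in enabled:
        enabled = enabled + [model_id]
    return enabled

def disable_model(enabled: list[str], model_id: str) -> list[str]:
    """Enforce the floor: the list may never drop below one enabled model."""
    if model_id in enabled and len(enabled) == 1:
        raise ValueError("At least one model must stay enabled")
    return [m for m in enabled if m != model_id]
```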
When a model is missing upstream metadata, Dreadnode generates a readable name from the ID and preserves dotted version segments (for example, claude-opus-4-5 displays as Claude Opus 4.5).
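One way to approximate that fallback naming is below; this is a guess at the behavior from the single example given (claude-opus-4-5 → Claude Opus 4.5), not the documented algorithm.

```python
def display_name(model_id: str) -> str:
    """Generate a readable name from a model ID.

    Trailing numeric segments are joined with dots as a version,
    e.g. "anthropic/claude-opus-4-5" -> "Claude Opus 4.5".
    """
    slug = model_id.split("/")[-1]
    parts = slug.split("-")
    version: list[str] = []
    # Peel trailing all-digit segments off into the version string.
    while parts and parts[-1].isdigit():
        version.insert(0, parts.pop())
    name = " ".join(p.capitalize() for p in parts)
    return f"{name} {'.'.join(version)}" if version else name
```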
When a model shows “Needs X_API_KEY”
The model stays in your enabled list but won’t resolve for new runs until the required key is configured. Fix the gap in two steps:
- Open Secrets and add a provider key (e.g. OPENAI_API_KEY).
- Reload Chat Models — the status flips to Ready.
Missing keys don’t remove the model from your list; they just gate its availability. Rotating or deleting a key flips the status back to Needs X_API_KEY on the next check.
Chat models vs the registry
These are different resources that share a noun:
| Surface | Scope | What it manages |
|---|---|---|
| Chat models (this page) | User preference | Which inference model IDs appear in your picker and whether they’re ready. |
| Models registry | Org registry | Versioned weight artifacts published from training or curation. |
A registry push (dn model push ./support-assistant) doesn’t automatically make the artifact available as a chat model — those are stored weights, not hosted inference endpoints. Serve an artifact yourself (vLLM, Ray Serve, a managed endpoint) before it becomes a --model target.
Chat models vs session picking
The chat-models list sets the shortlist. Session-time picking chooses from it:
- Agent &amp; model covers Ctrl+K, /model, per-agent overrides, and thinking-effort tuning.
- Evaluation and runtime launches pass --model &lt;id&gt; and select from the enabled set.
- The TUI’s /models command can still search the broader catalog for one-off testing outside your shortlist.
Related
- Secrets — where BYOK provider keys live
- Agent & model — picking a model for the session in front of you
- Models — versioned artifact registry (distinct from inference)