
Configuration

Full values reference for self-hosted Dreadnode — data stores, TLS, sandboxes, email, OAuth, and tuning.

Helm CLI customers configure Dreadnode through a values overlay passed to helm install. Admin Console customers (Embedded Cluster / KOTS) configure through the config screen. Both paths set the same underlying chart values — this page documents the full surface.

Values live at two levels:

  • global.* — umbrella chart. Domain, scheme, TLS, ingress, resource preset.
  • dreadnode-api.config.* — API subchart. Data stores, sandbox provider, email, OAuth, logging, auth policy, worker tuning.

The Helm Install page covers the minimum viable overlay (global.domain + optional TLS). This page covers everything else.

Domain and scheme

```yaml
global:
  domain: dreadnode.example.com # REQUIRED — chart fails without it
  scheme: https # http (default) or https
```

The domain appears in every URL the platform generates — OAuth redirects, presigned S3 URLs, password reset links. scheme controls whether those URLs use http:// or https://. Set both correctly before first use; changing them later requires a redeploy.

Admin Console: Identity → Domain, URL Scheme.
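For Helm CLI customers, the overlay is passed with `-f` at install or upgrade time. A sketch — the release name, namespace, and chart reference here are placeholders; substitute your actual chart source:

```shell
# Apply a values overlay (release name and chart reference are illustrative).
helm upgrade --install dreadnode <chart-ref> \
  --namespace dreadnode --create-namespace \
  -f values-overlay.yaml
```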

TLS

```yaml
global:
  tls:
    secretName: dreadnode-tls # kubernetes.io/tls Secret in the install namespace
    skipCheck: false # set true when TLS terminates upstream
```

See Helm Install — TLS for the full setup walkthrough.
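If you are creating the Secret by hand from an issued certificate, a kubernetes.io/tls Secret can be created with kubectl (the file names here are placeholders):

```shell
# Create a kubernetes.io/tls Secret from an existing cert/key pair.
kubectl -n <namespace> create secret tls dreadnode-tls \
  --cert=fullchain.pem --key=privkey.pem
```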

Admin Console: Networking & TLS → TLS Certificate Secret Name.

Ingress

```yaml
global:
  ingress:
    className: traefik # match your ingress controller
    annotations: {} # controller-specific annotations
```

Annotations cascade to every subchart ingress (API, frontend, MinIO). Per-subchart overrides are available at dreadnode-api.ingress.annotations, etc.

Admin Console: Networking & TLS → Ingress Class Name.
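As an example, with ingress-nginx a raised upload limit might look like this — the annotation shown is ingress-nginx-specific; use whatever your controller expects:

```yaml
global:
  ingress:
    className: nginx
    annotations:
      # ingress-nginx example — other controllers use different annotations.
      nginx.ingress.kubernetes.io/proxy-body-size: 100m
```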

Resource preset

```yaml
global:
  resourcesPreset: small # small | medium | large
```

Applied to every subchart. Preset values for the API pod:

  • small — 250m/512Mi requests, 500m/1Gi limits
  • medium — 500m/1Gi requests, 1000m/2Gi limits
  • large — 1000m/2Gi requests, 4000m/8Gi limits

Override per-subchart with explicit resources: blocks when presets don’t fit.
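For example, to give the API pod more headroom than any preset provides — these are standard Kubernetes resources fields, assuming the subchart passes them through unchanged:

```yaml
dreadnode-api:
  resources:
    requests:
      cpu: 1000m
      memory: 2Gi
    limits:
      cpu: 2000m
      memory: 4Gi
```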

Admin Console: Resource Sizing.

PostgreSQL

In-cluster by default. Switch to external to point at RDS or another managed PostgreSQL service.

No configuration needed. The chart deploys a single-replica PostgreSQL StatefulSet with auto-generated credentials.

```yaml
dreadnode-api:
  endpoints:
    database:
      external: my-rds-instance.region.rds.amazonaws.com
  credentials:
    database:
      source: externalSecret
      secretName: dreadnode-external-pg # KOTS creates this; Helm customers pre-create it
  config:
    database:
      port: 5432
      name: platform
      user: admin
      useSsl: true # recommended for all managed Postgres
      useIamAuth: false # set true for RDS IAM auth (no static password)
dreadnode-base:
  postgresql:
    enabled: false # disable the in-cluster StatefulSet
```

For Helm CLI customers, pre-create the Secret:

```shell
kubectl -n <namespace> create secret generic dreadnode-external-pg \
  --from-literal=password='<db-password>'
```

For IAM auth (useIamAuth: true), the API pod’s service account needs an IAM role with rds-db:connect permission. Configure IRSA via:

```yaml
dreadnode-api:
  serviceAccount:
    annotations:
      eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/dreadnode-api
```

Admin Console: Data Stores → PostgreSQL → “Connect to an external database”, then fill in host, port, database, user, password, SSL, and IAM auth fields.

ClickHouse

In-cluster by default. Switch to external for managed ClickHouse.

```yaml
dreadnode-api:
  endpoints:
    clickhouse:
      external: my-clickhouse.example.com
  credentials:
    clickhouse:
      source: externalSecret
      secretName: dreadnode-external-ch
  config:
    clickhouse:
      protocol: https # http (default) or https
      port: 8443 # adjust for your service
      database: default
      user: admin
dreadnode-base:
  clickhouse:
    enabled: false
```

Pre-create the Secret:

```shell
kubectl -n <namespace> create secret generic dreadnode-external-ch \
  --from-literal=admin-password='<ch-password>'
```

Admin Console: Data Stores → ClickHouse → “Connect to an external service.”

Object storage (S3/MinIO)

In-cluster MinIO by default. Switch to external for AWS S3 or another S3-compatible service.

```yaml
dreadnode-api:
  endpoints:
    s3:
      internal: '' # leave empty for AWS S3 (uses default endpoint)
      external: https://s3.us-east-1.amazonaws.com
  credentials:
    s3:
      source: static # static | iam | minio
      accessKeyId: AKIA...
      secretAccessKey: <secret>
  config:
    s3:
      region: us-east-1
      buckets:
        pythonPackages: my-packages-bucket
        orgData: my-org-data-bucket
        userDataLogs: my-logs-bucket
      sdk:
        userDataRoleArn: arn:aws:iam::123456789012:role/dreadnode-user-data
        stsDurationSeconds: 3600
dreadnode-base:
  minio:
    enabled: false
```

For IAM-based credentials (source: iam), omit accessKeyId and secretAccessKey and configure IRSA on the API service account instead.
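A sketch of the iam variant, reusing the serviceAccount annotation pattern shown for the database above — the role name is illustrative:

```yaml
dreadnode-api:
  credentials:
    s3:
      source: iam # no static keys; the pod's IAM identity is used
  serviceAccount:
    annotations:
      eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/dreadnode-api
```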

The userDataRoleArn is the IAM role the API assumes when minting scoped workspace credentials via STS. It must trust the API pod’s identity and have s3:* on the orgData bucket.
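When the API pod runs as its own IAM role, a trust policy of roughly this shape satisfies that requirement — the ARNs are illustrative:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:role/dreadnode-api" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```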

Admin Console: Data Stores → S3/MinIO → “Connect to an external service.”

Sandbox provider

```yaml
dreadnode-api:
  config:
    sandboxProvider: opensandbox # opensandbox (default) or e2b
```

OpenSandbox (default) runs sandboxes on-cluster using the dreadnode-sandbox-controller and dreadnode-sandbox-server subcharts. No additional configuration needed.

E2B offloads sandboxes to E2B’s cloud. Requires outbound internet and an API key:

```yaml
dreadnode-api:
  config:
    sandboxProvider: e2b
  extraEnv:
    - name: E2B_API_KEY
      value: <your-e2b-key>
# Optionally disable the on-cluster sandbox subcharts to reclaim resources
dreadnode-sandbox-controller:
  enabled: false
dreadnode-sandbox-server:
  enabled: false
```

Admin Console: Sandbox Runtime → OpenSandbox or E2B.

Email

The default is no email — verification URLs are logged at WARNING level by the API pod, and an operator copies them out. This is the expected path for most enterprise installs.

To wire an SMTP relay:

```yaml
dreadnode-api:
  config:
    email:
      provider: smtp
      fromAddress: noreply@example.com
      fromName: Dreadnode
      smtp:
        host: smtp.example.com
        port: 587
        user: apikey
        useTls: true
        existingSecret: dreadnode-smtp-password
        passwordKey: password
```

Pre-create the SMTP password Secret:

```shell
kubectl -n <namespace> create secret generic dreadnode-smtp-password \
  --from-literal=password='<smtp-password>'
```

Admin Console: Not exposed on the config screen. Helm-only via dreadnode-api.config.email.*.

OAuth

Local password auth is the default. GitHub and Google login can be added independently.

```yaml
dreadnode-api:
  config:
    oauth:
      github:
        clientId: <github-client-id>
        existingSecret: dreadnode-github-oauth
        clientSecretKey: clientSecret
      google:
        clientId: <google-client-id>
        existingSecret: dreadnode-google-oauth
        clientSecretKey: clientSecret
```

Pre-create the corresponding Secret for each provider. The chart does not create or manage OAuth client secrets.
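For example, for the GitHub provider above — the Secret key name must match clientSecretKey:

```shell
kubectl -n <namespace> create secret generic dreadnode-github-oauth \
  --from-literal=clientSecret='<github-client-secret>'
```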

Admin Console: Not exposed on the config screen. Helm-only via dreadnode-api.config.oauth.*.

Logging

```yaml
dreadnode-api:
  config:
    logging:
      level: info # trace | debug | info | warning | error | critical
      structured: false # true = JSON logs for aggregators (Splunk, Datadog, ELK)
```

debug is the right choice during an incident. trace is extremely verbose — only useful for framework-level debugging.

Admin Console: Logging → Log Level, Structured JSON.

Auth policy

```yaml
dreadnode-api:
  config:
    auth:
      minPasswordLength: 12 # default: 8
      emailRegexes:
        - '^.*@example\.com$' # restrict signups to a domain
```

Admin Console: Not exposed on the config screen. Helm-only.

Worker concurrency

Each API pod runs in-process workers for evaluations, Worlds jobs, training, and optimization. Default concurrency is 1 per worker type per pod.

```yaml
dreadnode-api:
  config:
    workers:
      concurrency:
        evaluation: 2
        worlds: 2
        training: 1
        optimization: 1
```

Raise these when a queue is backing up and the API pod has CPU/memory headroom. This is the primary scaling lever before adding more API replicas.

Admin Console: Not exposed on the config screen. Helm-only.

Extra environment variables

For configuration not covered by the values schema, inject env vars directly:

```yaml
dreadnode-api:
  extraEnv:
    - name: SOME_FEATURE_FLAG
      value: 'true'
  extraEnvFrom:
    - secretRef:
        name: my-extra-secrets
```

Environment variable reference

The repo expects configuration to be centralized under platform/envs/. The most important values for a self-hosted deployment are:

| Variable | Purpose |
| --- | --- |
| ENVIRONMENT | Selects the environment profile such as local, dev, staging, or prod |
| DEPLOYMENT_MODE | Chooses saas or enterprise behavior |
| CORS_ORIGINS | Explicit origin allow-list for browser clients |
| FRONTEND_URL_OVERRIDE | Forces the frontend base URL when it should not be derived from PROTOCOL and TLD |
| SECRET_KEY | Core app secret for signing and internal security flows |
| JWT_SECRET_KEY | Access-token signing secret |
| Variable | Purpose |
| --- | --- |
| DATABASE_HOST | PostgreSQL host |
| DATABASE_PORT | PostgreSQL port |
| DATABASE_NAME | PostgreSQL database name |
| DATABASE_USER | PostgreSQL username |
| DATABASE_PASSWORD | PostgreSQL password unless IAM auth is enabled |
| DATABASE_USE_IAM_AUTH | Switches database auth to IAM token mode for RDS proxy style deployments |
| RO_READER_DB_PASSWORD | Password used by Alembic migrations to provision/update the ro_reader PostgreSQL role |
| CLICKHOUSE_USER | ClickHouse user |
| CLICKHOUSE_DATABASE | ClickHouse database |
| USE_DUCKDB | Development toggle for alternate local analytics storage paths; ClickHouse remains the recommended default |
| USE_SHARED_MERGE_TREE_OVERRIDE | Forces self-hosted ClickHouse away from cloud-only SharedMergeTree behavior |
| Variable | Purpose |
| --- | --- |
| S3_AWS_ENDPOINT_URL | Internal S3 or MinIO endpoint |
| S3_AWS_ACCESS_KEY_ID | Object-storage access key |
| S3_AWS_SECRET_ACCESS_KEY | Object-storage secret |
| ORG_DATA_BUCKET_NAME | Main organization data bucket |
| Variable | Purpose |
| --- | --- |
| RECAPTCHA_ENABLED | Enables or disables Recaptcha validation |
| RECAPTCHA_PUBLIC_KEY | Browser-side Recaptcha key when enabled |
| RECAPTCHA_SECRET_KEY | Server-side Recaptcha verification key |
| LITELLM_ENABLED | Enables LiteLLM key provisioning, admin routes, and sandbox env injection |
| LITELLM_INTERNAL_URL | API-to-LiteLLM URL for admin APIs |
| LITELLM_PUBLIC_URL | OpenAI-compatible LiteLLM base URL injected into sandboxes and TUI sessions |
| LITELLM_MASTER_KEY | Shared auth key for LiteLLM proxy access |
| LITELLM_SALT_KEY | Stable root secret for encrypted LiteLLM runtime credentials |
| LITELLM_DATABASE_URL | LiteLLM Prisma database URL, usually with ?schema=litellm |
| LITELLM_TUI_KEY_DURATION_SECONDS | TTL for TUI inference keys |
| LITELLM_BUDGET_FLOAT_BUFFER_USD | SaaS-only budget headroom used when syncing credits to LiteLLM team budgets |
| STRIPE_SECRET_KEY | Stripe API key for SaaS billing |
| STRIPE_WEBHOOK_SECRET | Stripe webhook verification secret |
| STRIPE_PRICE_ID | Stripe price identifier for credit purchases |

Use platform/envs/ as the source of truth:

  • platform/envs/local.env for local development
  • platform/envs/{env}.env for committed non-secret configuration
  • platform/envs/{env}.secrets.enc for encrypted secrets

That split keeps non-sensitive settings in version control while preserving encrypted secrets for deployed environments.

Database authentication modes

The API supports two database authentication modes:

  • DATABASE_USE_IAM_AUTH=false (default): password-based authentication using DATABASE_PASSWORD
  • DATABASE_USE_IAM_AUTH=true: IAM auth token injection for RDS Proxy connections (no static DB password required at runtime)

For migration-time role provisioning, set LITELLM_DB_PASSWORD and RO_READER_DB_PASSWORD in deployment environments. Local development can omit them.

  • CORS_ORIGINS falls back to the derived frontend URL if you do not override it explicitly.
  • In local development, platform/envs/local.example.env defaults to enterprise mode. If you switch to saas mode, mock Stripe values are provided so the app can boot without a live billing integration — but inference key provisioning will require a credit balance.
  • For self-hosted ClickHouse, keep USE_SHARED_MERGE_TREE_OVERRIDE=false unless you know you are on a compatible managed ClickHouse setup.
  • In dev environments, TAILNET_ID can help derive LITELLM_PUBLIC_URL when you do not want to hardcode it.
  • If LITELLM_DATABASE_URL points at the app Postgres database, include ?schema=litellm so LiteLLM’s Prisma tables stay separate from the app’s public schema.
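A shared-database URL of that form might look like the following — host, credentials, and database name are placeholders:

```text
postgresql://litellm:<password>@<db-host>:5432/platform?schema=litellm
```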

STS credential duration

The API issues temporary STS credentials for workspace S3 mounts.

  • STS_CREDENTIAL_DURATION_SECONDS (default: 3600) controls the assumed-role session duration.
  • Values above 3600 are rejected.
  • This limit aligns with AWS’s 1-hour role-chaining ceiling for assumed-role sessions.
  • Ensure the IAM role referenced by USER_DATA_ROLE_ARN has a MaxSessionDuration at least as large as this value.
Operational notes

  • Keep local development on the repo defaults in platform/envs/local.env unless you have a clear reason to diverge. The default is DEPLOYMENT_MODE=enterprise, which disables credit billing.
  • If you need SaaS mode, set DEPLOYMENT_MODE=saas explicitly. Stripe settings are then required by the config validator for billing to activate correctly.
  • In Enterprise mode, you can usually disable billing-specific values and focus on auth, storage, and analytics connectivity.
  • If RECAPTCHA_ENABLED=true, both Recaptcha keys must be present.
  • If LITELLM_ENABLED=true, provide LITELLM_MASTER_KEY, keep LITELLM_SALT_KEY stable, and make sure LITELLM_PUBLIC_URL is resolvable from sandboxes.
  • When changing config, update packages/api/app/core/config.py and the matching files in platform/envs/ together so the docs, schema, and runtime stay aligned.