Environment Variables

DEV_MODE

Enable development mode with in-memory storage. No PostgreSQL required.

Required: No
Default:  false

Example:

# Start in dev mode (no database required)
DEV_MODE=true ./target/debug/everruns-server
# DEV_MODE=1 is also accepted
DEV_MODE=1 ./target/debug/everruns-server

Notes:

  • When enabled, uses in-memory storage instead of PostgreSQL
  • All data is lost when the server stops
  • gRPC server and worker communication are disabled
  • Stale task reclamation is disabled
  • Useful for quick local development and testing
  • Not suitable for production or multi-instance deployments

Limitations in dev mode:

  • No persistence (data is lost on restart)
  • No worker support (all execution happens in-process)
  • No distributed tracing of worker activities
  • Single-instance only

DEPLOYMENT_GRADE

Deployment environment grade. Controls which features and capabilities are available.

Required: No
Default:  prod (or dev if DEV_MODE=true)

Valid values:

Grade     Description
dev       Development - all experimental features enabled
poc       Proof of concept / demo environment
preview   Preview/staging environment
prod      Production - only stable features

Example:

# Run in development mode with experimental features
DEPLOYMENT_GRADE=dev ./target/debug/everruns-server
# Production mode (default)
DEPLOYMENT_GRADE=prod ./target/debug/everruns-server

Notes:

  • If not set, falls back to DEV_MODE: if DEV_MODE=true, uses dev; otherwise uses prod
  • Experimental capabilities (e.g., Docker Container) are only available in dev grade
  • Experimental seed agents (e.g., Python Coder) are only created in dev grade
  • Use dev for local development and testing experimental features
  • Use prod for production deployments
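The fallback described in the notes can be sketched in Python (an illustrative model of the resolution order, not the server's actual Rust code):

```python
def resolve_grade(env):
    """Resolve the effective deployment grade from environment variables.

    DEPLOYMENT_GRADE wins when set; otherwise DEV_MODE=true (or 1)
    implies "dev", and everything else falls back to "prod".
    """
    grade = env.get("DEPLOYMENT_GRADE")
    if grade:
        return grade
    return "dev" if env.get("DEV_MODE") in ("true", "1") else "prod"
```

For example, resolve_grade({}) yields "prod", while an explicit DEPLOYMENT_GRADE overrides DEV_MODE entirely.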

API_PREFIX

Optional prefix for all API routes.

Required: No
Default:  Empty (no prefix)

Example:

# Routes at /api/v1/agents
API_PREFIX=/api

Notes:

  • /health and /api-doc/openapi.json are not affected by this prefix
  • All API routes including auth (/v1/auth/*) are affected by this prefix
  • OAuth callback URLs automatically include this prefix when using defaults
  • Use when running behind a reverse proxy or API gateway that expects a path prefix
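The prefixing rules in the notes can be modeled with a small Python sketch (illustrative only; the exempt routes are the two named above):

```python
# Routes served unprefixed regardless of API_PREFIX
EXEMPT = {"/health", "/api-doc/openapi.json"}

def effective_path(route, prefix=""):
    """Return the client-facing path for an internal route.

    Health and OpenAPI routes stay unprefixed; everything else,
    including /v1/auth/*, gets API_PREFIX prepended.
    """
    if route in EXEMPT:
        return route
    return prefix + route
```

With API_PREFIX=/api, effective_path("/v1/agents", "/api") gives "/api/v1/agents" while "/health" is unchanged.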

CORS_ALLOWED_ORIGINS

Comma-separated list of allowed origins for cross-origin requests. Only needed when the UI is served from a different domain than the API.

Required: No
Default:  Not set (CORS disabled)

Example:

# Allow requests from a different frontend origin
CORS_ALLOWED_ORIGINS=https://app.example.com
# Multiple origins
CORS_ALLOWED_ORIGINS=https://app.example.com,https://admin.example.com

Notes:

  • Not needed for local development (Caddy reverse proxy handles /api/* requests)
  • Not needed in production if using a reverse proxy on the same domain
  • If set, credentials are allowed (Access-Control-Allow-Credentials: true)
  • Wildcard (*) is not supported when using credentials

LLM provider API keys (OpenAI, Anthropic, Gemini) are primarily stored encrypted in the database and managed via the Settings > Providers UI.

Storage:             Database (encrypted with AES-256-GCM)
Configuration:       Settings > Providers UI or /v1/llm-providers API
Supported Providers: OpenAI, Anthropic, Google Gemini

Required for encryption:

The SECRETS_ENCRYPTION_KEY environment variable must be set for the control-plane API to encrypt/decrypt API keys. Workers receive decrypted API keys via gRPC and do not need this variable.

# Generate a new key
python3 -c "import os, base64; print('kek-v1:' + base64.b64encode(os.urandom(32)).decode())"
# Set in environment (control-plane only)
SECRETS_ENCRYPTION_KEY=kek-v1:your-generated-key-here
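To sanity-check a key before deploying it, the shape produced by the generator above (a kek-v1: prefix followed by 32 base64-encoded random bytes) can be validated with a short Python snippet:

```python
import base64

def is_valid_kek(key):
    """Check that a key matches the kek-v1:<base64 of 32 bytes> shape."""
    prefix, sep, encoded = key.partition(":")
    if prefix != "kek-v1" or not sep:
        return False
    try:
        # binascii.Error (a ValueError subclass) is raised on bad base64
        return len(base64.b64decode(encoded, validate=True)) == 32
    except ValueError:
        return False
```

This only checks the format, not whether the key matches what the control-plane used to encrypt existing secrets.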

Default API Keys (Development Convenience)

For development, you can set default API keys via environment variables on the control-plane only. These are used as fallbacks when providers don’t have keys configured in the database.

Variable                    Description
DEFAULT_OPENAI_API_KEY      Fallback API key for OpenAI providers
DEFAULT_ANTHROPIC_API_KEY   Fallback API key for Anthropic providers
DEFAULT_GEMINI_API_KEY      Fallback API key for Google Gemini providers

Example:

# Set in .env or environment (control-plane only)
DEFAULT_OPENAI_API_KEY=sk-...
DEFAULT_ANTHROPIC_API_KEY=sk-ant-...
DEFAULT_GEMINI_API_KEY=AIza...

Notes:

  • These variables are only used by the control-plane, not workers
  • Workers receive API keys via gRPC from the control-plane
  • Database-stored keys always take priority over environment variables
  • These are intended for development convenience, not production use
  • The just start-all command automatically sets these from OPENAI_API_KEY, ANTHROPIC_API_KEY, and GEMINI_API_KEY if present
  • If no API key is configured for a provider, LLM calls will fail and users will see an error message in the chat: “I encountered an error while processing your request. Please try again later.”
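The priority order in the notes (database-stored key first, DEFAULT_* environment fallback second) can be sketched as:

```python
def resolve_api_key(db_key, env, provider):
    """Pick the API key for a provider: database first, env fallback second.

    `provider` is one of "openai", "anthropic", "gemini"; the DEFAULT_*
    variable names follow the table above. Illustrative sketch only.
    """
    if db_key:
        return db_key
    return env.get(f"DEFAULT_{provider.upper()}_API_KEY")
```

If neither source has a key, the function returns None, matching the failure case described in the last note.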

The UI makes all API requests (including SSE) to /api/* paths. Caddy reverse proxy strips /api and forwards to the backend.

Local Development:

  • Caddy on :9300 routes /api/* to backend at :9000 (strips /api prefix)
  • Example: /api/v1/agents → http://localhost:9000/v1/agents
  • SSE streaming works via flush_interval -1 in Caddy config
  • No CORS needed (same-origin through Caddy)

Production:

  • Configure your reverse proxy (nginx, Caddy, etc.) to route /api/* to the API server
  • Strip the /api prefix when forwarding
  • Disable response buffering for SSE endpoints
  • Example Caddy config: see local/Caddyfile
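A minimal Caddy config along these lines might look as follows (a sketch, not a copy of local/Caddyfile; treat that file as authoritative):

```
:9300 {
    # handle_path strips the matched /api prefix before forwarding
    handle_path /api/* {
        reverse_proxy localhost:9000 {
            # Disable response buffering so SSE events flush immediately
            flush_interval -1
        }
    }
}
```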

Variable                      Default   Description
SSE_REALTIME_CYCLE_SECS       300       Connection cycle interval for session event streams (seconds)
SSE_MONITORING_CYCLE_SECS     600       Connection cycle interval for durable monitoring streams (seconds)
SSE_HEARTBEAT_INTERVAL_SECS   30        Interval between heartbeat comments on all SSE streams (seconds)
SSE_GLOBAL_MAX                10000     Maximum total SSE connections across all users
SSE_PER_SESSION_MAX           5         Maximum SSE connections per session
SSE_PER_ORG_MAX               1000      Maximum SSE connections per organization

Notes:

  • Heartbeat comments (: heartbeat\n\n) are sent on all SSE streams to detect stale connections
  • The heartbeat interval must be less than the SDK read timeout (default: 60s) with safety margin
  • Connection cycling prevents stale connections through proxies and load balancers
  • When running behind HTTP/1.1 proxies, increase SSE_REALTIME_CYCLE_SECS to reduce reconnection frequency
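Per the SSE wire format, lines beginning with a colon are comments, so the : heartbeat messages are invisible to spec-compliant clients. An illustrative filter (a sketch, not the Everruns SDK):

```python
def data_events(stream_lines):
    """Yield data payloads from raw SSE lines, skipping comment lines.

    Comment lines (starting with ":") carry the heartbeat; real events
    arrive on "data: ..." lines.
    """
    for line in stream_lines:
        if line.startswith(":"):
            continue  # heartbeat / keep-alive comment
        if line.startswith("data:"):
            yield line[len("data:"):].strip()
```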

WORKER_GRPC_ADDRESS

Address of the control-plane gRPC server for worker communication.

Required: No (worker only)
Default:  127.0.0.1:9001

Example:

WORKER_GRPC_ADDRESS=127.0.0.1:9001

Notes:

  • Workers communicate with the control-plane via gRPC for all database operations
  • The control-plane exposes both HTTP (port 9000) and gRPC (port 9001) interfaces
  • Workers are stateless and do not connect directly to the database

WORKER_GRPC_AUTH_TOKEN

Bearer token for authenticating worker gRPC connections to the control-plane.

Required: Yes (production); No (dev mode)
Default:  Unset (auth disabled)

Example:

WORKER_GRPC_AUTH_TOKEN=your-secret-token

Notes:

  • Must be set on both the server and all workers (same value)
  • When unset, gRPC auth is disabled (acceptable for local development only)
  • Server panics on startup if unset when not in dev mode

WORKER_GRPC_ADDR

Bind address for the server-side gRPC listener (control-plane only).

Required: No (server only)
Default:  0.0.0.0:9001

Example:

WORKER_GRPC_ADDR=0.0.0.0:9001

WORKER_GRPC_CONNECT_TIMEOUT

Timeout in seconds for the worker's initial connection to the control-plane gRPC server.

Required: No (worker only)
Default:  30

Example:

WORKER_GRPC_CONNECT_TIMEOUT=60

Everruns supports distributed tracing via OpenTelemetry with OTLP export. Traces follow the Gen-AI semantic conventions for LLM operations.

OTEL_EXPORTER_OTLP_ENDPOINT

OTLP endpoint for trace export (e.g., Jaeger, Grafana Tempo, or any OTLP-compatible backend).

Required: No
Default:  Not set (tracing disabled)

Example:

# For local Jaeger
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
# For production Tempo
OTEL_EXPORTER_OTLP_ENDPOINT=http://tempo.monitoring:4317

Notes:

  • When set, traces are exported via OTLP/gRPC
  • For local development, Jaeger is included in docker-compose.yml
  • Without this variable, only console logging is enabled

OTEL_SERVICE_NAME

Service name for traces.

Required: No
Default:  everruns-server (API), everruns-worker (Worker)

Example:

OTEL_SERVICE_NAME=everruns-prod-api

Service version for traces.

Required: No
Default:  Cargo package version

OTEL_ENVIRONMENT

Deployment environment label.

Required: No
Default:  Not set

Example:

OTEL_ENVIRONMENT=production

OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT

Enable recording of LLM input/output content in traces. Warning: may contain sensitive data.

Required: No
Default:  false

Example:

# Standard OTel env var (preferred)
OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true
# Legacy alias (also works)
OTEL_RECORD_CONTENT=true

Notes:

  • When enabled, gen_ai.input.messages, gen_ai.output.messages, gen_ai.tool.call.arguments, gen_ai.tool.call.result, and thinking content are recorded
  • Disabled by default for privacy and data size concerns
  • Only enable in development or when debugging specific issues
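As a rough model of the two variables above (assuming that either one set to "true" enables capture; the actual precedence between them lives in the server code):

```python
def capture_enabled(env):
    """True if Gen-AI message content should be recorded in traces."""
    std = env.get("OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT")
    legacy = env.get("OTEL_RECORD_CONTENT")
    return std == "true" or legacy == "true"
```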

The local/docker-compose.yml includes Jaeger for local trace visualization:

# Start all services including Jaeger
just start
# Set OTLP endpoint for API and Worker
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
# View traces at
open http://localhost:16686

Port    Description
4317    OTLP gRPC receiver
4318    OTLP HTTP receiver
16686   Jaeger UI

Traces follow the agentic execution lifecycle with 13 event types:

invoke_agent {turn_id} (root span)
├── reason (LLM reasoning phase)
│   ├── thinking (extended thinking, if enabled)
│   └── chat {model} (LLM API call)
├── act (tool execution phase)
│   ├── execute_tool {name}
│   └── execute_tool {name}
├── reason (iteration 2)
│   └── chat {model}
└── ...

All spans include OpenTelemetry attributes following the Gen-AI semantic conventions:

Attribute                            Span Types                         Description
gen_ai.operation.name                All                                Operation type (invoke_agent, chat, execute_tool, reason, act, thinking)
gen_ai.system                        chat                               Provider (openai, anthropic, gemini)
gen_ai.request.model                 chat, thinking                     Requested model name
gen_ai.response.model                chat                               Model actually used
gen_ai.response.id                   chat                               Response identifier
gen_ai.response.finish_reasons       chat                               Why generation stopped
gen_ai.usage.input_tokens            chat, reason, invoke_agent         Prompt tokens used
gen_ai.usage.output_tokens           chat, reason, invoke_agent         Completion tokens used
gen_ai.usage.cache_read_tokens       chat                               Tokens read from prompt cache
gen_ai.usage.cache_creation_tokens   chat                               Tokens written to prompt cache
gen_ai.output.type                   chat                               text or tool_calls
gen_ai.conversation.id               All                                Session identifier
gen_ai.tool.name                     execute_tool                       Tool name
gen_ai.tool.call.id                  execute_tool                       Tool call identifier
tool.success                         execute_tool                       Whether tool succeeded
turn.id                              invoke_agent                       Turn identifier
turn.iterations                      invoke_agent                       Number of reason/act iterations
error.type                           invoke_agent, chat, execute_tool   Error description (on failure)
otel.status_code                     invoke_agent                       ERROR on failure/cancellation
duration_ms                          All                                Span duration in milliseconds
time_to_first_token_ms               chat                               Streaming latency

Everruns supports sending LLM generation events to Braintrust for observability, evaluation, and logging.

For setup instructions and configuration details, see the Braintrust Integration Guide.

Variable                  Required   Default                      Description
BRAINTRUST_API_KEY        Yes        -                            API key from Braintrust settings
BRAINTRUST_PROJECT_NAME   No         My Project                   Project name for organizing traces
BRAINTRUST_PROJECT_ID     No         -                            Direct project UUID (skips name lookup)
BRAINTRUST_API_URL        No         https://api.braintrust.dev   API base URL