Introduction
Everruns is a durable agentic harness built in Rust, with a PostgreSQL-backed durable execution engine. It provides APIs for managing agents, sessions, and runs, with streaming event output via Server-Sent Events (SSE).
Overview
Everruns enables you to build reliable AI agents that can:
- Execute long-running tasks with durability guarantees
- Stream real-time events to clients
- Manage conversations through sessions
- Extend agent capabilities with modular tools
Key Concepts
Agents
Agents are AI assistants with configurable system prompts and capabilities. Each agent can be customized with:
- A system prompt that defines its behavior
- A set of capabilities that provide tools
- Model configuration for the underlying LLM
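As a concrete sketch, an agent definition built from the three items above might look like this. The field names (`system_prompt`, `capabilities`, `model`) and values are illustrative assumptions, not the exact Everruns API schema; consult the OpenAPI spec for the authoritative shapes.

```python
import json

# Hypothetical agent configuration -- every field name here is an
# assumption for illustration, not the actual Everruns schema.
agent = {
    "name": "support-bot",
    # System prompt that defines the agent's behavior
    "system_prompt": "You are a concise, friendly support assistant.",
    # Capabilities that provide tools to the agent
    "capabilities": ["web_search", "code_execution"],
    # Model configuration for the underlying LLM
    "model": {"provider": "openai", "name": "gpt-4o", "temperature": 0.2},
}

payload = json.dumps(agent)
```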
Sessions
Sessions represent conversations with an agent. Each session maintains:
- Conversation history
- Current execution state
- Configuration overrides
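The state a session tracks can be sketched roughly as follows. The class and field names are hypothetical and do not reflect the Everruns storage schema; the point is only to show how history, execution state, and overrides hang together.

```python
from dataclasses import dataclass, field

# Illustrative sketch of per-session state -- names are assumptions,
# not the Everruns data model.
@dataclass
class Session:
    agent_id: str
    history: list = field(default_factory=list)     # conversation history
    status: str = "idle"                            # current execution state
    overrides: dict = field(default_factory=dict)   # per-session config overrides

    def append_turn(self, role: str, content: str) -> None:
        """Record one conversation turn in the history."""
        self.history.append({"role": role, "content": content})

s = Session(agent_id="agent-123", overrides={"temperature": 0.7})
s.append_turn("user", "Hello!")
s.append_turn("assistant", "Hi -- how can I help?")
```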
Capabilities
Capabilities are modular functionality units that extend agent behavior. They can:
- Add instructions to the system prompt
- Provide tools for the agent to use
- Modify execution behavior
See Capabilities for more details.
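The first two bullets above can be sketched as a simple composition step: each capability contributes extra system-prompt instructions and a set of named tools. This is a hypothetical model for illustration, not the actual capability interface Everruns defines in Rust.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical capability model -- mirrors the bullets above, but is
# not the real Everruns capability interface.
@dataclass
class Capability:
    name: str
    instructions: str = ""                               # added to the system prompt
    tools: dict[str, Callable] = field(default_factory=dict)  # tools the agent may call

def compose(base_prompt: str, capabilities: list["Capability"]) -> tuple[str, dict]:
    """Merge capability instructions into the system prompt and collect all tools."""
    parts = [base_prompt] + [c.instructions for c in capabilities if c.instructions]
    tools = {name: fn for c in capabilities for name, fn in c.tools.items()}
    return "\n\n".join(parts), tools

calc = Capability(
    name="calculator",
    instructions="You may call the `add` tool for arithmetic.",
    tools={"add": lambda a, b: a + b},
)
prompt, tools = compose("You are a helpful assistant.", [calc])
```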
Getting Started
Quick Start
- Deploy Everruns using the provided Docker images
- Configure your LLM providers via the Settings UI
- Create an agent with your desired configuration
- Start sessions and interact through the API or UI
API Access
The API is available at your deployment URL:
- API Base: https://your-domain.com/api/v1/
- OpenAPI Spec: https://your-domain.com/api-doc/openapi.json
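A small helper can build endpoint URLs from the API base above. The resource path `agents` used here is an assumption for illustration; the authoritative list of paths lives in the OpenAPI spec.

```python
from urllib.parse import urljoin

# API base from the docs above; replace your-domain.com with your deployment.
API_BASE = "https://your-domain.com/api/v1/"

def endpoint(path: str) -> str:
    """Join a resource path onto the API base, tolerating a leading slash."""
    return urljoin(API_BASE, path.lstrip("/"))

# "agents" is an assumed resource name -- check the OpenAPI spec.
agents_url = endpoint("agents")
```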
Architecture
Everruns uses a layered architecture:
- API Layer: HTTP endpoints (axum), SSE streaming
- Core Layer: Agent abstractions, capabilities, tools
- Worker Layer: Durable workflows for reliable execution
- Storage Layer: PostgreSQL with encrypted secrets and durable execution state
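The API layer's SSE streaming follows the standard Server-Sent Events wire format: `name: value` field lines, with events separated by a blank line. A minimal client-side parser is sketched below; the event names and payload contents Everruns actually emits are assumptions here, so check the API documentation.

```python
def parse_sse(stream: str):
    """Yield (event, data) pairs from an SSE-formatted string.

    Implements the basic SSE framing: `event:` sets the event type,
    `data:` lines accumulate the payload, and a blank line ends the
    event. The sample event below is illustrative, not an actual
    Everruns event.
    """
    event, data_lines = "message", []
    for line in stream.splitlines():
        if not line:  # blank line terminates the current event
            if data_lines:
                yield event, "\n".join(data_lines)
            event, data_lines = "message", []
        elif line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())

raw = 'event: token\ndata: {"text": "Hello"}\n\n'
events = list(parse_sse(raw))
```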
For detailed architecture information, see the GitHub repository.