This notebook shows the smallest useful Everruns SDK flow: create an agent, start a Generic harness session, send one message, and stream the reply inline.
The client defaults to https://app.everruns.com/api. Override EVERRUNS_API_URL only when you want to point the same notebook at a local or self-hosted Everruns deployment.
Install the SDK
Run this cell once in the notebook environment.
%pip install -q everruns-sdk
Configure the client
Set EVERRUNS_API_KEY before running against app.everruns.com. For a local dev-mode server, point EVERRUNS_API_URL at it and use dev as the key.
import os
import uuid
from everruns_sdk import Everruns
BASE_URL = os.environ.get("EVERRUNS_API_URL", "https://app.everruns.com/api")
API_KEY = os.environ.get("EVERRUNS_API_KEY", "")
if BASE_URL.startswith("https://app.everruns.com") and not API_KEY:
    raise RuntimeError("Set EVERRUNS_API_KEY before running this notebook against app.everruns.com.")
if not API_KEY:
    API_KEY = "dev"
client = Everruns(api_key=API_KEY, base_url=BASE_URL)
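If you later move this flow into a script, the key-resolution logic above can be factored into a small helper. This is a notebook-local sketch (resolve_credentials is not part of the SDK):

```python
def resolve_credentials(base_url: str, api_key: str) -> str:
    """Return the API key to use, enforcing the hosted-endpoint rule above."""
    if base_url.startswith("https://app.everruns.com") and not api_key:
        raise RuntimeError(
            "Set EVERRUNS_API_KEY before running against app.everruns.com."
        )
    # Local or self-hosted deployments accept the "dev" placeholder key.
    return api_key or "dev"
```

For example, resolve_credentials("http://localhost:8080/api", "") returns "dev", while an empty key against the hosted endpoint raises immediately instead of failing later with a 401.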
Create the agent
This uses the server's default model. The unique suffix keeps the notebook rerunnable.
run_suffix = uuid.uuid4().hex[:8]
agent = await client.agents.create(
    name=f"sdk-notebook-{run_suffix}",
    system_prompt=(
        "You are a concise demo assistant. "
        "Answer in short bullet points and keep responses short."
    ),
)
agent.id
Start a session
A Generic harness session is the quickest way to run the agent without adding extra capabilities or files.
session = await client.sessions.create(
    harness_name="generic",
    agent_id=agent.id,
    title=f"SDK notebook demo {run_suffix}",
)
session.id
Send a message and stream the reply
This cell sends one message, prints the streamed text as it arrives, and collects the IDs you might want to keep into a small result dict.
await client.messages.create(
    session.id,
    "What does Everruns do, and why would a product team use it?",
)
streamed_chunks = []
completed_text = None
async for event in client.events.stream(session.id):
    if event.type == "output.message.delta":
        delta = event.data.get("delta", "")
        streamed_chunks.append(delta)
        print(delta, end="", flush=True)
    elif event.type == "output.message.completed":
        message = event.data.get("message", {})
        completed_text = "\n".join(
            part.get("text", "")
            for part in message.get("content", [])
            if part.get("type") == "text"
        ).strip()
    elif event.type == "turn.completed":
        break
    elif event.type == "turn.failed":
        raise RuntimeError(event.data.get("error", "turn failed"))
result = {
    "base_url": BASE_URL,
    "agent_id": agent.id,
    "session_id": session.id,
    "response": "".join(streamed_chunks).strip() or completed_text or "",
}
result
- Everruns runs durable agent workflows behind a stable API and session model.
- It keeps agent state, tool execution, and event streams together so retries stay predictable.
- Product teams use it when they want streaming, durable agents without rebuilding the control plane themselves.
{'base_url': 'https://app.everruns.com/api',
 'agent_id': 'agent_0196831a1f9c7a0f8a5c9ab3d7d0f201',
 'session_id': 'session_0196831a20837ddbb05fd8f6d0c8f64a',
 'response': '- Everruns runs durable agent workflows behind a stable API and session model.\n- It keeps agent state, tool execution, and event streams together so retries stay predictable.\n- Product teams use it when they want streaming, durable agents without rebuilding the control plane themselves.'}
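The event-handling loop above can also be mirrored as a plain function, which is handy for unit testing the assembly logic without a live session. This is a notebook-local sketch that assumes the event shapes shown in the loop (assemble_response and the dict-based events are stand-ins, not SDK types):

```python
def assemble_response(events: list[dict]) -> str:
    """Fold {type, data} event dicts into the final response text,
    preferring streamed deltas and falling back to the completed message."""
    chunks: list[str] = []
    completed = ""
    for event in events:
        if event["type"] == "output.message.delta":
            chunks.append(event["data"].get("delta", ""))
        elif event["type"] == "output.message.completed":
            message = event["data"].get("message", {})
            completed = "\n".join(
                part.get("text", "")
                for part in message.get("content", [])
                if part.get("type") == "text"
            ).strip()
        elif event["type"] == "turn.completed":
            break
        elif event["type"] == "turn.failed":
            raise RuntimeError(event["data"].get("error", "turn failed"))
    return "".join(chunks).strip() or completed
```

Keeping the deltas as the primary source and the completed message as a fallback matches the result dict above: streaming output wins when present, and the completed payload covers runs where no deltas arrived.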
Next steps
- Change the prompt and rerun the last three cells
- Swap the message text for your own task
- Move the same flow into a script or service once the notebook feels right