Latest context breakdown
GET /v1/sessions/{session_id}/context-report
Parameters
Path Parameters
session_id
required
string
Session ID (prefixed, e.g., session_…)
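As a sketch, the endpoint can be called like this. The base URL and bearer-token auth header are assumptions for illustration, not part of this reference:

```typescript
// Hypothetical base URL — replace with your deployment's host.
const BASE_URL = "https://api.example.com";

// Build the request URL; session IDs are prefixed strings, e.g. "session_...".
function contextReportUrl(sessionId: string): string {
  return `${BASE_URL}/v1/sessions/${encodeURIComponent(sessionId)}/context-report`;
}

// Fetch the latest context breakdown for a session.
// The auth scheme here is an assumption; use whatever your API requires.
async function getContextReport(sessionId: string, apiKey: string): Promise<unknown> {
  const res = await fetch(contextReportUrl(sessionId), {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) {
    // Error responses cover invalid session IDs, unknown sessions,
    // and internal server errors (see Responses below).
    throw new Error(`context-report request failed: ${res.status}`);
  }
  return res.json();
}
```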
Responses
Session context report
object
context_window_tokens
integer | null format: int32
contributions
required
Array<object>
object
label
required
string
section_key
required
string
source_id
required
string
tokens
required
integer format: int32
cumulative_usage
One of:
null
Token usage statistics
Tracks token consumption per LLM call, including cache tokens for cost optimization. Cache tokens are provider-specific:
- OpenAI: cache_read_tokens from prompt_tokens_details.cached_tokens
- Anthropic: cache_read_tokens from cache_read_input_tokens, cache_creation_tokens from cache_creation_input_tokens
object
cache_creation_tokens
Number of tokens written to cache (Anthropic-specific)
integer | null format: int32
cache_read_tokens
Number of tokens read from cache (reduces cost)
integer | null format: int32
input_tokens
required
Number of input/prompt tokens
integer format: int32
output_tokens
required
Number of output/completion tokens
integer format: int32
estimated_input_tokens
required
integer format: int32
model
required
string
sections
required
Array<object>
object
items
required
integer format: int32
key
required
string
label
required
string
tokens
required
integer format: int32
session_id
required
string
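Read back as a TypeScript interface, the success response looks like this. This is a non-normative sketch; field names and types mirror the schema above, and the contextUtilization helper is an illustrative addition, not part of the API:

```typescript
// Non-normative sketch of the success response shape.
interface ContextReport {
  session_id: string;
  model: string;
  context_window_tokens: number | null; // int32, null if unknown
  estimated_input_tokens: number;       // int32
  contributions: {
    label: string;
    section_key: string;
    source_id: string;
    tokens: number;
  }[];
  sections: {
    key: string;
    label: string;
    items: number;
    tokens: number;
  }[];
  cumulative_usage: {
    input_tokens: number;
    output_tokens: number;
    cache_read_tokens: number | null;
    cache_creation_tokens: number | null;
  } | null;
}

// Illustrative helper: how full is the context window?
// Returns null when the window size is unknown.
function contextUtilization(report: ContextReport): number | null {
  if (!report.context_window_tokens) return null;
  return report.estimated_input_tokens / report.context_window_tokens;
}
```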
Invalid session ID
Session not found
Internal server error
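The cumulative_usage cache fields can be used to estimate how much of the prompt was served from cache. A hedged sketch: provider accounting differs (OpenAI reports cached tokens as a subset of prompt tokens, while Anthropic bills cache reads and cache creations separately), and this example assumes the OpenAI-style subset accounting:

```typescript
// Minimal shape for the fields used here (names from cumulative_usage above).
interface Usage {
  input_tokens: number;
  cache_read_tokens: number | null;
}

// Fraction of input tokens served from cache — a rough proxy for prompt-cache
// savings. Assumes cache reads are counted inside input_tokens (OpenAI-style);
// for Anthropic-style accounting, add cache reads to the denominator instead.
function cacheHitRate(usage: Usage): number {
  const cached = usage.cache_read_tokens ?? 0;
  return usage.input_tokens > 0 ? Math.min(cached / usage.input_tokens, 1) : 0;
}
```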