API Reference
Base URL: http://localhost:8000
Core
GET / — Root endpoint providing API information and service status.
{
"service": "laddr API",
"version": "0.9.2",
"status": "running",
"dashboard": "http://localhost:5173",
"docs": "/docs"
}
Health
GET /api/health — Health check reporting queue and database connectivity.
{
"status": "healthy",
"queue": true,
"database": true,
"version": "x.y.z"
}
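A deploy script or load balancer can gate on this response. A minimal interpretation helper, assuming the field names shown in the example payload above (the helper name is illustrative):

```python
def is_healthy(health: dict) -> bool:
    """Return True only when the service and both of its backends
    (queue and database) report healthy, per the /api/health shape."""
    return (
        health.get("status") == "healthy"
        and health.get("queue") is True
        and health.get("database") is True
    )
```

Checking all three fields matters: the service can be up while the queue or database connection is down, which is also why the API returns 503 with per-service flags (see Error Handling below).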
Submit Job (legacy)
POST /api/jobs — Submit a job for execution (legacy; prefer the Prompts API below).
Content-Type: application/json
{
"pipeline_name": "coordinator",
"inputs": {"topic": "Latest AI agents"}
}
{
"job_id": "...",
"status": "success",
"result": {"...": "..."},
"error": null,
"duration_ms": 1234,
"agent": "coordinator"
}
Jobs (Legacy)
Get Job
GET /api/jobs/{job_id} — Get job status and result.
{
"job_id": "uuid",
"status": "completed",
"pipeline_name": "analyzer",
"inputs": {"numbers": [1,2,3,4,5]},
"outputs": {"sum": 15, "average": 3.0},
"error": null,
"created_at": "2025-11-03T16:40:04.630542Z",
"completed_at": "2025-11-03T16:40:05.234720Z",
"token_usage": {
"prompt_tokens": 100,
"completion_tokens": 50,
"total_tokens": 150
}
}
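The job record carries two ISO-8601 timestamps and a token_usage block, which is enough to build a one-line run summary client-side. A sketch assuming the field names above (the helper name is illustrative):

```python
from datetime import datetime


def summarize_job(job: dict) -> str:
    """One-line summary of a /api/jobs/{job_id} response; duration is
    derived from the created_at/completed_at ISO-8601 timestamps."""
    started = datetime.fromisoformat(job["created_at"].replace("Z", "+00:00"))
    ended = datetime.fromisoformat(job["completed_at"].replace("Z", "+00:00"))
    elapsed = (ended - started).total_seconds()
    tokens = job.get("token_usage", {}).get("total_tokens", 0)
    return f"{job['pipeline_name']}: {job['status']} in {elapsed:.2f}s, {tokens} tokens"
```

The `.replace("Z", "+00:00")` keeps the parse working on Python versions whose `fromisoformat` does not accept a trailing `Z`.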
List Jobs
GET /api/jobs?limit=50&offset=0 — List recent jobs (offset may be ignored depending on the database backend).
{
"jobs": [
{
"job_id": "uuid",
"status": "completed",
"pipeline_name": "writer",
"created_at": "2025-11-03T16:40:07.310121Z",
"completed_at": "2025-11-03T16:40:07.324541Z"
}
],
"limit": 50,
"offset": 0
}
Replay Job
POST /api/jobs/{job_id}/replay — Replay a specific job.
Content-Type: application/json
{
"reexecute": false
}
{
"job_id": "uuid (new or same)",
"status": "completed",
"result": "Job result",
"replayed": true,
"original_job_id": "original-uuid"
}
Prompts (Recommended)
POST /api/prompts — Submit a new prompt (non-blocking). Creates a prompt record and executes the agent(s) in the background.
Content-Type: application/json
{
"prompt_name": "writer",
"inputs": {"task": "Write a haiku"},
"mode": "single", // or "sequential"
"agents": ["analyzer", "writer"] // optional, for sequential mode
}
{
"prompt_id": "uuid",
"status": "running",
"agent": "writer",
"mode": "single",
"agents": ["writer"]
}
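Because submission returns immediately with status "running", clients poll GET /api/prompts/{prompt_id} until a terminal status. A sketch with the fetch function injected so it works with any HTTP client; the terminal-status set ("failed" is inferred from the `error` field) and the default interval are assumptions:

```python
import time

TERMINAL_STATUSES = {"completed", "failed"}  # assumed terminal states


def wait_for_prompt(prompt_id, fetch, interval=1.0, timeout=60.0):
    """Poll `fetch(prompt_id)` -- e.g. a GET to /api/prompts/{prompt_id}
    that returns the parsed JSON -- until a terminal status or deadline."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        prompt = fetch(prompt_id)
        if prompt.get("status") in TERMINAL_STATUSES:
            return prompt
        time.sleep(interval)
    raise TimeoutError(f"prompt {prompt_id} not finished after {timeout}s")
```

Injecting `fetch` also makes the loop trivial to unit-test with a stub that returns "running" a few times before "completed".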
GET /api/prompts/{prompt_id} — Get prompt execution status and result.
{
"prompt_id": "uuid",
"status": "completed",
"prompt_name": "writer",
"inputs": {"task": "Write a haiku"},
"outputs": {"result": "Haiku text..."},
"error": null,
"created_at": "2025-11-03T16:39:52.113651Z",
"completed_at": "2025-11-03T16:39:54.234720Z",
"token_usage": {
"prompt_tokens": 150,
"completion_tokens": 75,
"total_tokens": 225
}
}
GET /api/prompts?limit=50 — List recent prompt executions.
{
"prompts": [
{
"prompt_id": "uuid",
"status": "completed",
"prompt_name": "writer",
"created_at": "2025-11-03T16:39:52.113651Z",
"completed_at": "2025-11-03T16:39:54.234720Z"
}
],
"limit": 50
}
Agents
GET /api/agents — List registered agents with metadata.
{
"agents": [
{
"name": "writer",
"role": "Content Writer",
"goal": "Generate high-quality content",
"status": "active",
"tools": ["format_json", "parse_csv"],
"last_seen": "2025-11-03T16:30:00.000000Z"
}
]
}
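A client that routes work only to live agents can filter this list by the `status` field. A small sketch over the response shape above (the helper name is illustrative):

```python
def active_agents(agents_response: dict) -> list[str]:
    """Names of agents whose status is "active", from a /api/agents
    response (field names taken from the example payload)."""
    return [
        agent["name"]
        for agent in agents_response.get("agents", [])
        if agent.get("status") == "active"
    ]
```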
POST /api/agents/{agent_name}/chat — Send a message to a specific agent. Optionally wait for a synchronous response.
Content-Type: application/json
{
"message": "Write a haiku about testing",
"wait": true,
"timeout": 30
}
// wait=true (successful)
{
"task_id": "uuid",
"status": "completed",
"result": "Agent's response text or structured data",
"agent": "writer"
}
// wait=false (async)
{
"task_id": "uuid",
"status": "submitted"
}
// timeout
{
"task_id": "uuid",
"status": "timeout",
"message": "Agent did not respond in time"
}
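Since the chat endpoint has three response shapes, a caller should branch on `status` before touching `result`. A normalizing sketch over the shapes above (the helper name and the (done, payload) convention are assumptions):

```python
def handle_chat_reply(reply: dict):
    """Normalize the three chat response shapes into (done, payload):
    completed -> (True, result); submitted -> (False, task_id) so the
    caller can resolve it later; timeout raises so the caller can retry."""
    status = reply.get("status")
    if status == "completed":
        return True, reply.get("result")
    if status == "submitted":
        return False, reply["task_id"]
    if status == "timeout":
        raise TimeoutError(reply.get("message", "agent did not respond"))
    raise ValueError(f"unexpected chat status: {status!r}")
```

The `submitted` branch pairs naturally with GET /api/responses/{task_id}/resolved (see Response Resolution below) for fetching the eventual result.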
Traces
GET /api/traces?limit=100&job_id=&agent_name= — List trace events with optional filters (job_id, agent_name).
{
"traces": [
{
"id": 510,
"job_id": "d780d103-a860-4536-98a7-1deed7097cb9",
"agent_name": "writer",
"event_type": "task_error",
"payload": {
"error": "LLM generation failed: ...",
"worker": "writer",
"ended_at": "2025-11-03T16:40:07.323933Z"
},
"timestamp": "2025-11-03T16:40:07.324541Z"
}
]
}
GET /api/traces/grouped?limit=50 — Get traces grouped by job_id to view complete multi-agent runs.
{
"grouped_traces": [
{
"job_id": "d780d103-a860-4536-98a7-1deed7097cb9",
"trace_count": 2,
"agents": ["writer"],
"start_time": "2025-11-03T16:40:07.315138Z",
"end_time": "2025-11-03T16:40:07.324541Z",
"traces": [
{
"id": 509,
"job_id": "d780d103-a860-4536-98a7-1deed7097cb9",
"agent_name": "writer",
"event_type": "task_start",
"payload": {...},
"timestamp": "2025-11-03T16:40:07.315138Z"
},
{
"id": 510,
"job_id": "d780d103-a860-4536-98a7-1deed7097cb9",
"agent_name": "writer",
"event_type": "task_error",
"payload": {...},
"timestamp": "2025-11-03T16:40:07.324541Z"
}
]
}
]
}
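The grouped shape can also be reconstructed client-side from flat /api/traces events, which is useful when you already have a filtered trace list. A sketch mirroring the fields above (this is a client-side approximation, not the server's implementation):

```python
from collections import defaultdict


def group_traces(traces: list[dict]) -> list[dict]:
    """Group flat trace events by job_id into the /api/traces/grouped
    shape: count, agent set, first/last timestamps, ordered events."""
    by_job = defaultdict(list)
    for event in traces:
        by_job[event["job_id"]].append(event)

    grouped = []
    for job_id, events in by_job.items():
        # ISO-8601 timestamps in one format sort correctly as strings.
        events.sort(key=lambda e: e["timestamp"])
        grouped.append({
            "job_id": job_id,
            "trace_count": len(events),
            "agents": sorted({e["agent_name"] for e in events}),
            "start_time": events[0]["timestamp"],
            "end_time": events[-1]["timestamp"],
            "traces": events,
        })
    return grouped
```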
GET /api/traces/{trace_id} — Get a single trace by ID.
{
"id": 510,
"job_id": "d780d103-a860-4536-98a7-1deed7097cb9",
"agent_name": "writer",
"event_type": "task_error",
"payload": {
"error": "LLM generation failed: openai package not installed",
"worker": "writer",
"ended_at": "2025-11-03T16:40:07.323933Z"
},
"timestamp": "2025-11-03T16:40:07.324541Z"
}
Response Resolution
GET /api/responses/{task_id}/resolved — Resolve and return a task response. If the payload was offloaded to object storage (S3/MinIO), this endpoint fetches the full data.
// Inline response
{
"task_id": "uuid",
"offloaded": false,
"pointer": null,
"data": {
"result": "Agent response data"
}
}
// Offloaded response
{
"task_id": "uuid",
"offloaded": true,
"pointer": {
"bucket": "laddr",
"key": "responses/task-uuid",
"size_bytes": 524288
},
"data": {
"result": "Large agent response data..."
}
}
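In both shapes above the endpoint already inlines the payload in `data`; the `pointer` only records where an offloaded payload lived. A small accessor sketch (the helper name is illustrative):

```python
def resolved_data(resp: dict):
    """Return (data, pointer) from a /api/responses/{task_id}/resolved
    response. pointer is None unless the payload was offloaded, in which
    case it carries the bucket/key/size metadata shown above."""
    pointer = resp.get("pointer") if resp.get("offloaded") else None
    return resp["data"], pointer
```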
Metrics
GET /api/metrics — Get aggregated system metrics.
{
"total_jobs": 16,
"avg_latency_ms": 0,
"active_agents_count": 5,
"cache_hits": 0,
"tool_calls": 0,
"timestamp": "2025-11-03T16:40:30.429696Z"
}
Error Handling
Common error responses and examples.
422 Unprocessable Entity
{
"detail": [
{
"type": "missing",
"loc": ["body", "prompt_name"],
"msg": "Field required",
"input": null
}
]
}
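The 422 body follows the FastAPI/Pydantic validation-error shape: a list of entries whose `loc` is the path to the offending field. A sketch that flattens it into readable messages (the helper name is illustrative):

```python
def format_validation_errors(body: dict) -> list[str]:
    """Render 422 `detail` entries (shape as in the example above)
    into "path.to.field: message" strings."""
    return [
        f"{'.'.join(str(part) for part in err['loc'])}: {err['msg']}"
        for err in body.get("detail", [])
    ]
```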
404 Not Found
{
"detail": "Resource not found"
}
500 Internal Server Error
{
"detail": "Detailed error message"
}
503 Service Unavailable
{
"detail": "Services unhealthy: bus=false, db=true"
}
Request / Response Examples
Sequential prompt workflow (example)
# Submit sequential prompt
curl -X POST http://localhost:8000/api/prompts -H "Content-Type: application/json" -d '{
"prompt_name": "data_report",
"inputs": {
"data": [100, 200, 300, 400, 500],
"title": "Sales Analysis"
},
"mode": "sequential",
"agents": ["analyzer", "writer"]
}' | jq -r '.prompt_id'
# Then poll:
curl http://localhost:8000/api/prompts/<PROMPT_ID>
Agent chat example
curl -X POST http://localhost:8000/api/agents/writer/chat -H "Content-Type: application/json" -d '{
"message": "Write a 3-sentence summary of machine learning",
"wait": true,
"timeout": 30
}'
Resolve async response example
# Retrieve resolved response for a task_id
curl http://localhost:8000/api/responses/<TASK_ID>/resolved