API Reference

Base URL: http://localhost:8000

Core

GET /

Root endpoint providing API information and service status.

json
{
  "service": "laddr API",
  "version": "0.9.2",
  "status": "running",
  "dashboard": "http://localhost:5173",
  "docs": "/docs"
}

Health

GET /api/health

Health check for the API and its dependencies (message queue and database).

json
{
  "status": "healthy",
  "queue": true,
  "database": true,
  "version": "x.y.z"
}
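
A minimal probe (a sketch, assuming the defaults above; per Error Handling below, a 503 with a detail message is returned when a dependency is down):

bash
# Fail (non-zero exit) if the API or one of its dependencies is unhealthy
curl -sf http://localhost:8000/api/health | jq .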

Submit Job (Legacy)

POST /api/jobs

Submit a job for execution. Legacy endpoint; prefer the Prompts API below for new integrations.

Content-Type: application/json
{
  "pipeline_name": "coordinator",
  "inputs": {"topic": "Latest AI agents"}
}

json
{
  "job_id": "...",
  "status": "success",
  "result": {"...": "..."},
  "error": null,
  "duration_ms": 1234,
  "agent": "coordinator"
}
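
Example request (a sketch using the request body shown above):

bash
curl -X POST http://localhost:8000/api/jobs \
  -H "Content-Type: application/json" \
  -d '{"pipeline_name": "coordinator", "inputs": {"topic": "Latest AI agents"}}'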

Jobs (Legacy)

Get Job

GET /api/jobs/{job_id}

Get job status and result.

json
{
  "job_id": "uuid",
  "status": "completed",
  "pipeline_name": "analyzer",
  "inputs": {"numbers": [1,2,3,4,5]},
  "outputs": {"sum": 15, "average": 3.0},
  "error": null,
  "created_at": "2025-11-03T16:40:04.630542Z",
  "completed_at": "2025-11-03T16:40:05.234720Z",
  "token_usage": {
    "prompt_tokens": 100,
    "completion_tokens": 50,
    "total_tokens": 150
  }
}
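
For example, to check only the status and outputs of a job (a sketch; <JOB_ID> is a placeholder for an id returned by a submit call):

bash
curl -s http://localhost:8000/api/jobs/<JOB_ID> | jq '{status, outputs, error}'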

List Jobs

GET /api/jobs?limit=50&offset=0

List recent jobs. The offset parameter may be ignored depending on the database backend.

json
{
  "jobs": [
    {
      "job_id": "uuid",
      "status": "completed",
      "pipeline_name": "writer",
      "created_at": "2025-11-03T16:40:07.310121Z",
      "completed_at": "2025-11-03T16:40:07.324541Z"
    }
  ],
  "limit": 50,
  "offset": 0
}
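
For example, to list the ten most recent jobs with their ids and statuses (a sketch):

bash
curl -s "http://localhost:8000/api/jobs?limit=10&offset=0" \
  | jq '.jobs[] | {job_id, status, pipeline_name}'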

Replay Job

POST /api/jobs/{job_id}/replay

Replay a specific job.

Content-Type: application/json
{
  "reexecute": false
}

json
{
  "job_id": "uuid (new or same)",
  "status": "completed",
  "result": "Job result",
  "replayed": true,
  "original_job_id": "original-uuid"
}
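
Example request (a sketch using the body shown above; <JOB_ID> is a placeholder):

bash
curl -X POST http://localhost:8000/api/jobs/<JOB_ID>/replay \
  -H "Content-Type: application/json" \
  -d '{"reexecute": false}'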

Prompts (Recommended)

POST /api/prompts

Submit a new prompt (non-blocking). Creates a prompt record and executes agent(s) in the background.

Content-Type: application/json
{
  "prompt_name": "writer",
  "inputs": {"task": "Write a haiku"},
  "mode": "single",                  // or "sequential"
  "agents": ["analyzer", "writer"]   // optional; used when mode is "sequential"
}

json
{
  "prompt_id": "uuid",
  "status": "running",
  "agent": "writer",
  "mode": "single",
  "agents": ["writer"]
}
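
Example single-agent submission, capturing the prompt_id for polling (a sketch; a sequential example appears under Request / Response Examples below):

bash
curl -s -X POST http://localhost:8000/api/prompts \
  -H "Content-Type: application/json" \
  -d '{"prompt_name": "writer", "inputs": {"task": "Write a haiku"}, "mode": "single"}' \
  | jq -r '.prompt_id'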

GET /api/prompts/{prompt_id}

Get prompt execution status and result.

json
{
  "prompt_id": "uuid",
  "status": "completed",
  "prompt_name": "writer",
  "inputs": {"task": "Write a haiku"},
  "outputs": {"result": "Haiku text..."},
  "error": null,
  "created_at": "2025-11-03T16:39:52.113651Z",
  "completed_at": "2025-11-03T16:39:54.234720Z",
  "token_usage": {
    "prompt_tokens": 150,
    "completion_tokens": 75,
    "total_tokens": 225
  }
}

GET /api/prompts?limit=50

List recent prompt executions.

json
{
  "prompts": [
    {
      "prompt_id": "uuid",
      "status": "completed",
      "prompt_name": "writer",
      "created_at": "2025-11-03T16:39:52.113651Z",
      "completed_at": "2025-11-03T16:39:54.234720Z"
    }
  ],
  "limit": 50
}

Agents

GET /api/agents

List registered agents with metadata.

json
{
  "agents": [
    {
      "name": "writer",
      "role": "Content Writer",
      "goal": "Generate high-quality content",
      "status": "active",
      "tools": ["format_json", "parse_csv"],
      "last_seen": "2025-11-03T16:30:00.000000Z"
    }
  ]
}
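
For example, to print just the registered agent names (a sketch):

bash
curl -s http://localhost:8000/api/agents | jq -r '.agents[].name'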

POST /api/agents/{agent_name}/chat

Send a message to a specific agent. Optionally wait for a synchronous response.

Content-Type: application/json
{
  "message": "Write a haiku about testing",
  "wait": true,
  "timeout": 30
}

json
// wait=true (successful)
{
  "task_id": "uuid",
  "status": "completed",
  "result": "Agent's response text or structured data",
  "agent": "writer"
}

// wait=false (async)
{
  "task_id": "uuid",
  "status": "submitted"
}

// timeout
{
  "task_id": "uuid",
  "status": "timeout",
  "message": "Agent did not respond in time"
}
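
For asynchronous use, submit with "wait": false, keep the returned task_id, and retrieve the result later, for example via the Response Resolution endpoint below (a sketch that assumes chat task ids can be resolved there; <TASK_ID> is a placeholder):

bash
# Fire-and-forget submission; returns {"task_id": "...", "status": "submitted"}
curl -s -X POST http://localhost:8000/api/agents/writer/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Write a haiku about testing", "wait": false}' \
  | jq -r '.task_id'

# Later, fetch the resolved response (see Response Resolution)
curl -s http://localhost:8000/api/responses/<TASK_ID>/resolved | jq .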

Traces

GET /api/traces?limit=100&job_id=&agent_name=

List trace events with optional filters (job_id, agent_name).

json
{
  "traces": [
    {
      "id": 510,
      "job_id": "d780d103-a860-4536-98a7-1deed7097cb9",
      "agent_name": "writer",
      "event_type": "task_error",
      "payload": {
        "error": "LLM generation failed: ...",
        "worker": "writer",
        "ended_at": "2025-11-03T16:40:07.323933Z"
      },
      "timestamp": "2025-11-03T16:40:07.324541Z"
    }
  ]
}
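
For example, to fetch the trace events for a single run (a sketch; <JOB_ID> is a placeholder):

bash
curl -s "http://localhost:8000/api/traces?limit=100&job_id=<JOB_ID>" \
  | jq '.traces[] | {event_type, agent_name, timestamp}'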

GET /api/traces/grouped?limit=50

Get traces grouped by job_id to view full multi-agent runs.

json
{
  "grouped_traces": [
    {
      "job_id": "d780d103-a860-4536-98a7-1deed7097cb9",
      "trace_count": 2,
      "agents": ["writer"],
      "start_time": "2025-11-03T16:40:07.315138Z",
      "end_time": "2025-11-03T16:40:07.324541Z",
      "traces": [
        {
          "id": 509,
          "job_id": "d780d103-a860-4536-98a7-1deed7097cb9",
          "agent_name": "writer",
          "event_type": "task_start",
          "payload": {...},
          "timestamp": "2025-11-03T16:40:07.315138Z"
        },
        {
          "id": 510,
          "job_id": "d780d103-a860-4536-98a7-1deed7097cb9",
          "agent_name": "writer",
          "event_type": "task_error",
          "payload": {...},
          "timestamp": "2025-11-03T16:40:07.324541Z"
        }
      ]
    }
  ]
}

GET /api/traces/{trace_id}

Get a single trace by ID.

json
{
  "id": 510,
  "job_id": "d780d103-a860-4536-98a7-1deed7097cb9",
  "agent_name": "writer",
  "event_type": "task_error",
  "payload": {
    "error": "LLM generation failed: openai package not installed",
    "worker": "writer",
    "ended_at": "2025-11-03T16:40:07.323933Z"
  },
  "timestamp": "2025-11-03T16:40:07.324541Z"
}

Response Resolution

GET /api/responses/{task_id}/resolved

Resolve and return a task response. If the payload was offloaded to storage (S3/MinIO), this endpoint fetches the full data.

json
// Inline response
{
  "task_id": "uuid",
  "offloaded": false,
  "pointer": null,
  "data": {
    "result": "Agent response data"
  }
}

// Offloaded response
{
  "task_id": "uuid",
  "offloaded": true,
  "pointer": {
    "bucket": "laddr",
    "key": "responses/task-uuid",
    "size_bytes": 524288
  },
  "data": {
    "result": "Large agent response data..."
  }
}

Metrics

GET /api/metrics

Get aggregated system metrics.

json
{
  "total_jobs": 16,
  "avg_latency_ms": 0,
  "active_agents_count": 5,
  "cache_hits": 0,
  "tool_calls": 0,
  "timestamp": "2025-11-03T16:40:30.429696Z"
}
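
For a rough live view, the endpoint can simply be polled on an interval (a sketch using watch, where available):

bash
# Refresh aggregated metrics every 5 seconds
watch -n 5 'curl -s http://localhost:8000/api/metrics | jq .'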

Error Handling

Common error responses and examples.

422 Unprocessable Entity

json
{
  "detail": [
    {
      "type": "missing",
      "loc": ["body", "prompt_name"],
      "msg": "Field required",
      "input": null
    }
  ]
}

404 Not Found

json
{
  "detail": "Resource not found"
}

500 Internal Server Error

json
{
  "detail": "Detailed error message"
}

503 Service Unavailable

json
{
  "detail": "Services unhealthy: bus=false, db=true"
}

Request / Response Examples

Sequential prompt workflow (example)

bash
# Submit sequential prompt
curl -X POST http://localhost:8000/api/prompts \
  -H "Content-Type: application/json" \
  -d '{
    "prompt_name": "data_report",
    "inputs": {
      "data": [100, 200, 300, 400, 500],
      "title": "Sales Analysis"
    },
    "mode": "sequential",
    "agents": ["analyzer", "writer"]
  }' | jq -r '.prompt_id'

# Then poll:
curl http://localhost:8000/api/prompts/<PROMPT_ID>
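
A simple polling loop that waits until the prompt is no longer running, then prints its outputs (a sketch, assuming the status values shown above):

bash
PROMPT_ID="<PROMPT_ID>"   # replace with the id returned by the submit call
until [ "$(curl -s http://localhost:8000/api/prompts/$PROMPT_ID | jq -r '.status')" != "running" ]; do
  sleep 2
done
curl -s http://localhost:8000/api/prompts/$PROMPT_ID | jq '.outputs'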

Agent chat example

bash
curl -X POST http://localhost:8000/api/agents/writer/chat \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Write a 3-sentence summary of machine learning",
    "wait": true,
    "timeout": 30
  }'

Resolve async response example

bash
# Retrieve resolved response for a task_id
curl http://localhost:8000/api/responses/<TASK_ID>/resolved