Scalability
Queue-based architecture with horizontal scaling support.
Parallel Workers
Run multiple agent workers in parallel to handle high throughput:
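How workers are launched depends on your deployment; the minimal sketch below assumes a hypothetical `laddr.worker` module entrypoint, so adjust the command to match your installation.

```python
# Illustration only: start four identical worker processes that consume
# from the same task queue. The "laddr.worker" module path is an assumption.
import subprocess

workers = [
    subprocess.Popen(["python", "-m", "laddr.worker", "--agent", "researcher"])
    for _ in range(4)
]
for proc in workers:
    proc.wait()
```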
Queue Backends
Choose your message queue backend:
- Redis - Fast, lightweight (development)
- Kafka - Durable, scalable (production)
- Memory - In-memory (testing)
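As a sketch, backend selection might be expressed through environment variables; the variable names below are assumptions, so check the configuration reference for the documented ones.

```python
# Hypothetical environment-variable configuration; names are assumptions.
import os

os.environ["LADDR_QUEUE_BACKEND"] = "kafka"        # or "redis" / "memory"
os.environ["LADDR_KAFKA_BOOTSTRAP"] = "localhost:9092"
```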
Load Distribution
Kafka automatically distributes tasks across workers using consumer groups:
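The sketch below illustrates the underlying Kafka mechanism with the plain kafka-python client: every consumer that joins the same group is assigned a disjoint set of partitions, so each task reaches exactly one worker. The topic and group names are placeholders, and Laddr wires this up for you when Kafka is the backend.

```python
# Plain kafka-python illustration of consumer groups (pip install kafka-python).
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "agent-tasks",                      # placeholder topic name
    group_id="laddr-workers",           # workers in this group split partitions
    bootstrap_servers="localhost:9092",
)
for message in consumer:
    print(message.value)                # each task arrives at exactly one worker
```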
Observability
Complete visibility into agent execution with traces, metrics, and dashboards.
Real-Time Dashboard
Web-based dashboard at http://localhost:5173:
- Agent Traces - See every execution step
- Token Usage - Track LLM costs
- System Metrics - Monitor performance
- Job History - Review past executions
Structured Traces
Every agent execution is traced:
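As a rough sketch, a trace record might carry fields like the following; the field names are illustrative assumptions, not the exact schema.

```python
# Illustrative trace record; field names are assumptions, not the real schema.
trace_event = {
    "job_id": "job_123",
    "agent": "researcher",
    "step": "tool_call",
    "tool": "web_search",
    "duration_ms": 412,
    "tokens": {"prompt": 856, "completion": 124},
}
```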
Logging
Comprehensive logging at multiple levels:
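Since Laddr is a Python framework, standard-library `logging` configuration applies; the `laddr` logger name below is an assumption.

```python
# Standard-library logging setup; the "laddr" logger name is an assumption.
import logging

logging.basicConfig(level=logging.INFO)
logging.getLogger("laddr").setLevel(logging.DEBUG)  # verbose framework output
```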
Extensibility
Connect your own tools, APIs, and models with full control.
Custom Tools
Create tools with the @tool decorator:
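A minimal sketch of a custom tool; the import path for `@tool` is an assumption, so adjust it to match your installation.

```python
from laddr import tool  # import path is an assumption

@tool
def search_orders(customer_id: str) -> list[dict]:
    """Look up recent orders for a customer (stubbed for illustration)."""
    return [{"customer_id": customer_id, "order_id": "A-1001", "status": "shipped"}]
```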
MCP Integration
Connect to Model Context Protocol (MCP) servers:
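For orientation, here is how the reference MCP Python SDK connects to a stdio server and lists its tools; Laddr's own MCP wiring may expose this differently.

```python
# Reference MCP Python SDK sketch (pip install mcp); not Laddr-specific code.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def list_mcp_tools() -> None:
    params = StdioServerParameters(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([t.name for t in tools.tools])

asyncio.run(list_mcp_tools())
```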
System Tool Overrides
Customize delegation and storage behavior:
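Purely as a sketch, an override might register your own implementation under a built-in tool's name; the `name` parameter and the built-in tool name here are assumptions, not a confirmed API.

```python
# Hypothetical override: shadow a built-in storage tool with a custom one.
import os
from laddr import tool  # import path is an assumption

@tool(name="save_artifact")  # tool name and keyword are assumptions
def save_artifact(key: str, data: bytes) -> str:
    """Persist artifacts to a local path instead of the default store."""
    os.makedirs("/tmp/laddr-artifacts", exist_ok=True)
    path = f"/tmp/laddr-artifacts/{key}"
    with open(path, "wb") as f:
        f.write(data)
    return path
```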
Configurability
Flexible configuration for every component; a combined configuration sketch follows the lists below.
Storage Backends
Choose your storage backend:
- MinIO - S3-compatible object storage
- S3 - AWS S3
- Local - File system (development)
Database Options
- PostgreSQL - Production database
- SQLite - Local development
LLM Providers
Support for multiple LLM providers:
- OpenAI - GPT-4, GPT-3.5
- Anthropic - Claude
- Google - Gemini
- Ollama - Local models
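As a combined sketch of the storage, database, and provider choices above, configuration might be expressed through environment variables; every variable name below is an assumption, so consult the configuration reference for the documented settings.

```python
# Hypothetical settings; variable names are assumptions, not documented keys.
import os

os.environ.update({
    "LADDR_STORAGE_BACKEND": "minio",   # "minio" | "s3" | "local"
    "LADDR_DATABASE_URL": "postgresql://laddr:laddr@localhost:5432/laddr",
    "LADDR_LLM_PROVIDER": "anthropic",  # "openai" | "anthropic" | "google" | "ollama"
})
```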
APIs
REST API for integration with your systems.
Submit Jobs
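A sketch using the `requests` library; the base URL, `/jobs` path, and payload fields are assumptions about the API surface.

```python
# Hypothetical endpoint and fields; adjust to the actual API reference.
import requests

resp = requests.post(
    "http://localhost:8000/jobs",
    json={"agent": "researcher", "input": "Summarize recent AI news"},
)
job_id = resp.json()["job_id"]  # assumed response field
```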
Get Results
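Continuing the sketch, results would be polled by job ID; the endpoint and response fields remain assumptions.

```python
import requests

job_id = "job_123"  # as returned by the submit call above
result = requests.get(f"http://localhost:8000/jobs/{job_id}").json()
print(result["status"], result.get("output"))  # assumed fields
```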
Health Check
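And a liveness probe, assuming a conventional `/health` endpoint:

```python
import requests

assert requests.get("http://localhost:8000/health").ok  # assumed endpoint
```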
Agent Orchestration
Coordinate multiple agents working together.
Task Delegation
Agents can delegate tasks to other agents:
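A hypothetical sketch of what delegation could look like from inside a tool; `delegate_task` and its signature are assumptions, not a confirmed API.

```python
from laddr import tool, delegate_task  # both import paths are assumptions

@tool
def plan_research(topic: str) -> str:
    """Planner tool that hands detailed research to a specialist agent."""
    # delegate_task is an assumed system tool for agent-to-agent handoff
    return delegate_task(agent="researcher", task=topic)
```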
Parallel Execution
Run multiple tasks in parallel:
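As a generic illustration of the fan-out pattern with `asyncio` (Laddr's own parallel primitive may look different):

```python
# Generic asyncio fan-out; stand-in tasks simulate agent work.
import asyncio

async def run_task(name: str) -> str:
    await asyncio.sleep(0.1)  # placeholder for a real agent task
    return f"{name}: done"

async def main() -> None:
    results = await asyncio.gather(*(run_task(f"task-{i}") for i in range(3)))
    print(results)

asyncio.run(main())
```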
Workflow Patterns
Delegation and parallel execution compose into larger multi-step workflows.
Error Handling
Built-in retry logic and error recovery.
Automatic Retries
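To show the idea behind retry-with-backoff (the framework's actual retry policy and its configuration knobs are not reproduced here):

```python
# Generic exponential-backoff retry helper, for illustration only.
import time

def with_retries(fn, attempts: int = 3, base_delay: float = 1.0):
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise                               # out of attempts: surface the error
            time.sleep(base_delay * 2 ** attempt)   # 1s, 2s, 4s, ...
```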
Error Tracking
All errors are logged and traced.
Next Steps
- Installation - Set up Laddr
- First Agent - Build your first agent
- Agent Configuration - Configure agents
- Scaling & Operations - Production deployment