Learn how to scale Laddr agents horizontally and deploy to production environments.

Documentation Index
Fetch the complete documentation index at: https://laddr.agnetlabs.com/llms.txt
Use this file to discover all available pages before exploring further.
Horizontal Scaling
Scale Workers
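Assuming an agent named researcher (the same name used in the Troubleshooting section below), a sketch with the laddr CLI:

```shell
# Scale the researcher agent to 10 workers
laddr scale researcher 10

# Confirm the new workers are picking up tasks
laddr logs researcher --tail 100
```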
Scale agent workers to handle increased load.

Docker Compose Scaling
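If workers run as a Compose service, Docker Compose's --scale flag starts extra replicas; the service name below is an assumption about your compose file, so adjust it to match yours:

```shell
# Run 4 replicas of the (hypothetical) researcher worker service
docker compose up -d --scale researcher=4
```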
Scale workers using Docker Compose.

Queue Backends
Redis (Development)
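One way to stand up Redis locally; the REDIS_URL variable name is an assumption, so check Laddr's configuration reference for the exact setting:

```shell
# Start a throwaway Redis container for development
docker run -d --name laddr-redis -p 6379:6379 redis:7

# Hypothetical setting -- adjust to whatever Laddr actually reads
export REDIS_URL=redis://localhost:6379/0
```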
Redis is a fast, lightweight queue backend suited to development.

Kafka (Production)
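A single-node broker is enough to try the Kafka backend locally; the bootstrap-server variable name is an assumption to verify against Laddr's configuration reference:

```shell
# Official single-node KRaft image; production clusters need real provisioning
docker run -d --name laddr-kafka -p 9092:9092 apache/kafka:latest

# Hypothetical setting -- check Laddr's docs for the exact name
export KAFKA_BOOTSTRAP_SERVERS=localhost:9092
```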
Kafka is a durable, scalable queue backend suited to production.

Memory (Testing)
The in-memory queue suits local testing; it requires no external services but does not persist tasks.

Database Configuration
PostgreSQL (Production)
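A sketch of pointing Laddr at PostgreSQL via a connection string; the DATABASE_URL name, credentials, and host are all illustrative assumptions:

```shell
# Hypothetical variable name and placeholder credentials
export DATABASE_URL=postgresql://laddr:secret@db-host:5432/laddr
```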
Use PostgreSQL for production deployments.

SQLite (Development)
SQLite is sufficient for local development.

Monitoring
Dashboard
Access the dashboard for real-time monitoring.

Metrics
Monitor key metrics:

- Queue Depth - Number of pending tasks
- Worker Utilization - Active workers vs idle
- Throughput - Tasks processed per second
- Error Rate - Failed tasks percentage
- Latency - Average task completion time
Logs
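Using the laddr CLI (the same command the Troubleshooting section below uses):

```shell
# Show the last 100 log lines for the researcher agent
laddr logs researcher --tail 100
```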
View and follow logs with the laddr logs command.

Production Deployment
Environment Variables
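An illustrative shell fragment; apart from the provider API key, every variable name here is an assumption to verify against Laddr's configuration reference:

```shell
export OPENAI_API_KEY=...                                   # LLM provider credentials
export DATABASE_URL=postgresql://laddr:secret@db:5432/laddr # hypothetical name
export LOG_LEVEL=info                                       # hypothetical name
```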
Configure the production environment with environment variables.

Health Checks
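A minimal liveness probe, assuming your deployment exposes an HTTP health endpoint (the port and /health path are assumptions):

```shell
# Non-zero exit marks the container unhealthy to the orchestrator
curl -fsS http://localhost:8000/health || exit 1
```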
Implement health checks so your orchestrator can detect and restart unhealthy workers.

Resource Limits
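A Compose-file sketch using the standard deploy.resources syntax; the service name and the numbers are assumptions to tune for your workload:

```yaml
# docker-compose.yml fragment
services:
  researcher:
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 1G
```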
Set appropriate CPU and memory limits for each worker.

Load Balancing
Worker Distribution
Kafka automatically distributes tasks across workers: each worker in a consumer group processes a subset of tasks.

Partition Strategy
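Since each partition is consumed by at most one worker in a consumer group, the partition count sets the ceiling on parallelism. With the standard Kafka CLI (the topic name is an assumption):

```shell
# 8 partitions => up to 8 workers consuming in parallel
kafka-topics.sh --create --topic laddr-tasks \
  --partitions 8 --replication-factor 3 \
  --bootstrap-server localhost:9092
```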
Configure Kafka partitions for better parallelism; the partition count caps how many workers can consume in parallel.

Performance Tuning
Worker Configuration
Optimize worker settings.

Database Connection Pooling
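A hedged sketch of pool settings; both variable names are assumptions modeled on common connection-pool options, so check Laddr's configuration reference for the real ones:

```shell
export DB_POOL_SIZE=10         # hypothetical: connections held open per worker
export DB_POOL_MAX_OVERFLOW=5  # hypothetical: extra connections under burst load
```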
Configure connection pooling so workers reuse database connections instead of opening one per task.

Troubleshooting
High Queue Depth
If queue depth is growing:

- Scale up workers: laddr scale researcher 10
- Check worker logs for errors
- Verify database/storage connectivity
- Check for slow tools or LLM calls
Worker Failures
If workers are failing:

- Check logs: laddr logs researcher --tail 100
- Verify API keys and credentials
- Check resource limits (CPU/memory)
- Review error messages in dashboard
Performance Issues
If performance is slow:

- Monitor dashboard metrics
- Check database query performance
- Review LLM response times
- Optimize tool implementations
- Consider caching strategies
Next Steps
- Local Runtime - Local development
- Storage & Artifacts - Configure storage
- Agent Configuration - Configure agents