
Agent Configuration

This guide explains how to configure and run your own agents using the laddr framework — including environment setup, LLM configuration, execution control, and runtime behavior.

Agent Definition Structure

Every agent (e.g. researcher.py, writer.py, analyzer.py) defines a specialized worker that connects to laddr’s orchestration layer.

Standard structure:

```python
from __future__ import annotations

import asyncio

from dotenv import load_dotenv

from laddr import Agent, WorkerRunner
from laddr.llms import openai
from tools.web_tools import web_search, scrape_url, extract_links

load_dotenv()

TOOLS = [web_search, scrape_url, extract_links]

researcher = Agent(
    name="researcher",
    role="Web Research Specialist",
    goal="Search the web and summarize findings",
    backstory="You are a research-focused agent who finds accurate data online.",
    llm=openai(model="gpt-4o-mini", temperature=0.0),
    tools=TOOLS,
    max_retries=1,
    max_iterations=3,
    max_tool_calls=2,
    timeout=45,
    trace_enabled=True,
)

async def main():
    runner = WorkerRunner(agent=researcher)
    print("Starting researcher worker...")
    await runner.start()

if __name__ == "__main__":
    asyncio.run(main())
```

Required Sections:

  1. Import dependencies
  2. Load .env configuration
  3. Register tools
  4. Define the agent configuration
  5. Create an async entry point (WorkerRunner)

Environment Configuration

Agents use environment variables to control behavior, models, and external service keys.

```bash
# Queue and Database
QUEUE_BACKEND=redis
REDIS_URL=redis://localhost:6379
DB_BACKEND=sqlite
DATABASE_URL=sqlite:///./laddr.db

# LLM Configuration
LLM_PROVIDER=openai
LLM_MODEL=gpt-4o-mini
LLM_TEMPERATURE=0.0
OPENAI_API_KEY=sk-...

# Optional: per-agent models
RESEARCHER_MODEL=gpt-4o-mini
WRITER_MODEL=claude-3-5-sonnet-20241022
ANALYZER_MODEL=gemini-1.5-pro

# Tool API keys
SERPER_API_KEY=...
```
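Per-agent model variables such as `RESEARCHER_MODEL` can be resolved in the agent script before the LLM is constructed. The helper below is a hypothetical sketch of that lookup order (laddr may resolve these internally); the fallback chain and default model name are assumptions:

```python
import os

def model_for(agent_name: str) -> str:
    """Hypothetical helper: prefer a per-agent override such as
    RESEARCHER_MODEL, then the global LLM_MODEL, then a hard-coded default."""
    return os.getenv(f"{agent_name.upper()}_MODEL") or os.getenv("LLM_MODEL", "gpt-4o-mini")
```

An agent definition could then read `llm=openai(model=model_for("researcher"))` instead of hard-coding the model name.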

If no database service is available, laddr defaults to SQLite:

```bash
DB_BACKEND=sqlite
DATABASE_URL=sqlite:///./laddr.db
```

LLM Configuration

| Provider | Example | Best For |
| --- | --- | --- |
| OpenAI | `openai(model="gpt-4o-mini")` | Balanced reasoning, tool use |
| Gemini | `gemini(model="gemini-1.5-pro")` | Long context, large docs |
| Anthropic | `anthropic(model="claude-3-5-sonnet")` | Creative writing, deep reasoning |
| Groq | `groq(model="llama-3.3-70b-versatile")` | Speed and cost-efficiency |
| xAI Grok | `grok(model="grok-beta")` | Real-time and social data |

Example

```python
from laddr.llms import openai

llm = openai(model="gpt-4o-mini", temperature=0.0)
```

Temperature Recommendations:

| Agent Type | Temperature |
| --- | --- |
| Research/Analysis | 0.0 |
| Creative Writing | 0.7–0.9 |
| Support/Dialogue | 0.3–0.5 |
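When several agents are configured in one place, the table can be encoded directly. The mapping below is purely illustrative (the type names and midpoint values are assumptions, not laddr API):

```python
# Illustrative mapping of agent type to temperature, following the table above.
TEMPERATURE_BY_TYPE = {
    "research": 0.0,   # deterministic, factual output
    "creative": 0.8,   # midpoint of the 0.7-0.9 range
    "support": 0.4,    # midpoint of the 0.3-0.5 range
}

def temperature_for(agent_type: str, default: float = 0.0) -> float:
    """Look up a temperature for an agent type, falling back to a safe default."""
    return TEMPERATURE_BY_TYPE.get(agent_type, default)
```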

Core Agent Parameters

| Parameter | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | str | Unique agent identifier | Required |
| `role` | str | Role description for LLM | Required |
| `goal` | str | Task objective | Required |
| `backstory` | str | Context for consistent behavior | Required |
| `llm` | object | LLM configuration | Required |
| `tools` | list | Registered functions | `[]` |
| `max_retries` | int | Retry count for failed runs | 1 |
| `max_iterations` | int | Reasoning loops before stop | 3 |
| `max_tool_calls` | int | Tool call limit per task | 2 |
| `timeout` | int | Max runtime in seconds | 45 |
| `trace_enabled` | bool | Enables trace storage | True |
| `trace_mask` | list | Redacted fields in traces | `[]` |

Execution Control

Control runtime behavior and reliability using the following execution parameters.

| Parameter | Purpose | Typical Value | Notes |
| --- | --- | --- | --- |
| `max_retries` | Retry on transient errors (LLM/API) | 1–2 | Keeps reliability balanced |
| `max_iterations` | LLM reasoning loops | 3 | Prevents infinite loops |
| `max_tool_calls` | Tool invocations allowed | 2 | Limits API usage |
| `timeout` | Total time before abort | 45s | Graceful termination |
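Conceptually, `timeout` caps each attempt and `max_retries` bounds how many failed attempts are repeated. The sketch below illustrates that interaction only; it is not laddr's actual runtime:

```python
import asyncio

async def run_with_limits(task, max_retries: int = 1, timeout: float = 45.0):
    """Illustrative sketch of how timeout and retries compose (not laddr internals).

    Each attempt is capped by `timeout`; a timed-out or failed attempt is
    retried until the retry budget is exhausted, then the last error is raised.
    """
    last_error = None
    for _attempt in range(max_retries + 1):
        try:
            return await asyncio.wait_for(task(), timeout=timeout)
        except (asyncio.TimeoutError, RuntimeError) as exc:
            last_error = exc  # transient error or timeout: retry if budget remains
    raise last_error
```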

Tool Configuration

Tools extend agent capabilities. Each tool is a decorated Python function that exposes structured parameters.

```python
from typing import Dict

from laddr import tool  # tool decorator (import path may differ by laddr version)

@tool(
    name="web_search",
    description="Search the web for information",
    parameters={
        "type": "object",
        "properties": {
            "query": {"type": "string"},
            "max_results": {"type": "integer", "default": 5}
        },
        "required": ["query"]
    }
)
def web_search(query: str, max_results: int = 5) -> Dict:
    # Implementation
    ...
```

Usage Flow:

  • The LLM decides when to call tools
  • Tool calls and outputs are logged in traces
  • Each tool call consumes one max_tool_calls slot
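The "slot" accounting can be pictured as a simple counter that the runtime checks before each invocation. The class below is illustrative only, not laddr internals:

```python
class ToolCallBudget:
    """Illustrative counter for the max_tool_calls limit (not laddr's implementation)."""

    def __init__(self, max_tool_calls: int):
        self.remaining = max_tool_calls

    def consume(self) -> bool:
        """Return True and spend one slot if available, else False."""
        if self.remaining <= 0:
            return False
        self.remaining -= 1
        return True
```

With `max_tool_calls=2`, the third attempted call would be refused rather than sent to the tool.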

Example TOOLS list:

```python
TOOLS = [web_search, scrape_url, extract_links]
```

Tracing Configuration

Tracing logs all agent activity to the database for debugging and audit.

```python
trace_enabled=True
trace_mask=["api_key", "tool_result"]
```
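In effect, masked fields are redacted before a trace record is persisted. The function below is a minimal sketch of that idea; the record shape and the `[REDACTED]` placeholder are assumptions, not laddr's storage format:

```python
def mask_trace(record: dict, masked_fields: list) -> dict:
    """Return a copy of a trace record with the listed fields redacted."""
    return {key: ("[REDACTED]" if key in masked_fields else value)
            for key, value in record.items()}
```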

Logged Events:

  • Task start/end
  • LLM reasoning steps
  • Tool invocations
  • Errors or timeouts

View traces:

```bash
sqlite3 laddr.db "SELECT * FROM traces ORDER BY id DESC LIMIT 5;"
```

Best Practices:

  • Always enable tracing in production
  • Use masking for sensitive data
  • Rotate old traces periodically
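Rotation can be as simple as a scheduled delete. The snippet below assumes the default SQLite backend and a `traces` table with an integer `id` primary key (the column name is an assumption; check your schema first):

```python
import sqlite3

def prune_traces(db_path: str = "laddr.db", keep: int = 1000) -> None:
    """Delete all but the newest `keep` trace rows (schema assumed)."""
    conn = sqlite3.connect(db_path)
    with conn:  # commits on success
        conn.execute(
            "DELETE FROM traces WHERE id NOT IN "
            "(SELECT id FROM traces ORDER BY id DESC LIMIT ?)",
            (keep,),
        )
    conn.close()
```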

Worker Runtime

Each agent runs as an independent worker process.

```python
async def main():
    runner = WorkerRunner(agent=researcher)
    print("Starting researcher worker...")
    await runner.start()

if __name__ == "__main__":
    asyncio.run(main())
```

Run Locally:

```bash
python agents/researcher.py
```

Docker Compose example:

```yaml
services:
  researcher_worker:
    image: laddr
    command: python agents/researcher.py
    environment:
      - REDIS_URL=redis://redis:6379
      - DB_BACKEND=sqlite
      - DATABASE_URL=sqlite:///./laddr.db
```

Development

```python
max_retries=0
max_iterations=5
max_tool_calls=5
timeout=120
trace_enabled=True
```

Production

```python
max_retries=2
max_iterations=3
max_tool_calls=2
timeout=45
trace_enabled=True
trace_mask=["api_key"]
```

Monitoring & Debugging

| Check | Command |
| --- | --- |
| Active queues | `redis-cli XLEN laddr:tasks:researcher` |
| Worker registered | `redis-cli HGETALL laddr:agents` |
| Recent traces | `sqlite3 laddr.db "SELECT * FROM traces ORDER BY id DESC LIMIT 10;"` |
| Docker logs | `docker compose logs researcher_worker -f` |

  • Timeout symptoms: task ends early → increase `timeout`
  • Retry symptoms: repeated failures → raise `max_retries`
  • Incomplete results: too few iterations → raise `max_iterations`

Troubleshooting Quick Reference

| Problem | Likely Cause | Fix |
| --- | --- | --- |
| Agent not starting | Missing laddr package | `pip install -e .` |
| LLM errors | Missing API key | Set `OPENAI_API_KEY` or provider key |
| Tool calls failing | Missing `SERPER_API_KEY` | Add to `.env` |
| Frequent timeouts | Slow APIs | Increase `timeout` or reduce tool calls |
| Empty traces | Tracing disabled | Set `trace_enabled=True` |
| Delegation fails | No workers running | Start via `python agents/<name>.py` |
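Several of these failures can be caught before the worker even starts with a quick environment preflight. The required-key list below is an assumption for illustration; adjust it to your provider and tools:

```python
import os

def missing_env(required=("OPENAI_API_KEY", "REDIS_URL")) -> list:
    """Return the required environment variables that are unset or empty."""
    return [key for key in required if not os.getenv(key)]
```

Calling `missing_env()` at the top of `main()` and refusing to start if it returns anything turns a cryptic LLM or queue error into an explicit startup message.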

Agent Extension Example

```python
from laddr import Agent
from laddr.llms import openai
from tools.math_tools import calculate

analyzer = Agent(
    name="analyzer",
    role="Data Analyst",
    goal="Perform numerical analysis and return structured data",
    llm=openai(model="gpt-4o-mini", temperature=0.0),
    tools=[calculate],
    max_iterations=2,
    max_tool_calls=1,
    timeout=30,
    instructions="Use the calculate tool and return results as JSON.",
)
```

Run

```bash
python agents/analyzer.py
```

Quick Reference

```python
Agent(
    name="researcher",
    llm=openai(model="gpt-4o-mini"),
    tools=[web_search],
    max_retries=1,
    max_iterations=3,
    max_tool_calls=2,
    timeout=45,
    trace_enabled=True,
)
```

Environment Defaults:

```bash
DB_BACKEND=sqlite
DATABASE_URL=sqlite:///./laddr.db
QUEUE_BACKEND=redis
REDIS_URL=redis://redis:6379
```