⚡ 2-minute setup

Get Started with AgentMolt

Add observability, cost control, and governance to your AI agents in 3 steps. No infrastructure changes required.

1. Choose your framework

AgentMolt works with any AI framework. Pick yours for tailored setup.

2. Install & configure

Install the SDK and add your API token.

```bash
pip install agentmolt
```

```python
import agentmolt
from openai import OpenAI

# Initialize AgentMolt (auto-patches OpenAI)
agentmolt.init(
    api_token="your-api-token",
    api_url="https://agent-control-panel-production.up.railway.app/api/v1"
)

# Use OpenAI as normal - AgentMolt tracks everything
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)

# Check your dashboard - you'll see the agent, costs, and trace!
```
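Hardcoding the token works for a first test, but in real projects you'll usually read it from the environment instead. A minimal sketch, assuming an environment variable named `AGENTMOLT_API_TOKEN` (that name is an assumption for illustration, not an official AgentMolt convention):

```python
import os

def load_api_token(env_var: str = "AGENTMOLT_API_TOKEN") -> str:
    # The env var name is an assumption - use whatever your deployment defines
    token = os.environ.get(env_var)
    if not token:
        raise RuntimeError(f"Set {env_var} before starting the agent")
    return token

# Then pass it to init instead of a literal string:
# agentmolt.init(api_token=load_api_token(), api_url="https://...")
```

This keeps the secret out of source control and lets each environment (dev, CI, prod) supply its own token.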
3. Verify connection

Run your agent and check that data flows to AgentMolt.

```bash
# Run your script
python my_agent.py

# Or verify with the CLI
agentmolt doctor
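The CLI check is the quickest route. If you want a programmatic probe instead, a minimal standard-library sketch like this can confirm the API host is reachable before your agent starts (it assumes the host answers HTTP at the base URL; any status code, even 401 or 404, counts as reachable):

```python
import urllib.request
import urllib.error

def api_reachable(base_url: str, timeout: float = 5.0) -> bool:
    """Return True if the API host answers at all; False if it can't be reached."""
    try:
        urllib.request.urlopen(base_url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        # The server responded (e.g. 401 without a token), so the host is up
        return True
    except (urllib.error.URLError, TimeoutError):
        return False
```

This only verifies network reachability, not that your token is valid - use the dashboard or `agentmolt doctor` for that.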

What AgentMolt tracks automatically

- 📊 Every LLM Call: tokens, cost, latency, and model, per agent and per request
- 🔧 Tool Usage: which tools each agent calls, success/fail rates, timing
- 💰 Cost in Real-Time: per-agent spend with budgets, alerts, and kill switches
- 🛡️ Security Events: policy violations, blocked actions, audit trail
- 📈 Behavioral Drift: detection of when agent behavior diverges from its baseline
- 🎯 SLO Compliance: error budgets and burn rates per agent
- 🔍 Full Traces: waterfall view of every step in multi-agent workflows
- ⚡ Live Telemetry: real-time streaming metrics dashboard
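Per-request cost tracking like the above ultimately reduces to token counts multiplied by per-token rates. A minimal sketch of that arithmetic, using illustrative rates (the numbers below are assumptions for the example, not official AgentMolt or OpenAI pricing):

```python
# Illustrative USD rates per million tokens - placeholder values, not real pricing
PRICES_PER_MTOK = {
    "gpt-4o": {"input": 2.50, "output": 10.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of one LLM call from its token usage."""
    rates = PRICES_PER_MTOK[model]
    return (input_tokens * rates["input"]
            + output_tokens * rates["output"]) / 1_000_000

# 1,000 input + 500 output tokens at the rates above
print(request_cost("gpt-4o", 1_000, 500))
```

Summing this per agent over a time window is what makes budgets, alerts, and kill switches possible.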

Need help? SDK Docs · Discord Community · API Reference