agentops
Python SDK for agent monitoring, LLM cost tracking, benchmarking, and more. Integrates with most LLMs and agent frameworks like CrewAI, Langchain, and Autogen

AI agents suck. We're fixing that.
🐦 Twitter • 📢 Discord • 🖇️ AgentOps • 📙 Documentation
AgentOps
Build your next agent with benchmarks, observability, and replay analytics. AgentOps is the toolkit for evaluating and developing robust and reliable AI agents.
AgentOps is in open beta. You can sign up for AgentOps here.
Quick Start ⌨️
pip install agentops
Session replays in 3 lines of code
Initialize the AgentOps client to automatically get analytics on every LLM call.
import agentops
from agentops import record_function

# Beginning of program's code (i.e. main.py, __init__.py)
ao_client = agentops.Client('<INSERT YOUR API KEY HERE>')
...
# (optional: record specific functions)
@record_function('sample function being recorded')
def sample_function(...):
    ...
# End of program
ao_client.end_session('Success')
# Woohoo, you're done 🎉
Refer to our API documentation for detailed instructions.
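Sessions can also be closed with an end state other than 'Success'. Below is a minimal failure-aware sketch, assuming the client accepts a tags argument at initialization and a 'Fail' end state (both are assumptions here; check the API documentation for the exact signature):

import agentops

# Hypothetical setup: tag the session at init so runs are easy to filter in the dashboard.
# `tags` and the 'Fail' end state are assumptions; verify against the API documentation.
ao_client = agentops.Client('<INSERT YOUR API KEY HERE>', tags=['quickstart'])

try:
    run_agent()  # placeholder for your agent's entry point
except Exception:
    ao_client.end_session('Fail')     # close the session as a failure
    raise
else:
    ao_client.end_session('Success')  # close the session as a success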
Time travel debugging 🔮
(coming soon!)
Agent Arena 🥊
(coming soon!)
Evaluations Roadmap 🧭
| Platform | Dashboard | Evals |
| --- | --- | --- |
| ✅ Python SDK | ✅ Multi-session and cross-session metrics | ✅ Custom eval metrics |
| 🚧 Evaluation builder API | ✅ Custom event tag tracking | 🔜 Agent scorecards |
| ✅ Javascript/Typescript SDK | ✅ Session replays | 🔜 Evaluation playground + leaderboard |
Debugging Roadmap 🧭
| Performance testing | Environments | LLM Testing | Reasoning and execution testing |
| --- | --- | --- | --- |
| ✅ Event latency analysis | 🔜 Non-stationary environment testing | 🔜 LLM non-deterministic function detection | 🚧 Infinite loops and recursive thought detection |
| ✅ Agent workflow execution pricing | 🔜 Multi-modal environments | 🚧 Token limit overflow flags | 🔜 Faulty reasoning detection |
| 🚧 Success validators (external) | 🔜 Execution containers | 🔜 Context limit overflow flags | 🔜 Generative code validators |
| 🔜 Agent controllers/skill tests | ✅ Honeypot and prompt injection detection (PromptArmor) | 🔜 API bill tracking | 🔜 Error breakpoint analysis |
| 🔜 Information context constraint testing | 🔜 Anti-agent roadblocks (i.e. Captchas) | 🔜 CI/CD integration checks | |
| 🔜 Regression testing | 🔜 Multi-agent framework visualization | | |
Callback handlers ↩️
Langchain
AgentOps works seamlessly with applications built using Langchain. To use the handler, install Langchain as an optional dependency:
pip install agentops[langchain]
Then import the handler and pass it as a callback to your LLM and agent:
import os
from langchain.chat_models import ChatOpenAI
from langchain.agents import initialize_agent, AgentType
from agentops.langchain_callback_handler import LangchainCallbackHandler

AGENTOPS_API_KEY = os.environ['AGENTOPS_API_KEY']
OPENAI_API_KEY = os.environ['OPENAI_API_KEY']

handler = LangchainCallbackHandler(api_key=AGENTOPS_API_KEY, tags=['Langchain Example'])

llm = ChatOpenAI(openai_api_key=OPENAI_API_KEY,
                 callbacks=[handler],
                 model='gpt-3.5-turbo')

# `tools` is the list of Langchain tools your agent uses, defined elsewhere
agent = initialize_agent(tools,
                         llm,
                         agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
                         verbose=True,
                         callbacks=[handler],  # You must pass in a callback handler to record your agent
                         handle_parsing_errors=True)
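Once the handler is attached to both the LLM and the agent, each run is recorded automatically. A minimal usage sketch (assuming `tools` above is the list of Langchain tools you defined for your agent):

# The callback handler attached above reports this run to AgentOps.
result = agent.run("What should I evaluate my agent on?")
print(result)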
Check out the Langchain Examples Notebook for more details including Async handlers.
LlamaIndex
(Coming Soon)
Why AgentOps? 🤔
Our mission is to bring your agent from prototype to production.
Agent developers often work with little to no visibility into agent testing performance. This means their agents never leave the lab. We're changing that.
AgentOps is the easiest way to evaluate, grade, and test agents. Is there a feature you'd like to see AgentOps cover? Just raise it in the issues tab, and we'll work on adding it to the roadmap.