🐛 Bug Report: Requirements for Langchain example
Which component is this bug for?
Langchain Instrumentation
📜 Description
When using the default `pyproject.toml` generated by `langchain app new ...`, Langchain instrumentation does not occur.
👟 Reproduction steps
```shell
langchain app new chat
```
This results in the following `pyproject.toml`:

```toml
[tool.poetry]
name = "chat"
version = "0.1.0"
description = ""
authors = ["Your Name <[email protected]>"]
readme = "README.md"
packages = [
    { include = "app" },
]

[tool.poetry.dependencies]
python = "^3.11"
uvicorn = "^0.23.2"
langserve = {extras = ["server"], version = ">=0.0.30"}
pydantic = "<2"

[tool.poetry.group.dev.dependencies]
langchain-cli = ">=0.0.15"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
```
👍 Expected behavior
The docs should describe what else needs to be added to get full instrumentation for a Langchain example.
👎 Actual Behavior with Screenshots
This results in only the LLM being traced.
🤖 Python Version
3.11
📃 Provide any additional context for the Bug.
It looks like the following modules need to be added to get a full trace:

- `langchain`
- `opentelemetry-instrumentation-fastapi`
In the langchain example described in https://github.com/traceloop/openllmetry/pull/1043, `langchain` is explicitly added, and `opentelemetry-instrumentation-fastapi` is pulled in transitively by `chromadb`.
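As a minimal sketch, the `[tool.poetry.dependencies]` section of the generated `pyproject.toml` would grow to something like the following (the version constraints on the added lines are illustrative assumptions, not taken from the docs):

```toml
[tool.poetry.dependencies]
python = "^3.11"
uvicorn = "^0.23.2"
langserve = {extras = ["server"], version = ">=0.0.30"}
pydantic = "<2"
# Added so the chain itself is traced, not just the LLM call
# (illustrative constraint, adjust as needed):
langchain = ">=0.1.0"
# Added so the FastAPI endpoints (e.g. /invoke) can be instrumented:
opentelemetry-instrumentation-fastapi = "*"
```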
👀 Have you spent some time to check if this bug has been raised before?
- [X] I checked and didn't find a similar issue
Are you willing to submit PR?
None
@nirga let me know if you need anything else.
Also, should I see the FastAPI endpoint `/invoke` in the trace?
Thanks, Damian.
Thanks @damianoneill! Will try to reproduce this.
For FastAPI - you'll need to add the FastAPI instrumentation. Are you using our SDK?
Morning @nirga, I'm not sure what you mean about the SDK. Is there something other than the below that I should be doing?
```python
import logging

from traceloop.sdk import Traceloop

logger = logging.getLogger(__name__)

try:
    Traceloop.init(
        app_name="Langchain Chatbot Application",
        api_endpoint="http://localhost:4318",  # HTTP endpoint for the OpenTelemetry (Jaeger) collector
        disable_batch=True,
    )
except Exception as e:  # pylint: disable=broad-except
    logger.error("Failed to initialize Traceloop: %s", e)
```
No, I meant that we don't instrument FastAPI currently, so you should do it yourself after initializing Traceloop. This is really easy:
```python
import logging

import fastapi
from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor
from traceloop.sdk import Traceloop

logger = logging.getLogger(__name__)

app = fastapi.FastAPI()

try:
    Traceloop.init(
        app_name="Langchain Chatbot Application",
        api_endpoint="http://localhost:4318",  # HTTP endpoint for the OpenTelemetry (Jaeger) collector
        disable_batch=True,
    )
    FastAPIInstrumentor.instrument_app(app)
except Exception as e:  # pylint: disable=broad-except
    logger.error("Failed to initialize Traceloop: %s", e)


@app.get("/foobar")
async def foobar():
    return {"message": "hello world"}
```
Hey @nirga ,
I started a local Docker instance of `jaegertracing/all-in-one:1.57` to check out openllmetry locally, and I get `Failed to export batch code: 404, reason: 404 page not found` with the configuration suggested above.
I initialize Traceloop as you mentioned above:

```python
Traceloop.init(
    app_name="Langchain Chatbot Application",
    api_endpoint="http://localhost:4318",
    disable_batch=True,
)
```
I also tried `api_endpoint="http://localhost:4318/v1/traces"`, with the same result. (Posting directly to Jaeger works.)
I wonder if there's a way to debug Traceloop, as I'm not sure it's sending the correct HTTP call.
Best, Asaf.
Hey @asaf, I'm going to delete this comment to avoid clutter in this issue as this is not the place to report new unrelated issues. I would love to assist you - you're welcome to send a message on slack, or open a separate issue / discussion here on GitHub.
Thanks!
Thanks @nirga. Since this is not a feature / bug but rather more of a question, I'll convert this into a discussion - thanks.