openinference
Auto-Instrumentation for AI Observability
This casing does not agree with the OpenInference spec or with the Python enum. References: https://github.com/Arize-ai/openinference/blob/main/spec/semantic_conventions.md?plain=1#L43 https://github.com/Arize-ai/openinference/blob/main/python/openinference-semantic-conventions/src/openinference/semconv/trace/__init__.py#L233-L241
Showcase the new instrumentation-based observability by adapting llama-index's notebook: https://github.com/run-llama/llama_index/blob/main/docs/docs/examples/instrumentation/instrumentation_observability_rundown.ipynb
Right now we only capture the tool description and parameters. We should also capture the tool call's input and output (IO).
https://platform.openai.com/docs/guides/function-calling
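The gap can be sketched in plain Python. The attribute names below are illustrative stand-ins, not the actual OpenInference semantic conventions — the point is only the distinction between the tool's static schema (captured today) and the per-call IO (what this issue asks for):

```python
# Hypothetical span attributes for a single function/tool call.
# Key names here are made up for illustration.

# What is captured today: the tool's static schema only.
tool_schema = {
    "tool.name": "get_weather",
    "tool.description": "Get the current weather for a city.",
    "tool.parameters": '{"type": "object", "properties": {"city": {"type": "string"}}}',
}

# What this issue asks for in addition: the actual call IO.
tool_io = {
    "tool_call.input": '{"city": "Tokyo"}',
    "tool_call.output": '{"temperature_c": 21, "condition": "clear"}',
}

# A complete span would carry both.
span_attributes = {**tool_schema, **tool_io}
```

With only `tool_schema` on the span, a trace tells you which tool *could* be called, but not what it was called with or what it returned.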
**Describe the bug** I have an instance of Arize Phoenix running in a Docker container. I've been using the [instrument.py](https://github.com/Arize-ai/openinference/blob/main/python/examples/llama-index/backend/instrument.py) example for tracing previously, and there were no problems. Today,...
Example here: https://github.com/albertpurnama/nextjs-phoenix The issues are that Next.js server-side code cannot use CommonJS, and `next/OTEL` is still experimental.
Okay, now that I've gotten the Python version of the LangChain instrumentation working, I wanted to get the same working for TypeScript, following https://docs.arize.com/arize/large-language-models/tracing/auto-instrumentation/langchain. I am testing...
We should use `.instrument(tracer_provider=tracer_provider)` in our examples instead of `trace_api.set_tracer_provider(tracer_provider)`, because users may be working in a (production) system where there is already an incumbent OTel tracer provider (and exporter)....