fix(smolagents): fix missing llm input messages in smolagents
The smolagents instrumentation is no longer handling LLM input messages properly. This PR fixes that.
Hey @andstor, I am not able to reproduce the issue you were hitting on the latest version of the instrumentation and smolagents==1.21.2. Perhaps the issue has been resolved?
Hi @axiomofjoy, I just ran the same check and the problem persists. I opened an issue demonstrating the problem: https://github.com/Arize-ai/openinference/issues/2125.
Thanks @andstor. Do you have a small code snippet to reproduce the issue?
Hey @andstor, it looks like this change breaks our existing TestModels tests. The issue seems to be that the models used in those tests pass messages in dictionary format rather than the `smolagents.models.ChatMessage` format from the snippet you sent here.
I think it will be necessary to support both formats, with tests covering each.
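One way to support both formats is to normalize each message to a plain dict before extracting attributes. Below is a minimal sketch; the `ChatMessage` dataclass here is a stand-in that assumes the real `smolagents.models.ChatMessage` exposes `role` and `content` attributes, and `to_message_dict` is a hypothetical helper name, not an existing API:

```python
from dataclasses import dataclass
from typing import Any, Dict, List, Union

@dataclass
class ChatMessage:
    # Stand-in for smolagents.models.ChatMessage (assumption: the real
    # class exposes `role` and `content` attributes).
    role: str
    content: str

def to_message_dict(message: Union[ChatMessage, Dict[str, Any]]) -> Dict[str, Any]:
    """Normalize a message to dict form, accepting either format."""
    if isinstance(message, dict):
        # Dict-format messages (as used by the TestModels tests) pass through.
        return message
    # ChatMessage-format messages are converted to the same dict shape.
    return {"role": message.role, "content": message.content}

messages: List[Union[ChatMessage, Dict[str, Any]]] = [
    {"role": "system", "content": "You are helpful."},
    ChatMessage(role="user", content="Hello"),
]
normalized = [to_message_dict(m) for m in messages]
# Every entry is now a plain dict regardless of its original type.
```

The instrumentation could then read `role`/`content` from the normalized dicts in one code path instead of branching on message type at every attribute access.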
Hi @andstor ! let us know if you want any help getting this across the line. Thanks for your contribution!