feat: LLM-based evaluators return meta info from OpenAI
Related Issues
- fixes #7905
Proposed Changes:
Pass the `meta` entry from the OpenAI API response through to the result of all LLM-based evaluators. It contains the following fields:
- model
- prompt tokens
- answer tokens
- total tokens
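As a rough sketch of what this change does, the helper below copies the usage info from an OpenAI chat-completion response dict into a `meta` entry like the one described above. The function name and exact field names are assumptions for illustration, not the PR's actual code; they follow the field list above and the OpenAI response schema.

```python
def extract_usage_meta(openai_response: dict) -> dict:
    """Build an evaluator `meta` entry from an OpenAI completion response.

    Hypothetical helper: the real PR wires this through the evaluators,
    but the returned fields mirror the list above (model, prompt tokens,
    answer/completion tokens, total tokens).
    """
    usage = openai_response.get("usage", {})
    return {
        "model": openai_response.get("model"),
        "prompt_tokens": usage.get("prompt_tokens"),
        "completion_tokens": usage.get("completion_tokens"),
        "total_tokens": usage.get("total_tokens"),
    }


# Minimal mocked response to show the resulting shape:
mock_response = {
    "model": "gpt-4o-mini",
    "usage": {"prompt_tokens": 120, "completion_tokens": 15, "total_tokens": 135},
}
meta = extract_usage_meta(mock_response)
print(meta["total_tokens"])  # 135
```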
How did you test it?
manual verification + local tests + end-to-end tests
Checklist
- I have read the contributors guidelines and the code of conduct
- I have updated the related issue with new insights and changes
- I added unit tests and updated the docstrings
- I've used one of the conventional commit types for my PR title: fix:, feat:, build:, chore:, ci:, docs:, style:, refactor:, perf:, test:
- I documented my code
- I ran pre-commit hooks and fixed any issue