LLM-based Evaluators should return `meta` information provided by OpenAIGenerator
Is your feature request related to a problem? Please describe.
To calculate the cost of an evaluation, we need a token count. Most generator components in Haystack provide that information in the `meta` field.
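For illustration, a minimal sketch of how that `meta` information could feed a cost calculation. The `usage` dict shape mirrors what OpenAIGenerator attaches to each reply's metadata; the helper function and the per-1k-token prices are illustrative placeholders, not real rates or part of Haystack's API:

```python
def cost_from_meta(meta: list[dict],
                   price_per_1k_prompt: float,
                   price_per_1k_completion: float) -> float:
    """Sum token costs over the meta entries of a generator result.

    Hypothetical helper: each entry is assumed to carry a `usage` dict
    like the one OpenAIGenerator returns per reply.
    """
    total = 0.0
    for entry in meta:
        usage = entry.get("usage", {})
        total += usage.get("prompt_tokens", 0) / 1000 * price_per_1k_prompt
        total += usage.get("completion_tokens", 0) / 1000 * price_per_1k_completion
    return total


# Example meta entry, shaped like a generator reply's metadata
# (model name and token counts are made up for the example):
meta = [{"model": "gpt-4o-mini",
         "usage": {"prompt_tokens": 1000,
                   "completion_tokens": 500,
                   "total_tokens": 1500}}]
cost = cost_from_meta(meta, price_per_1k_prompt=0.15, price_per_1k_completion=0.60)
```

If evaluators passed this `meta` through, a monitoring integration could run exactly this kind of aggregation over an evaluation pipeline's results.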
Describe the solution you'd like
LLM-based evaluators should return token count information with the result. This would be handy for connecting evaluation pipelines to monitoring tools such as Langfuse.
Describe alternatives you've considered
Leave it as it is.
Additional context
N/A