
Running evaluation on dataset outputs


Feature request

It would be great to be able to generate a dataset with outputs and then run evaluation directly on these "reference outputs".

Motivation

While building out a LangSmith evaluation pipeline, you'll likely need several iterations on your evaluation metrics to tune them as desired. If each iteration also requires regenerating the output examples, you end up spending a lot of tokens on generation that is otherwise reusable.

I've seen the compute_test_metrics beta function from the cookbooks, which achieves a similar result; however, it adds on top of existing tests rather than letting you run directly on a created/imported dataset.

Thanks!

chasemcdo avatar Mar 23 '24 15:03 chasemcdo

We are working on something in this vein, but want to make sure we satisfy your use case: could you elaborate on this a bit more?

Is the flow something like: first create a set of inputs, generate candidate outputs, manually review and revise, then continue iterating?

Or is it more a case where ground truth isn't super meaningful and you mainly want to compute relative performance to some baseline that you may update over time?

Or something different?

hinthornw avatar Mar 23 '24 19:03 hinthornw

Closest to the second one. In the work I'm currently doing there is no ground truth. The flow, at least as I imagined it, is something like this:

  • Follow the normal dataset creation process, but with "outputs" that aren't a reference/ground truth but rather the thing to be evaluated
  • Run evaluation directly on that dataset's outputs (see the sketch below)
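
A minimal sketch of that flow with the langsmith Python SDK, under some assumptions: the dataset name "precomputed-outputs", the example content, the passthrough_target helper, and the length_check evaluator are all hypothetical placeholders, and the pass-through target is a workaround rather than a built-in feature.

```python
from langsmith import Client
from langsmith.evaluation import evaluate

client = Client()

# 1. Create a dataset where "outputs" are the generations to be judged,
#    not a ground-truth reference.
dataset = client.create_dataset(dataset_name="precomputed-outputs")
client.create_examples(
    inputs=[{"question": "What is LangSmith?"}],
    outputs=[{"answer": "LangSmith is a platform for tracing and evaluating LLM apps."}],
    dataset_id=dataset.id,
)

# 2. Build a lookup so the target can replay the stored output verbatim
#    instead of calling a model again.
stored = {
    ex.inputs["question"]: ex.outputs
    for ex in client.list_examples(dataset_id=dataset.id)
}

def passthrough_target(inputs: dict) -> dict:
    # No LLM call: return the output already saved in the dataset.
    return stored[inputs["question"]]

def length_check(run, example) -> dict:
    # Toy evaluator; in practice this is the metric being tuned.
    answer = run.outputs.get("answer", "")
    return {"key": "answer_length_ok", "score": int(len(answer) > 10)}

# 3. Each evaluator tweak re-runs only the scoring step against the same outputs.
evaluate(passthrough_target, data="precomputed-outputs", evaluators=[length_check])
```

With this setup, iterating on the evaluator never regenerates the outputs, so the scored examples stay fixed across iterations.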

The primary motivation is saving money and inference time when iterating on LangSmith evaluators. I've found myself making several tweaks to the evaluators I've set up to make sure they align with my expectations, but each evaluator iteration requires regenerating the outputs to be evaluated, which costs extra and changes the outputs that specific tweaks were meant to address.

So the specific use case is having a set of inputs/outputs which I want to use to essentially tune my evaluators.

chasemcdo avatar Mar 23 '24 22:03 chasemcdo

Seems like this? https://docs.smith.langchain.com/how_to_guides/evaluation/upload_existing_experiments
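
Relatedly, a minimal sketch assuming a recent langsmith release that provides evaluate_existing; the experiment name "my-experiment-name" and the concision evaluator are hypothetical placeholders. It applies evaluators to an experiment whose outputs already exist, so nothing is regenerated.

```python
from langsmith.evaluation import evaluate_existing

def concision(run, example) -> dict:
    # Toy evaluator applied to outputs that were generated earlier.
    answer = run.outputs.get("answer", "")
    return {"key": "concise", "score": int(len(answer) < 200)}

# Re-scores the stored outputs of an existing experiment without
# re-running the target, so evaluator tweaks cost no extra generations.
evaluate_existing("my-experiment-name", evaluators=[concision])
```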

hinthornw avatar Sep 06 '24 23:09 hinthornw