
🗺️ Human / Programmatic Feedback

Open mikeldking opened this issue 1 year ago • 3 comments

Add the ability to add human feedback via an API call

In many applications, but even more so for LLM applications, it is important to collect user feedback to understand how your application performs in real-world scenarios. Observing user feedback alongside trace data makes it possible to drill down into the most interesting datapoints and send them on for further review, automatic evaluation, or inclusion in datasets.

Phoenix should make it easy to attach user feedback to traces. It's often helpful to expose a simple mechanism (such as thumbs-up / thumbs-down buttons) to collect user feedback on your application's responses. The Phoenix SDK or API should support sending this feedback.
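For illustration, here is a minimal stdlib-only sketch of what sending such feedback over REST could look like. The endpoint path (`/v1/span_annotations`) and the payload field names are illustrative assumptions, not an interface confirmed by this issue:

```python
import json
import urllib.request


def build_feedback_payload(span_id: str, thumbs_up: bool) -> dict:
    """Build a thumbs-up/down annotation for a single span.

    The field names here are hypothetical; the real schema would be
    defined by the server's annotation endpoint.
    """
    return {
        "span_id": span_id,
        "name": "user_feedback",
        "annotator_kind": "HUMAN",
        "result": {
            "label": "thumbs_up" if thumbs_up else "thumbs_down",
            "score": 1.0 if thumbs_up else 0.0,
        },
    }


def send_feedback(base_url: str, span_id: str, thumbs_up: bool) -> None:
    """POST the feedback to a hypothetical /v1/span_annotations endpoint."""
    body = json.dumps({"data": [build_feedback_payload(span_id, thumbs_up)]})
    req = urllib.request.Request(
        f"{base_url}/v1/span_annotations",
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        resp.read()  # raise on non-2xx via urllib's default error handling
```

In a real application the `span_id` would come from the active trace context at response time, so the UI button handler can call `send_feedback` with no extra lookups.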

Use-Cases

  • Creating an LLM judge to align with human feedback on a dataset

Milestone 1

  • As a user of the Phoenix UI I can add / edit / view / delete human annotations
  • As a user of Phoenix I can use annotations to drive dataset curation
  • As a developer I can send annotations on traces and spans

client

  • [x] https://github.com/Arize-ai/phoenix/issues/2518
  • [x] [client] add streaming single log endpoint for human feedback

REST

  • [x] #3826
  • [x] #3827
  • [x] #3828

GraphQL

  • [x] #3769
  • [x] #3770
  • [x] #3824
  • [x] #3815
  • [x] #3830
  • [x] #3829
  • [x] [gql][annotations] resolve trace annotations off of span
  • [x] #3913
  • [x] #3962
  • [x] #3963
  • [x] #3992
  • [x] #4028

Datasets

  • [x] #3912

UI

  • [x] #3894
  • [x] #3825
  • [ ] #3831
  • [x] #3995
  • [x] #3996
  • [x] #3997
  • [x] #3998
  • [x] #3999
  • [x] #4027
  • [x] #4000
  • [x] #4001
  • [x] #4029
  • [ ] #4030
  • [x] #4074
  • [x] [annotations][UI] show annotations under the span details, make refetchable
  • [ ] #4073
  • [x] #4076
  • [x] #4075
  • [x] [annotations][ui] column selector for annotations
  • [x] [annotations][ui] annotation summaries in the header
  • [x] #4077
  • [x] #4078
  • [x] #4142
  • [x] #4145

Documentation

  • [x] #3892
  • [x] #4134
  • [x] #4140
  • [ ] #4156
  • [ ] #4157
  • [ ] #4155
  • [ ] #4158
  • [ ] #4197

Client

  • [ ] #4178

Cleanup

  • [x] #4180

Readings

  • Human Feedback for RLHF (https://arxiv.org/pdf/2404.13895)

mikeldking avatar Mar 08 '24 18:03 mikeldking

It seems like you've got this under control. If you want help or have specific questions, let me know what I can do for you!


dosubot[bot] avatar Mar 08 '24 18:03 dosubot[bot]

@mikeldking besides human thumbs up/down, there are also other cases, like "normal" ragas usage, where you (re)run the same set of questions with ground truths as a baseline and then want to add the evaluation scores to the trace.

The main issue is that there is no way for e.g. the openinference callback to communicate the span_id or trace_id back to the framework client in any way (in our case llama-index). I guess the framework should provide a hook where e.g. the callback can set the ID, and then the client code can somehow retrieve it. I can guess the ID based on the response text and the timing, but that involves getting dataframes back from Phoenix and inspecting them with the sole purpose of finding the correct ID.

Once the ID is found, AFAIK I only have to construct a single-row evaluation dataframe and then do a log_evaluations.

stdweird avatar Mar 09 '24 16:03 stdweird
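The workaround stdweird describes (match the response text back to a span, then build a one-row evaluation record) can be sketched in stdlib Python. The matching heuristic and field names are illustrative assumptions; in practice the final record would be loaded into a dataframe and handed to something like Phoenix's `log_evaluations`:

```python
from typing import Optional


def find_span_id(spans: list, response_text: str) -> Optional[str]:
    """Guess the span ID by matching the response text, the workaround
    used when the instrumentation does not expose the ID directly."""
    matches = [s for s in spans if s.get("output") == response_text]
    if len(matches) == 1:
        return matches[0]["span_id"]
    # Ambiguous or missing: caller must disambiguate, e.g. by timestamp.
    return None


def build_evaluation_record(span_id: str, eval_name: str,
                            score: float, explanation: str) -> dict:
    """One evaluation result keyed by span ID, the single-row shape
    that would become an evaluation dataframe."""
    return {
        "context.span_id": span_id,
        "eval_name": eval_name,
        "score": score,
        "explanation": explanation,
    }
```

If the framework exposed the span ID directly (the hook requested above), `find_span_id` and its dataframe round-trip would become unnecessary.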

@stdweird yes, exactly. We can definitely make an ID available in the application so that subsequent feedback and evaluations can be programmatically logged to the Phoenix server.

From a roadmap perspective we need to tackle two key things first: #2340 and persistence, so that Phoenix can scale going forward. We will be unblocked to work on this after that.

Thanks for your insight, appreciate it!

mikeldking avatar Mar 09 '24 21:03 mikeldking

Closing for now as the MVP is complete

mikeldking avatar Aug 29 '24 19:08 mikeldking