
Add logprobs to HFClientVLLM

tom-doerr opened this issue 1 year ago · 7 comments

This is part of https://github.com/stanfordnlp/dspy/issues/879. I tested it and it behaves the same as the other PR.

import dspy

# connect to a running vLLM server hosting the model on the given port
lm = dspy.HFClientVLLM(model="NurtureAI/Meta-Llama-3-8B-Instruct-32k", port=38242, url="http://localhost", max_tokens=4)
test_text = "This is a test article."
output_normal = lm(test_text)
print("output_normal:", output_normal)

output_with_logprobs = lm(test_text, logprobs=2)  # request the top-2 logprobs per generated token
print("output_with_logprobs:", output_with_logprobs)
output_normal: [' It is a test']
output_with_logprobs: [{'text': ' It is a test', 'logprobs': {'text_offset': [0, 3, 6, 8], 'token_logprobs': [-1.7945036888122559, -0.6691504716873169, -1.303508996963501, -0.7093929052352905], 'tokens': [' It', ' is', ' a', ' test'], 'top_logprobs': [{' It': -1.7945036888122559, ' This': -1.7945036888122559}, {' is': -0.6691504716873169, ' will': -2.0441503524780273}, {' a': -1.303508996963501, ' not': -1.803508996963501}, {' test': -0.7093929052352905, ' sample': -3.20939302444458}]}}]
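Purely illustrative, not part of the PR: the structure above can be collapsed into a single sequence-level score, e.g.

import math

# Illustrative: collapse the per-token logprobs from the output above
# into one confidence score (mean token probability).
result = output_with_logprobs[0]
token_lps = result['logprobs']['token_logprobs']
mean_prob = math.exp(sum(token_lps) / len(token_lps))
print("mean token probability:", round(mean_prob, 3))  # ~0.33 for the output above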

tom-doerr avatar May 18 '24 16:05 tom-doerr

Hmm this seems fine to me but it's ad-hoc.... it would be specific to this one client

okhat avatar May 19 '24 23:05 okhat

It also works already for the OpenAI API: https://github.com/stanfordnlp/dspy/pull/999. This feature wouldn't need to be documented until more clients are supported. In my initial testing, the logprob feedback was much better than binary feedback: in my case, none of the dataset examples passed and BootstrapFewShot gave up, whereas with logprob feedback it was able to consistently improve over multiple iterations.
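To make "logprob feedback" concrete, this is roughly the shape of the metric I mean (a sketch only, not the exact code I used; lm is the HFClientVLLM instance from above):

def logprob_metric(example, pred, trace=None):
    # Graded feedback instead of binary pass/fail: score by the mean
    # completion-token logprob; higher (less negative) means the model
    # was more confident.
    out = lm(example.question, logprobs=1)
    token_lps = out[0]['logprobs']['token_logprobs']
    return sum(token_lps) / len(token_lps)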

tom-doerr avatar May 19 '24 23:05 tom-doerr

does it block you if we keep this open for a week or two? i need to think more about logprobs

okhat avatar May 19 '24 23:05 okhat

No, it doesn't block me at all; I'm using my custom branch anyway. Since you're thinking about logprobs, my personal wish list would be:

  • Access to prompt logprobs, not just completion logprobs (a sketch of this is below)
  • Access to unnormalized logprobs, which can be especially useful when using logprobs to rank or sort candidates

I know that not all backends can give us all of that; users would have to choose a backend that supports their use case.
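For the first item, vLLM itself can already return prompt logprobs through its offline API; a minimal sketch, assuming a local vLLM install:

from vllm import LLM, SamplingParams

# Sketch against vLLM's offline API: request logprobs for both the
# prompt tokens and the completion tokens in a single call.
llm = LLM(model="NurtureAI/Meta-Llama-3-8B-Instruct-32k")
params = SamplingParams(max_tokens=4, logprobs=2, prompt_logprobs=1)
out = llm.generate(["This is a test article."], params)[0]
print(out.prompt_logprobs)       # logprob info for each prompt token
print(out.outputs[0].logprobs)   # top-2 logprobs per generated token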

tom-doerr avatar May 20 '24 00:05 tom-doerr

@tom-doerr just wanted to add some logprobs-related issues here.

With the past PR merged for OpenAI logprobs support, I realized that it only outputs logprobs for "direct" LM calls and is not compatible with configuring the OpenAI LM within a DSPy program.

For instance, the example provided outputs logprobs as intended:

lm = dspy.OpenAI(model='gpt-3.5-turbo-instruct', max_tokens=6, api_key=config['openai']['secret_key'])
test_text = "This is a test article."
test_output = lm(test_text, logprobs=1)  # returns the top-1 logprob for each generated token

but if we had some DSPy program and did dspy.settings.configure(lm=lm, ...) to have it run with that LM, the DSPy Completions logic does not output the logprobs in the response. This signals that we potentially need a larger refactor to integrate logprobs correctly. I might open a PR soon with some baseline code that handles this per client, but it's definitely good to think in the direction of non-client-specific functionality. @okhat
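In the meantime, one possible per-program workaround (an untested sketch; it assumes constructor kwargs are forwarded with every request and that the client stores raw responses in lm.history, so exact field names may differ by client and openai version):

import dspy

# Hypothetical workaround: pass logprobs at construction so it rides
# along with every request, then read the raw response back out of the
# client's call history after running the program.
lm = dspy.OpenAI(model='gpt-3.5-turbo-instruct', max_tokens=6, logprobs=1)
dspy.settings.configure(lm=lm)

qa = dspy.Predict("question -> answer")
pred = qa(question="This is a test article.")

raw = lm.history[-1]['response']          # raw API response stored by the client
print(raw['choices'][0].get('logprobs'))  # populated only if the request carried logprobs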

arnavsinghvi11 avatar May 31 '24 04:05 arnavsinghvi11

@arnavsinghvi11 Yes, I know what you mean; I planned to add support to the DSPy components after support for the clients is merged. Since it's a bigger refactor, I would suggest adding support for it bit by bit. Is this something I could help out with? If so, it might make sense to have a call.

tom-doerr avatar May 31 '24 22:05 tom-doerr

Sounds good! Yes would love to connect @tom-doerr - truly appreciate the enthusiasm you've shown for DSPy :))

arnavsinghvi11 avatar Jun 15 '24 19:06 arnavsinghvi11