
About the 'logprobs' in the response object

Open HamLaertes opened this issue 2 years ago • 2 comments

Hi,

I find that the simulator postprocesses the response of 'text-davinci-003' using the response field 'logprobs'.

However, as I read the OpenAI documentation, the 'logprobs' field is going to be deprecated because the completion response object is being replaced by the chat completion object, and the model 'text-davinci-003' is also being deprecated.

I now have access to gpt-4 and gpt-3.5-turbo, which return chat completion response objects. Is there any way to run the neuron-explainer using these two models, i.e. without the 'logprobs' field?

Or is it necessary to call 'text-davinci-003' and other models whose completion response objects return 'logprobs'?

Thanks a lot!

HamLaertes avatar Nov 02 '23 03:11 HamLaertes

Explanation will work without logprobs, but our current scoring method won't. We will update if we find a better scoring method.
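For context on why scoring depends on logprobs: the simulator needs the model's full predicted distribution over activation tokens, not just the sampled text, so it converts the returned log-probabilities back into probabilities. The dictionary below is a simplified stand-in for one `top_logprobs` entry from the legacy Completions API, used purely for illustration; the real payload has more structure.

```python
import math

# Hypothetical top_logprobs entry for one sampled position, shaped like the
# legacy Completions API field (a simplified assumption, not the real payload).
top_logprobs = {" 0": -0.02, " 1": -4.1, " 2": -6.3}

# Convert log-probabilities back to probabilities. Scoring that simulates
# activations needs this whole distribution, which is why plain sampled
# text from a chat model is not enough.
probs = {tok: math.exp(lp) for tok, lp in top_logprobs.items()}
best = max(probs, key=probs.get)
```

This is only a sketch of the idea; the repository's simulator does this over every token position in the sequence.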


williamrs-openai avatar Nov 02 '23 17:11 williamrs-openai

Hi @williamrs-openai, could you explain how the variables at L209 to L212 correspond to the logprobs returned by gpt-4-1106? In particular, choice["logprobs"]["text_offset"] confuses me.

Here I set logprobs=True and top_logprobs=1. choice.logprobs.content includes: (screenshot omitted)
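One possible answer to the text_offset question: the Chat Completions API returns logprobs as a list of per-token entries (choice.logprobs.content) rather than the legacy parallel arrays (tokens / token_logprobs / text_offset), and there is no text_offset field at all. A hedged sketch of how one might rebuild legacy-style offsets by accumulating token lengths; the dict shape mirroring the chat entries and the starting offset of 0 are assumptions for illustration:

```python
def to_legacy_offsets(content_entries):
    """Rebuild legacy-style parallel arrays from chat-style logprob entries.

    content_entries: list of dicts with 'token' and 'logprob' keys, a
    simplified stand-in for choice.logprobs.content.
    """
    tokens, logprobs, offsets = [], [], []
    pos = 0  # character position within the generated text (assumed to start at 0)
    for entry in content_entries:
        tokens.append(entry["token"])
        logprobs.append(entry["logprob"])
        offsets.append(pos)  # legacy text_offset: index where this token starts
        pos += len(entry["token"])
    return {"tokens": tokens, "token_logprobs": logprobs, "text_offset": offsets}

entries = [{"token": "Hello", "logprob": -0.1},
           {"token": ",", "logprob": -0.5},
           {"token": " world", "logprob": -0.3}]
legacy = to_legacy_offsets(entries)
# legacy["text_offset"] == [0, 5, 6]
```

Note the legacy field measured offsets into the prompt plus completion, so depending on how the simulator uses it you may need to add the prompt length to each offset.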

By the way, what value should I set top_logprobs to?

msra-jqxu avatar Sep 13 '24 10:09 msra-jqxu