
Computing output likelihoods?

Open vishaal27 opened this issue 1 year ago • 0 comments

Hi, is it possible to get the tokenwise log-likelihood scores of different outputs from the model?

The use-case would be something like: given an interleaved image/text input and a list of candidate output texts, we should be able to compute a score for each candidate and return them as a ranked list, rather than generating outputs directly. This is close to how LLMs are evaluated on MCQ tasks. An example from the T0 paper, page 6 (https://arxiv.org/pdf/2110.08207.pdf):

> For tasks that involve choosing the correct completion from several options (e.g. multiple choice question answering), we follow Brown et al. (2020) and use rank classification to evaluate our model: we compute the log-likelihood of each of the target options under the fine-tuned model and select the option with the highest log-likelihood as the prediction. For simplicity, we do not apply length normalization to the log-likelihoods of the target options.

Is it straightforward to do this with LLaMA-Adapter-V1/V2? I assume it would go through the model's forward function at inference time (I haven't dug into this yet)?
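Conceptually, the scoring loop is model-agnostic. Below is a minimal sketch (not LLaMA-Adapter code): `logits_fn` is a hypothetical stand-in for a forward pass that maps a token prefix to next-token logits, and candidates are ranked by summed token-wise log-probabilities, without length normalization, as in the T0 quote above.

```python
import math

def log_softmax(logits):
    """Numerically stable log-softmax over a plain list of floats."""
    m = max(logits)
    z = math.log(sum(math.exp(x - m) for x in logits))
    return [x - m - z for x in logits]

def sequence_log_likelihood(logits_fn, prompt_ids, target_ids):
    """Sum of token-wise log-probs of target_ids, conditioned on prompt_ids.

    logits_fn(context) -> next-token logits; hypothetical stand-in for the
    model's forward pass (e.g. one call per target token, teacher-forced).
    """
    context = list(prompt_ids)
    total = 0.0
    for tok in target_ids:
        log_probs = log_softmax(logits_fn(context))
        total += log_probs[tok]
        context.append(tok)  # teacher-force the gold target token
    return total

def rank_candidates(logits_fn, prompt_ids, candidates):
    """Return (score, candidate) pairs, highest log-likelihood first."""
    scored = [(sequence_log_likelihood(logits_fn, prompt_ids, c), c)
              for c in candidates]
    return sorted(scored, key=lambda pair: pair[0], reverse=True)
```

With a real model you would replace the per-token loop with a single batched forward pass over `prompt + candidate`, gathering the log-probs of the shifted target tokens from the output logits; but the selection rule (argmax over summed log-likelihoods) is the same.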

vishaal27 avatar May 05 '23 15:05 vishaal27