langchain
Expanding LLM output: Adding `logprob` and `finish_reason` to `llm.generate()` output
Hi langchain team,
I'd like to help add `logprobs` and `finish_reason` to the OpenAI generation output. Would it be best to build on the existing `generate` method in the `BaseOpenAI` class and the `Generation` schema object? https://github.com/hwchase17/langchain/pull/293#pullrequestreview-1212330436 references adding a new method, which I believe is the `generate` method, and I wanted to confirm. Thanks!
I think these should already be in there? https://github.com/hwchase17/langchain/blob/b7747017d72eaeabfa65edb7eec413d3fd006ddb/langchain/llms/openai.py#L275
or did you mean something else?
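For context, the linked code attaches per-choice metadata from the raw OpenAI completions response to each generation. A minimal sketch of what pulling `finish_reason` and `logprobs` out of that response shape looks like (the `sample_response` dict is a hand-written stand-in, not live API output, and `generation_info` is an illustrative helper, not a langchain function):

```python
# A hand-written stand-in for an OpenAI completions API response.
sample_response = {
    "choices": [
        {
            "text": " Hello there",
            "finish_reason": "stop",
            "logprobs": {
                "tokens": [" Hello", " there"],
                "token_logprobs": [-0.12, -0.34],
            },
        }
    ]
}

def generation_info(choice: dict) -> dict:
    """Collect per-choice metadata, mirroring the kind of dict a
    Generation's generation_info field would carry."""
    return {
        "finish_reason": choice.get("finish_reason"),
        "logprobs": choice.get("logprobs"),
    }

info = generation_info(sample_response["choices"][0])
print(info["finish_reason"])                      # stop
print(info["logprobs"]["token_logprobs"])         # [-0.12, -0.34]
```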
Ah, my bad, I totally missed that!
I'm planning to use logprobs as part of some conditional chaining; I'll open a PR if something generalizable comes out of it (after checking what's already in the codebase more carefully next time...)
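One way to sketch that kind of conditional chaining: convert the per-token logprobs into an average token probability and route to a fallback step when confidence is low. The threshold value and the `route` helper below are illustrative assumptions, not anything in langchain:

```python
import math

def avg_confidence(token_logprobs: list[float]) -> float:
    """Mean per-token probability, converted from logprobs."""
    return sum(math.exp(lp) for lp in token_logprobs) / len(token_logprobs)

def route(token_logprobs: list[float], threshold: float = 0.5) -> str:
    """Pick the next chain step based on the model's confidence.
    'accept' keeps the original output; 'fallback' would re-prompt
    or hand off to another chain."""
    if avg_confidence(token_logprobs) >= threshold:
        return "accept"
    return "fallback"

print(route([-0.05, -0.10]))   # high confidence -> accept
print(route([-2.0, -3.0]))     # low confidence -> fallback
```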