self-refine
LLMs can generate feedback on their work, use it to improve the output, and repeat this process iteratively.
Would you please provide the instructions for evaluating model responses in the Dialogue Response Generation task?
I want to use self-refine for a reasoning task, such as open-book QA. Regarding the few-shot examples for the initial generation: do the examples have to be bad examples?...
There is never any feedback
Hi, thank you for your fantastic work! It seems that the instructions for conducting the PIE evaluation are missing. Would you be able to provide instructions on how to use the...
Was there an attempt to test this library with the LLaMA 2 model?
Hello authors, similar to the previous post (https://github.com/madaan/self-refine/issues/6), when I run the tasks I also face severe hallucination issues when using GPT-3.5-turbo-0125. The error message looks like this: > An error occurred: list...
Hello. May I ask how to run the full test cases to get the figures reported in the paper? For example, in the acronym task, I only see the unit...
In https://selfrefine.info/ I found the following pseudocode:

```
def self_refine(prompt: str) -> str:
    def is_refinement_sufficient(prompt, feedback, initial, refined) -> bool:
        # Define stopping criteria here
        pass

    answer = ChatGPT(prompt)...
```
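For reference, here is a minimal runnable sketch of the generate-feedback-refine loop that the truncated pseudocode above describes. The prompt strings, the `llm()` helper, the stopping phrase, and the `max_iters` cutoff are illustrative assumptions, not the repository's actual prompts or API; the sketch assumes the `openai` Python package (v1+) and an `OPENAI_API_KEY` in the environment.

```python
# Sketch of the self-refine loop: initial answer -> self-feedback -> refinement,
# repeated until a stopping criterion is met. Prompts and stopping rule are
# illustrative assumptions, not the paper's exact setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def llm(prompt: str) -> str:
    """Single chat-completion call; swap in any model or client you prefer."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


def is_refinement_sufficient(feedback: str, iteration: int, max_iters: int = 4) -> bool:
    # Toy stopping criterion: stop when the critic says nothing needs to change,
    # or when the iteration budget runs out.
    return "no further improvements" in feedback.lower() or iteration >= max_iters


def self_refine(task_prompt: str) -> str:
    answer = llm(task_prompt)  # initial generation

    iteration = 0
    while True:
        iteration += 1
        # 1) The model critiques its own answer.
        feedback = llm(
            f"Give concise, actionable feedback on this answer.\n"
            f"Question: {task_prompt}\nAnswer: {answer}\n"
            f"If nothing needs to change, say 'no further improvements'."
        )
        # 2) The model revises the answer using that feedback.
        refined = llm(
            f"Improve the answer using the feedback.\n"
            f"Question: {task_prompt}\nAnswer: {answer}\nFeedback: {feedback}"
        )
        if is_refinement_sufficient(feedback, iteration):
            return refined
        answer = refined


if __name__ == "__main__":
    print(self_refine("In one sentence, why does ice float on water?"))
```

The stopping check here is deliberately simple (a keyword match plus an iteration cap); in practice you would define task-specific criteria, which is exactly what the `is_refinement_sufficient` placeholder in the pseudocode leaves open.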