LLMs-Finetuning-Safety
Temperature not zero during inference
Thanks for your great work! The paper says that temperature and top_p were set to 0 during inference, but the code here sets temperature to 1. Perhaps top_p = 0 already amounts to greedy decoding? https://github.com/LLM-Tuning-Safety/LLMs-Finetuning-Safety/blob/8a3b38f11be1c3829e2b0ed379d3661ebc84e7db/llama2/safety_evaluation/question_inference.py#L47
Hi, thanks for pointing this out. I believe you are right: setting top_p = 0 already makes the decoding greedy, regardless of the temperature value.
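For intuition, here is a minimal sketch (not the repo's code) of LLaMA-style nucleus (top-p) sampling, assuming the common implementation that always keeps at least the single most probable token. It shows why top_p = 0 collapses the sampling step to argmax, so the result matches greedy decoding even with temperature = 1.

```python
# Sketch: with top_p = 0 the nucleus filter keeps only the most probable token,
# so sampling from the filtered distribution is just argmax (greedy decoding).
import torch

def sample_with_top_p(logits: torch.Tensor, temperature: float, top_p: float) -> int:
    """Nucleus (top-p) sampling over a 1-D logits vector."""
    probs = torch.softmax(logits / max(temperature, 1e-8), dim=-1)
    sorted_probs, sorted_idx = torch.sort(probs, descending=True)
    cumulative = torch.cumsum(sorted_probs, dim=-1)
    # Drop tokens whose preceding cumulative mass already exceeds top_p;
    # with top_p = 0 only the single most probable token survives.
    cutoff = (cumulative - sorted_probs) > top_p
    sorted_probs[cutoff] = 0.0
    sorted_probs /= sorted_probs.sum()
    choice = torch.multinomial(sorted_probs, num_samples=1)
    return int(sorted_idx[choice])

logits = torch.tensor([2.0, 1.0, 0.5, -1.0])
greedy = int(torch.argmax(logits))
# Every draw matches the greedy token even though temperature = 1.
assert all(sample_with_top_p(logits, temperature=1.0, top_p=0.0) == greedy
           for _ in range(100))
```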
Thanks!