
How can I use a local LLM (for example, Llama-3-70B-Instruct) to evaluate prediction quality?

Open · txchen-USTC opened this issue 1 year ago · 1 comment

Feature request

How can I use a local LLM (for example, Llama-3-70B-Instruct) to evaluate prediction quality?


txchen-USTC · Oct 26 '24

Hi, you can use the evaluation code in eval_quality.py to evaluate generation quality. Just replace the GPT-4 API call, get_response_gpt4, with a call to your local LLM.
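As a sketch of what that swap might look like: the function below plays the same role as get_response_gpt4 (take a judge prompt, return the model's reply text), but sends the request to a local OpenAI-compatible server such as vLLM serving Llama-3-70B-Instruct. The endpoint URL, model id, and function signature here are assumptions, not the repo's actual interface; adjust them to match how eval_quality.py calls get_response_gpt4.

```python
# Sketch of a local replacement for the get_response_gpt4 call in eval_quality.py.
# Assumes a local OpenAI-compatible chat-completions server (e.g. vLLM) is running;
# the URL, model name, and parameter names below are assumptions for illustration.
import json
import urllib.request

LOCAL_API_URL = "http://localhost:8000/v1/chat/completions"  # assumed vLLM endpoint
MODEL_NAME = "meta-llama/Meta-Llama-3-70B-Instruct"          # assumed model id

def build_payload(prompt, temperature=1.0, max_tokens=1024):
    """Build the chat-completions request body for the judge prompt."""
    return {
        "model": MODEL_NAME,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

def get_response_local(prompt, temperature=1.0, max_tokens=1024):
    """Drop-in stand-in for get_response_gpt4: send the prompt, return reply text."""
    req = urllib.request.Request(
        LOCAL_API_URL,
        data=json.dumps(build_payload(prompt, temperature, max_tokens)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # OpenAI-style response: first choice's message content is the judge's verdict.
    return data["choices"][0]["message"]["content"]
```

You could instead call the model directly with transformers, but an OpenAI-compatible server keeps the change to eval_quality.py minimal, since the request/response shape matches what the GPT-4 call already expects.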

bys0318 · Oct 27 '24