
[BUG] System Prompt option now missing?

Open teknium1 opened this issue 6 months ago • 4 comments

Describe the bug

I was using an older version of LightEval packaged with the OpenR1 repo, and it had a --system-prompt "xxx" arg for lighteval.

That seems to be gone. I'm told it is now a model arg, though I don't see that in the docs for vllm. Also, if the system prompt contains any comma, it breaks everything.
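The comma problem comes from the common key=value,key=value convention for packing model args into a single string. As a sketch (this is not lighteval's actual parser, just an illustration of why the convention is fragile), a naive comma split cannot tell separators apart from commas inside a value:

```python
# Illustration only: many CLIs accept model args as one "key=value,key=value"
# string. A naive comma split treats EVERY comma as a pair separator, so a
# system prompt containing a comma produces a bogus extra "pair" with no key.
def parse_model_args(s: str) -> dict:
    pairs = s.split(",")  # every comma is treated as a separator
    return dict(pair.split("=", 1) for pair in pairs)

# Works fine when no value contains a comma:
ok = parse_model_args("model_name=Qwen/Qwen2.5-3B,dtype=bfloat16")

# A comma inside the prompt value splits it mid-sentence and the leftover
# fragment has no "=", so building the dict fails:
broken = "model_name=Qwen/Qwen2.5-3B,system_prompt=You are a helpful, detailed assistant"
try:
    parse_model_args(broken)
except ValueError as e:
    print("parse failed on the comma inside the prompt:", e)
```

A YAML config file sidesteps this entirely, since the prompt is a single quoted scalar rather than part of a comma-delimited string.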

To Reproduce

Run:

        lighteval vllm "$MODEL_ARGS" "$benchmark_task" \
            --system-prompt "$system_prompt" \
            --use-chat-template \
            --output-dir "$output_dir" 2>&1 | tee -a "$log_file"

teknium1 · Jun 27 '25 22:06

mmm it seems this also no longer works?

        Usage: lighteval vllm [OPTIONS] MODEL_ARGS TASKS
        Try 'lighteval vllm --help' for help.
        ╭─ Error ─────────────────────────────────────╮
        │ No such option: --use-chat-template         │
        ╰─────────────────────────────────────────────╯

teknium1 · Jun 27 '25 23:06

Yes, the system prompt has moved to the model args to facilitate logging and make the codebase clearer. I was using YAML config files for the model and did not think of that edge case. Thanks for the issue, I will think of a fix ASAP! In the meantime, using a YAML config file will work. Also, for the docs you need to check the main branch, as we haven't done a release yet:

https://huggingface.co/docs/lighteval/main/en/package_reference/models
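For reference, a minimal config along those lines might look like the following (field names are assumed from this thread and the main-branch docs, not verified against any specific release):

```yaml
# Hypothetical minimal lighteval model config (field names assumed, unverified).
model_parameters:
  model_name: "Qwen/Qwen2.5-3B"
  use_chat_template: True
  # Commas inside the prompt are safe here, since YAML treats the whole
  # quoted string as a single scalar value:
  system_prompt: "You are a helpful, detailed assistant."
```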

NathanHB · Jun 30 '25 08:06

I can't seem to get it to work even when using a YAML config file. Here's my yaml file:

    model_parameters:
      model_name: "Qwen/Qwen2.5-3B"
      dtype: "bfloat16"
      data_parallel_size: 4
      gpu_memory_utilization: 0.8
      max_model_length: 32768
      use_chat_template: True
      system_prompt: 'You are a helpful AI Assistant, designed to provided well-reasoned and detailed responses. You FIRST think about the reasoning process step by step and then provide the user with the answer. Please enclose your final answer in the box: \boxed{Your Answer}. Please stop generation immediately after outputing the box.'
      add_special_tokens: True
      generation_parameters:
        temperature: 0.9
        top_k: 50
        min_p: 0.0
        top_p: 1.0
        max_new_tokens: 3076

Error:

    ValidationError: 1 validation error for VLLMModelConfig
    system_prompt
      Extra inputs are not permitted [type=extra_forbidden, input_value='You are a helpful AI Ass...fter outputing the box.', input_type=str]

ghimiremukesh · Aug 05 '25 06:08

Are you using the latest main commit?

NathanHB · Aug 12 '25 13:08