
lm_eval on squadv2 and meta-llama/Meta-Llama-3.1-8B fails with TypeError: Instance.__init__() got an unexpected keyword argument 'apply_chat_template'

Open danielkorzekwa opened this issue 11 months ago • 6 comments

lm_eval --model hf --model_args pretrained=meta-llama/Meta-Llama-3.1-8B --tasks squadv2 --batch_size 8

causes

File "/workspace/lm-evaluation-harness/lm_eval/tasks/squadv2/task.py", line 121, in construct_requests
   Instance(
TypeError: Instance.__init__() got an unexpected keyword argument 'apply_chat_template'

danielkorzekwa · Dec 04 '24

It is caused by the latest MR.

Instance does not expect apply_chat_template, but it is provided here.
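
Illustratively, the failing pattern looks like the sketch below: the task forwards its kwargs verbatim into the Instance dataclass. The request_type and arguments here are placeholders, not the exact squadv2 code.

    # Inside construct_requests: kwargs now contains apply_chat_template,
    # which Instance.__init__ does not define, hence the TypeError.
    Instance(
        request_type="generate_until",
        doc=doc,
        arguments=(ctx, {"until": ["\n"]}),
        idx=0,
        **kwargs,
    )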

vigneshwaran · Dec 17 '24

Apparently this is still an open issue; I cannot run squadv2 either.

MaxiBoether · Jan 17 '25

Any updates on this?

hieuchi911 · Jan 17 '25

What I did to solve this was to pop apply_chat_template out of kwargs before passing it to Instance: add apply_chat_template = kwargs.pop("apply_chat_template", False) inside the construct_requests method in /workspace/lm-evaluation-harness/lm_eval/tasks/squadv2/task.py.
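
A minimal sketch of the patched method, assuming the usual construct_requests signature; the Instance fields are placeholders, and only the kwargs.pop line is the actual fix:

    def construct_requests(self, doc, ctx, **kwargs):
        # Instance.__init__ does not accept this kwarg, so strip it from
        # kwargs before they are forwarded below.
        apply_chat_template = kwargs.pop("apply_chat_template", False)
        return [
            Instance(
                request_type="generate_until",
                doc=doc,
                arguments=(ctx, {"until": ["\n"]}),
                idx=0,
                **kwargs,  # now free of apply_chat_template
            ),
        ]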

hieuchi911 · Jan 17 '25

Can you share your final construct_requests method? I did what you mentioned above but still get the same error. TIA!

Sanchit-404 · Feb 14 '25

I used this to eval Qwen2.5 and encountered this error. My patch goes in lm_eval/evaluator.py, inside simple_evaluate, just before line 196, between the ############ markers:

https://github.com/EleutherAI/lm-evaluation-harness/blob/29971faaede3e0459eef6d1b4ec0f58460bed510/lm_eval/evaluator.py#L196

    if isinstance(model, str):
        if model_args is None:
            eval_logger.warning("model_args not specified. Using defaults.")
            model_args = ""

        if isinstance(model_args, dict):
            eval_logger.info(
                f"Initializing {model} model, with arguments: {model_args}"
            )
            
            ##############################################################
            # Patch: strip the keys that the model constructor does not
            # accept before model_args reaches create_from_arg_obj below.
            print('patching model_args at /usr/local/lib/python3.10/site-packages/lm_eval/evaluator.py')
            print('before:', model_args)
            model_args = {
                k: v
                for k, v in model_args.items()
                if k not in ('apply_chat_template', 'fewshot_as_multiturn')
            }
            print('after:', model_args)
            # before: {'pretrained': './saves/merged/mergedjgo30mjk', 'dtype': 'bfloat16', 'apply_chat_template': True, 'fewshot_as_multiturn': True, 'use_cache': True}
            # after:  {'pretrained': './saves/merged/mergedjgo30mjk', 'dtype': 'bfloat16', 'use_cache': True}
            ##############################################################

            lm = lm_eval.api.registry.get_model(model).create_from_arg_obj(
                model_args,
                {
                    "batch_size": batch_size,
                    "max_batch_size": max_batch_size,
                    "device": device,
                },
            )

        else:
            eval_logger.info(
                f"Initializing {model} model, with arguments: {simple_parse_args_string(model_args)}"
            )
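
For reference, a less invasive alternative is to pass these options at the evaluator level instead of inside model_args, so no site-packages patch is needed. This is only a sketch, assuming a recent lm-eval where simple_evaluate accepts apply_chat_template and fewshot_as_multiturn directly; the pretrained path and kwargs are taken from the log above, and the task list (squadv2, from this issue) would still need the construct_requests fix discussed earlier.

    import lm_eval

    results = lm_eval.simple_evaluate(
        model="hf",
        model_args={
            # model-constructor arguments only
            "pretrained": "./saves/merged/mergedjgo30mjk",
            "dtype": "bfloat16",
            "use_cache": True,
        },
        tasks=["squadv2"],
        batch_size=8,
        # evaluator-level options, not model kwargs:
        apply_chat_template=True,
        fewshot_as_multiturn=True,
    )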

jijivski · Feb 26 '25