
Try to run qwen2vl in verl, raise KeyError: 'model.embed_tokens.weight' in qwen2vl_dtensor_weight_loader

Mr-Potential opened this issue · 3 comments

I tried to use verl to run PPO on Qwen2-VL-7B, and an error occurred when setting self.config.rollout.name == 'vllm':

vllm version: v0.7.2.dev56+gbf3b79ef
model: Qwen2-VL-7B-Instruct


File "/mapping-data/ragen/verl/trainer/main_ppo.py", line 257, in main_task
    trainer.fit()
File "/mapping-data/ragen/verl/trainer/ppo/ray_trainer.py", line 691, in fit
    final_gen_batch_output = generation_manager.run_llm_loop(
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mapping-data/ragen/ragen/llm_agent/generation.py", line 438, in run_llm_loop
    gen_output = self._generate_with_gpu_padding(rollings_active)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mapping-data/ragen/ragen/llm_agent/generation.py", line 152, in _generate_with_gpu_padding
    return self.actor_rollout_wg.generate_sequences(active_batch)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mapping-data/ragen/verl/workers/fsdp_workers.py", line 493, in generate_sequences
    with self.rollout_sharding_manager:
File "/mapping-data/ragen/verl/workers/sharding_manager/fsdp_vllm.py", line 84, in __enter__
    load_dtensor_weights(
File "/mapping-data/ragen/verl/third_party/vllm/vllm_spmd/dtensor_weight_loaders.py", line 365, in load_dtensor_weights
    weight_loader(actor_weights, vllm_model)
File "/mapping-data/ragen/verl/third_party/vllm/vllm_spmd/dtensor_weight_loaders.py", line 219, in qwen2vl_dtensor_weight_loader
    param = params_dict[name]
            ~~~~~~~~~~~^^^^^^

KeyError: 'model.embed_tokens.weight'

Any suggestions on how to solve this problem? Many thanks!

Mr-Potential · Feb 20 '25

    for name, loaded_weight in actor_weights.items():
        if "rotary_emb.inv_freq" in name:
            continue
        if vllm_model.config.tie_word_embeddings and "lm_head.weight" in name:
            continue
        if name.startswith('model.'):
            name = 'language_model.' + name
......

Try adding the last two lines to the outer loop.

g-h-chen · Feb 21 '25

Thank you for the reminder! It does help! In addition to the name discrepancy mentioned above, 'lm_head.weight' also needs to be remapped. The complete revision is as follows, for anyone who may need it:

    for name, loaded_weight in actor_weights.items():
        if "rotary_emb.inv_freq" in name:
            continue
        if vllm_model.config.tie_word_embeddings and "lm_head.weight" in name:
            continue
        if name.startswith('model.') or 'lm_head.weight' in name:
            name = 'language_model.' + name
......
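For clarity, the renaming logic above can be pulled out into a standalone helper. This is a minimal sketch, not verl's actual code: the function name `remap_qwen2vl_weight_name` is hypothetical, and the skip/rename rules simply mirror the snippet in this thread (vLLM nests the Qwen2-VL text backbone under a `language_model.` prefix, while the HF/FSDP checkpoint does not).

```python
from typing import Optional


def remap_qwen2vl_weight_name(name: str, tie_word_embeddings: bool) -> Optional[str]:
    """Map an HF-style Qwen2-VL weight name to the vLLM module layout.

    Returns None for weights that should be skipped entirely.
    (Hypothetical helper illustrating the fix in this thread.)
    """
    if "rotary_emb.inv_freq" in name:
        return None  # recomputed inside vLLM, not a stored parameter
    if tie_word_embeddings and "lm_head.weight" in name:
        return None  # tied to embed_tokens, so no separate lm_head param exists
    if name.startswith("model.") or "lm_head.weight" in name:
        # vLLM's Qwen2-VL keeps the text backbone under `language_model.`
        return "language_model." + name
    return name  # e.g. visual.* weights keep their original names
```

With this mapping, `model.embed_tokens.weight` becomes `language_model.model.embed_tokens.weight`, which is the key that was missing from `params_dict` in the original traceback.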

Mr-Potential · Feb 27 '25

@ZSL98 Hi, may I ask if this fix could be included in a future update? Thanks!

Mr-Potential · Feb 27 '25