
ValueError: Target module LlamaDecoderLayer(...) is not supported. Currently, only `torch.nn.Linear` and `Conv1D` are supported.

Open YangQiuEric opened this issue 6 months ago • 6 comments

Describe the issue

Issue:

deepspeed llava/train/train_mem.py \
    --lora_enable True --lora_r 128 --lora_alpha 256  \
    --deepspeed ./scripts/zero3_offload.json \
    --model_name_or_path liuhaotian/llava-v1.5-13b \
    --version v1 \
    --data_path /Users/eric/Documents/llava/patient_data.json \
    --image_folder /home/eric/LLaVA-Med/data_file_jpg \
    --vision_tower openai/clip-vit-large-patch14-336 \
    --mm_projector_type mlp2x_gelu \
    --mm_vision_select_layer -2 \
    --mm_use_im_start_end False \
    --mm_use_im_patch_token False \
    --image_aspect_ratio pad \
    --group_by_modality_length True \
    --bf16 True \
    --output_dir ./checkpoints/llava-v1.5-13b-task-lora \
    --num_train_epochs 1 \
    --per_device_train_batch_size 16 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 4 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 50000 \
    --save_total_limit 1 \
    --learning_rate 2e-4 \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 True \
    --model_max_length 2048 \
    --gradient_checkpointing True \
    --dataloader_num_workers 4 \
    --lazy_preprocess True \
    --report_to wandb

Log:

Loading checkpoint shards: 100%|██████████| 3/3 [00:28<00:00,  9.54s/it]
Adding LoRA adapters...
Traceback (most recent call last):
  File "/home/eric/LLaVA/llava/train/train_mem.py", line 13, in <module>
    train()
  File "/home/eric/anaconda3/envs/llava-med/lib/python3.10/site-packages/llava/train/train.py", line 837, in train
    model = get_peft_model(model, lora_config)
  File "/home/eric/anaconda3/envs/llava-med/lib/python3.10/site-packages/peft/mapping.py", line 98, in get_peft_model
    return MODEL_TYPE_TO_PEFT_MODEL_MAPPING[peft_config.task_type](model, peft_config, adapter_name=adapter_name)
  File "/home/eric/anaconda3/envs/llava-med/lib/python3.10/site-packages/peft/peft_model.py", line 893, in __init__
    super().__init__(model, peft_config, adapter_name)
  File "/home/eric/anaconda3/envs/llava-med/lib/python3.10/site-packages/peft/peft_model.py", line 112, in __init__
    self.base_model = PEFT_TYPE_TO_MODEL_MAPPING[peft_config.peft_type](
  File "/home/eric/anaconda3/envs/llava-med/lib/python3.10/site-packages/peft/tuners/lora.py", line 180, in __init__
    self.add_adapter(adapter_name, self.peft_config[adapter_name])
  File "/home/eric/anaconda3/envs/llava-med/lib/python3.10/site-packages/peft/tuners/lora.py", line 194, in add_adapter
    self._find_and_replace(adapter_name)
  File "/home/eric/anaconda3/envs/llava-med/lib/python3.10/site-packages/peft/tuners/lora.py", line 352, in _find_and_replace
    new_module = self._create_new_module(lora_config, adapter_name, target)
  File "/home/eric/anaconda3/envs/llava-med/lib/python3.10/site-packages/peft/tuners/lora.py", line 305, in _create_new_module
    raise ValueError(
ValueError: Target module LlamaDecoderLayer(
  (self_attn): LlamaAttention(
    (q_proj): Linear(in_features=5120, out_features=5120, bias=False)
    (k_proj): Linear(in_features=5120, out_features=5120, bias=False)
    (v_proj): Linear(in_features=5120, out_features=5120, bias=False)
    (o_proj): Linear(in_features=5120, out_features=5120, bias=False)
    (rotary_emb): LlamaRotaryEmbedding()
  )
  (mlp): LlamaMLP(
    (gate_proj): Linear(in_features=5120, out_features=13824, bias=False)
    (up_proj): Linear(in_features=5120, out_features=13824, bias=False)
    (down_proj): Linear(in_features=13824, out_features=5120, bias=False)
    (act_fn): SiLUActivation()
  )
  (input_layernorm): LlamaRMSNorm()
  (post_attention_layernorm): LlamaRMSNorm()
) is not supported. Currently, only `torch.nn.Linear` and `Conv1D` are supported.
[2023-12-10 02:44:54,053] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 2675989
[2023-12-10 02:44:54,053] [ERROR] [launch.py:321:sigkill_handler] ['/home/eric/anaconda3/envs/llava-med/bin/python', '-u', 'llava/train/train_mem.py', '--local_rank=0', '--lora_enable', 'True', '--lora_r', '128', '--lora_alpha', '256', '--deepspeed', './scripts/zero3_offload.json', '--model_name_or_path', 'liuhaotian/llava-v1.5-13b', '--version', 'v1', '--data_path', '/Users/eric/Documents/llava/patient_data.json', '--image_folder', '/home/eric/LLaVA-Med/data_file_jpg', '--vision_tower', 'openai/clip-vit-large-patch14-336', '--mm_projector_type', 'mlp2x_gelu', '--mm_vision_select_layer', '-2', '--mm_use_im_start_end', 'False', '--mm_use_im_patch_token', 'False', '--image_aspect_ratio', 'pad', '--group_by_modality_length', 'True', '--bf16', 'True', '--output_dir', './checkpoints/llava-v1.5-13b-task-lora', '--num_train_epochs', '1', '--per_device_train_batch_size', '16', '--per_device_eval_batch_size', '4', '--gradient_accumulation_steps', '4', '--evaluation_strategy', 'no', '--save_strategy', 'steps', '--save_steps', '50000', '--save_total_limit', '1', '--learning_rate', '2e-4', '--warmup_ratio', '0.03', '--lr_scheduler_type', 'cosine', '--logging_steps', '1', '--tf32', 'True', '--model_max_length', '2048', '--gradient_checkpointing', 'True', '--dataloader_num_workers', '4', '--lazy_preprocess', 'True', '--report_to', 'wandb'] exits with return code = 1
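
Context on the error: the older peft release used here can only wrap leaf `torch.nn.Linear` (or `Conv1D`) modules with LoRA, so every entry in `LoraConfig.target_modules` must name a leaf layer such as `q_proj`, never a container module like a whole decoder layer. A minimal sketch of a config that satisfies that constraint (illustrative hyperparameters, not LLaVA's exact call):

# Minimal sketch of a valid LoRA config for a LLaMA-family model.
# Only leaf nn.Linear layer names are targeted; matching a container
# such as a decoder layer would raise the ValueError above.
from peft import LoraConfig

lora_config = LoraConfig(
    r=128,
    lora_alpha=256,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)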


YangQiuEric avatar Dec 10 '23 10:12 YangQiuEric

Same issue! Related issue: https://github.com/oobabooga/text-generation-webui/issues/2297. Doesn't work for me!

clima-ai avatar Dec 13 '23 17:12 clima-ai

I added some `nn.Linear` modules to `LlavaLlamaForCausalLM`, which caused this error. I fixed it by adding my custom layer's name to the exclusion list in `find_all_linear_names` in train.py, i.e. `multimodal_keywords = ['mm_projector', 'vision_tower', 'vision_resampler', 'my_linear_layer']`. My problem is solved.

def find_all_linear_names(model):
    """Collect the names of all torch.nn.Linear modules eligible for LoRA."""
    cls = torch.nn.Linear
    lora_module_names = set()
    # Skip multimodal components (and any custom layers) that should not receive LoRA.
    multimodal_keywords = ['mm_projector', 'vision_tower', 'vision_resampler', 'my_linear_layer']
    for name, module in model.named_modules():
        if any(mm_keyword in name for mm_keyword in multimodal_keywords):
            continue
        if isinstance(module, cls):
            names = name.split('.')
            lora_module_names.add(names[0] if len(names) == 1 else names[-1])

    if 'lm_head' in lora_module_names:  # needed for 16-bit
        lora_module_names.remove('lm_head')
    return list(lora_module_names)
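
My reading of why this works (an educated guess from peft's old `_find_and_replace`, not confirmed upstream): peft matches `target_modules` entries against module names by suffix, and a `Linear` that lives inside an `nn.Sequential` (like the `mlp2x_gelu` projector, or a custom layer) gets a bare index such as `0` as its last name component. A target of `"0"` then also matches `model.layers.0`, which is an entire `LlamaDecoderLayer`, triggering the ValueError. Excluding such modules via `multimodal_keywords` leaves only clean leaf names. A sketch of how the result is wired into peft, mirroring the `get_peft_model` call in the traceback (the `training_args` attribute names are assumptions):

# Hypothetical wiring of find_all_linear_names into peft; the
# training_args field names here are assumed for illustration.
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=training_args.lora_r,
    lora_alpha=training_args.lora_alpha,
    target_modules=find_all_linear_names(model),  # leaf Linear names only
    lora_dropout=training_args.lora_dropout,
    bias=training_args.lora_bias,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)  # succeeds once no container module matches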

uyo9ko avatar Dec 14 '23 05:12 uyo9ko

Not working for me. Was your error exactly this one? (I realized that checkpoint shards didn't load)

Loading checkpoint shards: 100%|██████████| 2/2 [00:23<00:00, 11.67s/it]
Traceback (most recent call last):
  File "/home/carlos.limasantos/LLaVA/llava/train/train_mem.py", line 14, in <module>
    train()
  File "/home/carlos.limasantos/LLaVA/llava/train/train.py", line 825, in train
    model = get_peft_model(model, lora_config)
  File "/home/carlos.limasantos/anaconda3/envs/llava/lib/python3.10/site-packages/peft/mapping.py", line 98, in get_peft_model
    return MODEL_TYPE_TO_PEFT_MODEL_MAPPING[peft_config.task_type](model, peft_config, adapter_name=adapter_name)
  File "/home/carlos.limasantos/anaconda3/envs/llava/lib/python3.10/site-packages/peft/peft_model.py", line 893, in __init__
    super().__init__(model, peft_config, adapter_name)
  File "/home/carlos.limasantos/anaconda3/envs/llava/lib/python3.10/site-packages/peft/peft_model.py", line 112, in __init__
    self.base_model = PEFT_TYPE_TO_MODEL_MAPPING[peft_config.peft_type](
  File "/home/carlos.limasantos/anaconda3/envs/llava/lib/python3.10/site-packages/peft/tuners/lora.py", line 180, in __init__
    self.add_adapter(adapter_name, self.peft_config[adapter_name])
  File "/home/carlos.limasantos/anaconda3/envs/llava/lib/python3.10/site-packages/peft/tuners/lora.py", line 194, in add_adapter
    self._find_and_replace(adapter_name)
  File "/home/carlos.limasantos/anaconda3/envs/llava/lib/python3.10/site-packages/peft/tuners/lora.py", line 352, in _find_and_replace
    new_module = self._create_new_module(lora_config, adapter_name, target)
  File "/home/carlos.limasantos/anaconda3/envs/llava/lib/python3.10/site-packages/peft/tuners/lora.py", line 305, in _create_new_module
    raise ValueError(
ValueError: Target module LlamaDecoderLayer(
  (self_attn): LlamaAttention(
    (q_proj): Linear(in_features=4096, out_features=4096, bias=False)
    (k_proj): Linear(in_features=4096, out_features=4096, bias=False)
    (v_proj): Linear(in_features=4096, out_features=4096, bias=False)
    (o_proj): Linear(in_features=4096, out_features=4096, bias=False)
    (rotary_emb): LlamaRotaryEmbedding()
  )
  (mlp): LlamaMLP(
    (gate_proj): Linear(in_features=4096, out_features=11008, bias=False)
    (up_proj): Linear(in_features=4096, out_features=11008, bias=False)
    (down_proj): Linear(in_features=11008, out_features=4096, bias=False)
    (act_fn): SiLUActivation()
  )
  (input_layernorm): LlamaRMSNorm()
  (post_attention_layernorm): LlamaRMSNorm()
) is not supported. Currently, only `torch.nn.Linear` and `Conv1D` are supported.

clima-ai avatar Dec 14 '23 18:12 clima-ai

> Not working for me. Was your error exactly this one? (I realized that checkpoint shards didn't load)
> […]

I have the same issue. Did you resolve it? Thanks in advance for your answer.

dashascience avatar Feb 01 '24 20:02 dashascience

same issue

phellonchen avatar Feb 18 '24 03:02 phellonchen