
[Usage] ImportError: cannot import name 'ShardedDDPOption' from 'transformers.trainer'

caopulan opened this issue 1 year ago · 6 comments

Describe the issue

transformers no longer has ShardedDDPOption after v4.35.0

caopulan avatar Feb 01 '24 03:02 caopulan

Same issue, looking for a fix. Following the pip install instructions in the README, I have transformers==4.36.2 installed, which is the version pinned by pyproject.toml.

How does everyone else manage to run this?

attnmamba avatar Feb 01 '24 03:02 attnmamba

Hi @caopulan @attnmamba

Sorry for the confusion. It should now be fixed in the main branch. Please let me know if it works for you, thanks.

haotian-liu avatar Feb 01 '24 04:02 haotian-liu

@haotian-liu Thank you! The new commit solved the above issue, but warning messages are constantly being printed out during finetuning:

First warning (printed once):

/home/attnmamba/anaconda3/envs/llava/lib/python3.10/site-packages/torch/utils/checkpoint.py:31: UserWarning: None of the inputs have requires_grad=True. Gradients will be None

Repeated warnings (continuously printed):

WARNING: tokenization mismatch: 1 vs. 57. (ignored)

My installed packages are

flash-attn                2.5.2
peft                      0.4.0
tokenizers                0.15.0
torch                     2.0.1
torchvision               0.15.2
transformers              4.36.2

This appears to be an issue you have addressed before: https://github.com/haotian-liu/LLaVA/issues/661#issuecomment-1779692868

I tried pip install "tokenizers>=0.12.1,<0.14", which installed 0.13.3, but this gives another error when restarting the finetuning:

Traceback (most recent call last):
  File "/home/attnmamba/llava/train/train_mem.py", line 7, in <module>
    from llava.train.llama_flash_attn_monkey_patch import replace_llama_attn_with_flash_attn
  File "/home/attnmamba/llava/__init__.py", line 1, in <module>
    from .model import LlavaLlamaForCausalLM
ImportError: cannot import name 'LlavaLlamaForCausalLM' from 'llava.model' (/home/attnmamba/llava/model/__init__.py)

attnmamba avatar Feb 01 '24 06:02 attnmamba

@attnmamba

Would you mind sharing your command? I cannot reproduce the issue on my side. Thanks.

haotian-liu avatar Feb 01 '24 06:02 haotian-liu

@haotian-liu Of course, here is my bash script

#!/bin/bash

deepspeed llava/train/train_mem.py \
    --lora_enable True --lora_r 128 --lora_alpha 256 --mm_projector_lr 2e-5 \
    --bits 4 \
    --deepspeed ./scripts/zero2.json \
    --model_name_or_path liuhaotian/llava-v1.5-13b \
    --version v1 \
    --data_path ./playground/data/my_instruct_82k.json \
    --image_folder ./playground/data/my_images \
    --vision_tower openai/clip-vit-large-patch14-336 \
    --mm_projector_type mlp2x_gelu \
    --mm_vision_select_layer -2 \
    --mm_use_im_start_end False \
    --mm_use_im_patch_token False \
    --image_aspect_ratio pad \
    --group_by_modality_length True \
    --bf16 True \
    --output_dir ./checkpoints/llava-v1.5-13b-task-lora \
    --num_train_epochs 1 \
    --per_device_train_batch_size 4 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 1 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 50000 \
    --save_total_limit 1 \
    --learning_rate 2e-4 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 True \
    --model_max_length 2048 \
    --gradient_checkpointing True \
    --dataloader_num_workers 4 \
    --lazy_preprocess True \
    --report_to wandb

Please let me know if there's more info I can provide.

attnmamba avatar Feb 01 '24 06:02 attnmamba

@haotian-liu Sorry false alarm, had some bad training examples in my dataset. No more tokenization mismatch warnings once those were removed.
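For anyone else hitting the tokenization mismatch warning, a rough sanity check over a LLaVA-style instruct JSON might look like the sketch below. The field names ("conversations", "from", and the human/gpt roles) are assumptions based on the common LLaVA data format; adjust them to your dataset.

```python
# Rough sanity check for a LLaVA-style instruct dataset: flag examples whose
# conversation turns are empty, don't start with a human turn, or don't
# alternate human/gpt. Field names are assumed from the common LLaVA JSON
# layout, e.g. {"conversations": [{"from": "human", "value": "..."}, ...]}.
def find_bad_examples(examples):
    bad = []
    for i, ex in enumerate(examples):
        roles = [turn.get("from") for turn in ex.get("conversations", [])]
        ok = (
            len(roles) >= 2
            and roles[0] == "human"
            and all(r == ("human" if j % 2 == 0 else "gpt")
                    for j, r in enumerate(roles))
        )
        if not ok:
            bad.append(i)
    return bad

data = [
    {"conversations": [{"from": "human", "value": "hi"},
                       {"from": "gpt", "value": "hello"}]},
    {"conversations": [{"from": "gpt", "value": "orphan reply"}]},
]
print(find_bad_examples(data))  # → [1]
```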

Thank you very much for your help!

attnmamba avatar Feb 01 '24 07:02 attnmamba

Nice, closing this issue :)

haotian-liu avatar Feb 02 '24 08:02 haotian-liu

Describe the issue

transformers no longer has ShardedDDPOption after v4.35.0

How do I fix it? Please tell me 0.0

ChenRan2000 avatar Apr 09 '24 09:04 ChenRan2000

Just remove the ShardedDDPOption import, which is not used.
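For example, a version-tolerant guard could look like the sketch below (simply deleting the import line also works, since nothing in the trainer references the symbol):

```python
# Sketch of a version-tolerant guard: ShardedDDPOption was removed from
# transformers.trainer in v4.35.0, and LLaVA's trainer does not use it,
# so guarding (or deleting) the import is safe.
try:
    from transformers.trainer import ShardedDDPOption  # transformers < 4.35
except ImportError:  # transformers >= 4.35 (symbol removed), or lib absent
    ShardedDDPOption = None
```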

Liu0329 avatar Apr 12 '24 03:04 Liu0329