
There are lots of bugs in TrainStage1

0xLDF opened this issue on Sep 27 '24 · 4 comments

Thank you for your excellent work, but the open-source code has quite a few minor issues, which makes others hesitant to build on it. During the TrainStage1 phase, the issues are as follows:

  1. In the command torchrun --nproc_per_node=8 --master_port=20001 fastchat/train/TrainStage1.py, the fastchat directory does not seem to exist; the script path should be train/TrainStage1.py.
  2. load_LLaVA_ckpt_v1_1 should be load_LLaVA_ckpt_v1_1_7b.
  3. The SD_QFormer_conversation_33tokens checkpoint does not contain the mm_projector module, which is not used in training stage 1 (a possible workaround is sketched after this list).
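For issue 3, one possible workaround might be to load the checkpoint non-strictly so the missing mm_projector keys are simply skipped. This is only a rough sketch; the checkpoint path and the build_stage1_model constructor are placeholders, not the repo's actual names:

```python
import torch

# Hypothetical constructor standing in for whatever module TrainStage1 builds;
# this is not the repo's API, just a placeholder for the sketch.
model = build_stage1_model()

ckpt = torch.load("SD_QFormer_conversation_33tokens.bin", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt)  # some checkpoints wrap weights in "state_dict"

# strict=False skips parameters that are absent from the checkpoint,
# e.g. mm_projector.*, which is not trained or saved in stage 1.
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print("missing keys:", missing)
print("unexpected keys:", unexpected)
```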

Could you provide the TrainStage1 result checkpoint?

0xLDF · Sep 27 '24

Thanks for your interest in our work. There may be some small typos introduced when we pushed the code to GitHub; you can simply fix them for further usage.

yuzhou914 · Sep 28 '24

Hello, I am confused about the inconsistencies between the first training stage and the MLLMSD training stage:

  • In the first training stage, the LLaMA checkpoint is loaded and 33 new tokens are added (<img>, <img_0>, ..., <img_31>), with only the llm_head weights and embed_token weights corresponding to the new tokens being trained.
  • In the MLLMSD training stage, the LLaVA checkpoint is loaded and 35 new tokens are added (<img>, <img_start>, <img_end>, <img_0>, ..., <img_31>); see the sketch after this list.
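For concreteness, the vocabulary growth in each stage looks roughly like this (a sketch only; the base model path and tokenizer class are assumptions, and the token strings follow the lists above):

```python
from transformers import AutoTokenizer

stage1_tokens = ["<img>"] + [f"<img_{i}>" for i in range(32)]                # 33 new tokens
mllmsd_tokens = ["<img>", "<img_start>", "<img_end>"] \
                + [f"<img_{i}>" for i in range(32)]                          # 35 new tokens

tokenizer = AutoTokenizer.from_pretrained("path/to/base-llama-or-llava")    # placeholder path
tokenizer.add_tokens(mllmsd_tokens, special_tokens=True)

# After adding tokens, the embedding matrix must be resized to match, e.g.:
#   model.resize_token_embeddings(len(tokenizer))
# which is why the stage-1 (base + 33) and MLLMSD (base + 35) embeddings differ in size.
```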

This discrepancy in the number of new tokens causes the MLLMSD model's load_pretrain_MLLM_alignment function to fail.
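A possible workaround might be to copy only the overlapping rows of the embedding (and llm_head) weights from the stage-1 checkpoint and leave the two extra tokens at their fresh initialization. This is only a rough sketch: the checkpoint path, key names, and the mllmsd_model variable are assumptions, and it assumes the shared base vocabulary occupies the same ids in both stages:

```python
import torch

stage1_sd = torch.load("stage1_checkpoint.bin", map_location="cpu")    # placeholder path
old_embed = stage1_sd["model.embed_tokens.weight"]                     # [base_vocab + 33, hidden]
new_embed = mllmsd_model.get_input_embeddings().weight.data            # [base_vocab + 35, hidden]

base_vocab = old_embed.shape[0] - 33

with torch.no_grad():
    # shared base vocabulary
    new_embed[:base_vocab] = old_embed[:base_vocab]
    # <img> is the first added token in both vocabularies
    new_embed[base_vocab] = old_embed[base_vocab]
    # <img_0> ... <img_31> shift by 2 ids because <img_start>/<img_end> sit in between
    new_embed[base_vocab + 3 : base_vocab + 35] = old_embed[base_vocab + 1 : base_vocab + 33]
    # rows for <img_start>/<img_end> (base_vocab+1, base_vocab+2) keep their random init

# the same row mapping would apply to the output head (llm_head) weights
```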

Loading the LLaMA checkpoint in the first training stage but the LLaVA checkpoint in the MLLMSD training stage is also puzzling. Why not directly align LLaVA with CLIP?

0xLDF · Sep 30 '24

@Bilibilee Hi, I'm encountering the same issue regarding the token inconsistency between training stages. Could you share how you resolved this, specifically:

  1. How did you handle the token mismatch (33 vs. 35 tokens) when loading the checkpoint?
  2. What modifications were needed in the load_pretrain_MLLM_alignment function? (Would something like the sketch after this list be the right direction?)
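This is only a rough sketch assuming load_pretrain_MLLM_alignment ultimately calls load_state_dict; the helper name is mine, not the repo's:

```python
import torch

def load_compatible(model, state_dict):
    """Load only the parameters whose name and shape match the current model."""
    model_sd = model.state_dict()
    filtered = {k: v for k, v in state_dict.items()
                if k in model_sd and v.shape == model_sd[k].shape}
    skipped = sorted(set(state_dict) - set(filtered))
    missing, unexpected = model.load_state_dict(filtered, strict=False)
    return skipped, missing, unexpected

# The 33-vs-35 embedding / llm_head rows would be skipped by this filter and
# would still need the manual copy of the overlapping token rows discussed above.
```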

Any insights would be greatly appreciated.

XuwuChen443 · Jan 16 '25

Have you solved the problem? I also encountered the same tensor mismatch (33 tokens vs. 35 tokens)!

baihuple · Jun 06 '25