
Some weights of LlamaForCausalLM were not initialized from the model checkpoint at ../pretrained_ckpt/vicuna_ckpt/7b_v0/ and are newly initialized

Junnian opened this issue 1 year ago · 1 comment

```
[!] load base configuration: config/base.yaml
[!] load configuration from config/openllama_peft.yaml
/root/anaconda3/envs/py310/lib/python3.10/site-packages/torchvision/transforms/_functional_video.py:6: UserWarning: The 'torchvision.transforms._functional_video' module is deprecated since 0.12 and will be removed in the future. Please use the 'torchvision.transforms.functional' module instead.
  warnings.warn(
/root/anaconda3/envs/py310/lib/python3.10/site-packages/torchvision/transforms/_transforms_video.py:22: UserWarning: The 'torchvision.transforms._transforms_video' module is deprecated since 0.12 and will be removed in the future. Please use the 'torchvision.transforms' module instead.
  warnings.warn(
[!] load base configuration: config/base.yaml
[!] load configuration from config/openllama_peft.yaml
[2023-11-19 01:50:01,333] [INFO] [comm.py:622:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
[!] collect 161151 samples for training
Initializing visual encoder from ../pretrained_ckpt/imagebind_ckpt/ ...
[!] collect 161151 samples for training
Initializing visual encoder from ../pretrained_ckpt/imagebind_ckpt/ ...
Visual encoder initialized.
Initializing language decoder from ../pretrained_ckpt/vicuna_ckpt/7b_v0/ ...
Visual encoder initialized.
Initializing language decoder from ../pretrained_ckpt/vicuna_ckpt/7b_v0/ ...
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:03<00:00, 1.04s/it]
Some weights of LlamaForCausalLM were not initialized from the model checkpoint at ../pretrained_ckpt/vicuna_ckpt/7b_v0/ and are newly initialized: ['model.layers.26.self_attn.rotary_emb.inv_freq', 'model.layers.22.self_attn.rotary_emb.inv_freq', 'model.layers.19.self_attn.rotary_emb.inv_freq', 'model.layers.18.self_attn.rotary_emb.inv_freq', 'model.layers.25.self_attn.rotary_emb.inv_freq', 'model.layers.27.self_attn.rotary_emb.inv_freq', 'model.layers.12.self_attn.rotary_emb.inv_freq', 'model.layers.0.self_attn.rotary_emb.inv_freq', 'model.layers.6.self_attn.rotary_emb.inv_freq', 'model.layers.31.self_attn.rotary_emb.inv_freq', 'model.layers.2.self_attn.rotary_emb.inv_freq', 'model.layers.9.self_attn.rotary_emb.inv_freq', 'model.layers.29.self_attn.rotary_emb.inv_freq', 'model.layers.21.self_attn.rotary_emb.inv_freq', 'model.layers.8.self_attn.rotary_emb.inv_freq', 'model.layers.24.self_attn.rotary_emb.inv_freq', 'model.layers.14.self_attn.rotary_emb.inv_freq', 'model.layers.23.self_attn.rotary_emb.inv_freq', 'model.layers.28.self_attn.rotary_emb.inv_freq', 'model.layers.15.self_attn.rotary_emb.inv_freq', 'model.layers.17.self_attn.rotary_emb.inv_freq', 'model.layers.7.self_attn.rotary_emb.inv_freq', 'model.layers.1.self_attn.rotary_emb.inv_freq', 'model.layers.4.self_attn.rotary_emb.inv_freq', 'model.layers.11.self_attn.rotary_emb.inv_freq', 'model.layers.5.self_attn.rotary_emb.inv_freq', 'model.layers.10.self_attn.rotary_emb.inv_freq', 'model.layers.3.self_attn.rotary_emb.inv_freq', 'model.layers.16.self_attn.rotary_emb.inv_freq', 'model.layers.30.self_attn.rotary_emb.inv_freq', 'model.layers.13.self_attn.rotary_emb.inv_freq', 'model.layers.20.self_attn.rotary_emb.inv_freq']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
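For context: as far as I can tell, every flagged entry is a `rotary_emb.inv_freq` buffer. These are not learned weights but deterministic rotary-position-embedding (RoPE) frequencies that transformers recomputes when the model is constructed, so the warning should be harmless; the mismatch typically just means the installed transformers version handles the `inv_freq` buffer differently than the version that saved the Vicuna checkpoint. A minimal sketch of why the values are identical either way (the function name is my own; it assumes the standard RoPE formula used by `LlamaRotaryEmbedding`):

```python
import torch

# Minimal sketch (assumption: this mirrors the standard RoPE frequency
# computation): inv_freq[i] = 1 / base**(2i / head_dim), one entry per
# pair of hidden dimensions.
def rope_inv_freq(head_dim: int = 128, base: float = 10000.0) -> torch.Tensor:
    return 1.0 / (base ** (torch.arange(0, head_dim, 2, dtype=torch.float32) / head_dim))

# LLaMA-7B: hidden_size 4096 / 32 heads -> head_dim 128.
# The output is identical on every run, so "newly initialized" here does
# not mean randomly initialized.
print(rope_inv_freq()[:4])
```

Since the buffer is a pure function of the head dimension and the RoPE base, loading it from the checkpoint versus recomputing it makes no numerical difference to the model.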

Junnian · Nov 19 '23 02:11