Error running with --num-gpus 2
I'm trying to run LLaVA on two RTX 4090 GPUs for inference. The model loads onto the GPUs without any issues, but an error occurs at inference time when I run the sample example from the Gradio web interface.
Here is the error:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! (when checking argument for argument tensors in method wrapper_CUDA_cat)
The error seems to be caused by tensors being on different GPUs.
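For reference, here is a minimal illustration of the failure mode (just a sketch with made-up tensor shapes, not LLaVA code): torch.cat refuses to mix devices, so the image features and the text embeddings would have to be moved onto the same GPU before concatenation.

import torch

# Hypothetical shapes, only to reproduce the error mode on a 2-GPU box.
text_embeds = torch.randn(8, 5120, device="cuda:0")    # token embeddings on GPU 0
image_feats = torch.randn(256, 5120, device="cuda:1")   # image features on GPU 1

# torch.cat((text_embeds, image_feats), dim=0)  # raises the RuntimeError quoted above

# Workaround sketch: move one operand onto the other's device before concatenating.
merged = torch.cat((text_embeds, image_feats.to(text_embeds.device)), dim=0)
print(merged.device)  # cuda:0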
Environment
OS: Ubuntu
Python version: 3.10
CUDA version: 11.8
GPU model: Dual RTX 4090s
Steps to reproduce:
python -m llava.serve.controller --host 0.0.0.0 --port 10000
python3 -m llava.serve.model_worker --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path /home/lukas/Desktop/models/llava --num-gpus 2 --multi-modal
python -m llava.serve.gradio_web_server --controller http://localhost:10000
Run the sample example from the Gradio web interface
Here is the full log from running llava.serve.model_worker:
python3 -m llava.serve.model_worker --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path /home/lukas/Desktop/models/llava --num-gpus 2 --multi-modal
2023-04-19 21:08:00 | INFO | model_worker | args: Namespace(host='localhost', port=40000, worker_address='http://localhost:40000', controller_address='http://localhost:10000', model_path='/home/lukas/Desktop/models/llava', model_name=None, multi_modal=True, keep_aspect_ratio=False, num_gpus=2, limit_model_concurrency=5, stream_interval=2, no_register=False)
2023-04-19 21:08:00 | INFO | model_worker | Loading the model llava on worker 261b0e ...
2023-04-19 21:08:01.135613: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
2023-04-19 21:08:01.155617: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI AVX512_BF16 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
Some weights of the model checkpoint at openai/clip-vit-large-patch14 were not used when initializing CLIPVisionModel: ['text_model.encoder.layers.4.self_attn.v_proj.weight', 'text_model.encoder.layers.11.mlp.fc2.bias', 'text_model.encoder.layers.9.layer_norm1.bias', ... (long list of unused CLIP text-tower weights truncated) ...]
- This IS expected if you are initializing CLIPVisionModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing CLIPVisionModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Loading checkpoint shards: 0%| | 0/3 [00:00<?, ?it/s]
Loading checkpoint shards: 33%|██████ | 1/3 [00:04<00:08, 4.09s/it]
Loading checkpoint shards: 67%|████████████ | 2/3 [00:08<00:04, 4.09s/it]
Loading checkpoint shards: 100%|██████████████████| 3/3 [00:10<00:00, 3.13s/it]
Loading checkpoint shards: 100%|██████████████████| 3/3 [00:10<00:00, 3.39s/it]
2023-04-19 21:08:13 | ERROR | stderr | Some weights of LlamaForCausalLM were not initialized from the model checkpoint at /home/lukas/Desktop/models/llava and are newly initialized: ['model.mm_projector.weight', 'model.mm_projector.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Some weights of the model checkpoint at openai/clip-vit-large-patch14 were not used when initializing CLIPVisionModel: [... same long list of unused CLIP text-tower weights as above, truncated ...]
- This IS expected if you are initializing CLIPVisionModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing CLIPVisionModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
2023-04-19 21:08:15 | INFO | model_worker | Register to controller
2023-04-19 21:08:15 | ERROR | stderr | INFO: Started server process [167257]
2023-04-19 21:08:15 | ERROR | stderr | INFO: Waiting for application startup.
2023-04-19 21:08:15 | ERROR | stderr | INFO: Application startup complete.
2023-04-19 21:08:15 | ERROR | stderr | INFO: Uvicorn running on http://localhost:40000 (Press CTRL+C to quit)
2023-04-19 21:08:27 | INFO | stdout | INFO: 127.0.0.1:53996 - "POST /worker_get_status HTTP/1.1" 200 OK
2023-04-19 21:08:45 | INFO | model_worker | Send heart beat. Models: ['llava']. Semaphore: None. global_counter: 0
2023-04-19 21:09:09 | INFO | stdout | INFO: 127.0.0.1:58114 - "POST /worker_generate_stream HTTP/1.1" 200 OK
2023-04-19 21:09:10 | ERROR | stderr | ERROR: Exception in ASGI application
2023-04-19 21:09:10 | ERROR | stderr | Traceback (most recent call last):
2023-04-19 21:09:10 | ERROR | stderr |   File "/home/lukas/.local/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 436, in run_asgi
2023-04-19 21:09:10 | ERROR | stderr |     result = await app(  # type: ignore[func-returns-value]
2023-04-19 21:09:10 | ERROR | stderr |   File "/home/lukas/.local/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
2023-04-19 21:09:10 | ERROR | stderr |     return await self.app(scope, receive, send)
2023-04-19 21:09:10 | ERROR | stderr |   File "/home/lukas/.local/lib/python3.10/site-packages/fastapi/applications.py", line 270, in __call__
2023-04-19 21:09:10 | ERROR | stderr |     await super().__call__(scope, receive, send)
2023-04-19 21:09:10 | ERROR | stderr |   File "/home/lukas/.local/lib/python3.10/site-packages/starlette/applications.py", line 124, in __call__
2023-04-19 21:09:10 | ERROR | stderr |     await self.middleware_stack(scope, receive, send)
2023-04-19 21:09:10 | ERROR | stderr |   File "/home/lukas/.local/lib/python3.10/site-packages/starlette/middleware/errors.py", line 184, in __call__
2023-04-19 21:09:10 | ERROR | stderr |     raise exc
2023-04-19 21:09:10 | ERROR | stderr |   File "/home/lukas/.local/lib/python3.10/site-packages/starlette/middleware/errors.py", line 162, in __call__
2023-04-19 21:09:10 | ERROR | stderr |     await self.app(scope, receive, _send)
2023-04-19 21:09:10 | ERROR | stderr |   File "/home/lukas/.local/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
2023-04-19 21:09:10 | ERROR | stderr |     raise exc
2023-04-19 21:09:10 | ERROR | stderr |   File "/home/lukas/.local/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
2023-04-19 21:09:10 | ERROR | stderr |     await self.app(scope, receive, sender)
2023-04-19 21:09:10 | ERROR | stderr |   File "/home/lukas/.local/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
2023-04-19 21:09:10 | ERROR | stderr |     raise e
2023-04-19 21:09:10 | ERROR | stderr |   File "/home/lukas/.local/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
2023-04-19 21:09:10 | ERROR | stderr |     await self.app(scope, receive, send)
2023-04-19 21:09:10 | ERROR | stderr |   File "/home/lukas/.local/lib/python3.10/site-packages/starlette/routing.py", line 706, in __call__
2023-04-19 21:09:10 | ERROR | stderr |     await route.handle(scope, receive, send)
2023-04-19 21:09:10 | ERROR | stderr |   File "/home/lukas/.local/lib/python3.10/site-packages/starlette/routing.py", line 276, in handle
2023-04-19 21:09:10 | ERROR | stderr |     await self.app(scope, receive, send)
2023-04-19 21:09:10 | ERROR | stderr |   File "/home/lukas/.local/lib/python3.10/site-packages/starlette/routing.py", line 69, in app
2023-04-19 21:09:10 | ERROR | stderr |     await response(scope, receive, send)
2023-04-19 21:09:10 | ERROR | stderr |   File "/home/lukas/.local/lib/python3.10/site-packages/starlette/responses.py", line 266, in __call__
2023-04-19 21:09:10 | ERROR | stderr |     async with anyio.create_task_group() as task_group:
2023-04-19 21:09:10 | ERROR | stderr |   File "/home/lukas/.local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 662, in __aexit__
2023-04-19 21:09:10 | ERROR | stderr |     raise exceptions[0]
2023-04-19 21:09:10 | ERROR | stderr |   File "/home/lukas/.local/lib/python3.10/site-packages/starlette/responses.py", line 269, in wrap
2023-04-19 21:09:10 | ERROR | stderr |     await func()
2023-04-19 21:09:10 | ERROR | stderr |   File "/home/lukas/.local/lib/python3.10/site-packages/starlette/responses.py", line 258, in stream_response
2023-04-19 21:09:10 | ERROR | stderr |     async for chunk in self.body_iterator:
2023-04-19 21:09:10 | ERROR | stderr |   File "/home/lukas/.local/lib/python3.10/site-packages/starlette/concurrency.py", line 63, in iterate_in_threadpool
2023-04-19 21:09:10 | ERROR | stderr |     yield await anyio.to_thread.run_sync(_next, iterator)
2023-04-19 21:09:10 | ERROR | stderr |   File "/home/lukas/.local/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
2023-04-19 21:09:10 | ERROR | stderr |     return await get_asynclib().run_sync_in_worker_thread(
2023-04-19 21:09:10 | ERROR | stderr |   File "/home/lukas/.local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
2023-04-19 21:09:10 | ERROR | stderr |     return await future
2023-04-19 21:09:10 | ERROR | stderr |   File "/home/lukas/.local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
2023-04-19 21:09:10 | ERROR | stderr |     result = context.run(func, *args)
2023-04-19 21:09:10 | ERROR | stderr |   File "/home/lukas/.local/lib/python3.10/site-packages/starlette/concurrency.py", line 53, in _next
2023-04-19 21:09:10 | ERROR | stderr |     return next(iterator)
2023-04-19 21:09:10 | ERROR | stderr |   File "/home/lukas/Desktop/LLaVA/llava/serve/model_worker.py", line 295, in generate_stream_gate
2023-04-19 21:09:10 | ERROR | stderr |     for x in self.generate_stream(params):
2023-04-19 21:09:10 | ERROR | stderr |   File "/home/lukas/.local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 35, in generator_context
2023-04-19 21:09:10 | ERROR | stderr |     response = gen.send(None)
2023-04-19 21:09:10 | ERROR | stderr |   File "/home/lukas/Desktop/LLaVA/llava/serve/model_worker.py", line 234, in generate_stream
2023-04-19 21:09:10 | ERROR | stderr |     out = model(
2023-04-19 21:09:10 | ERROR | stderr |   File "/home/lukas/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
2023-04-19 21:09:10 | ERROR | stderr |     return forward_call(*args, **kwargs)
2023-04-19 21:09:10 | ERROR | stderr |   File "/home/lukas/.local/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward
2023-04-19 21:09:10 | ERROR | stderr |     output = old_forward(*args, **kwargs)
2023-04-19 21:09:10 | ERROR | stderr |   File "/home/lukas/Desktop/LLaVA/transformers/src/transformers/models/llama/modeling_llama.py", line 844, in forward
2023-04-19 21:09:10 | ERROR | stderr |     outputs = self.model(
2023-04-19 21:09:10 | ERROR | stderr |   File "/home/lukas/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
2023-04-19 21:09:10 | ERROR | stderr |     return forward_call(*args, **kwargs)
2023-04-19 21:09:10 | ERROR | stderr |   File "/home/lukas/Desktop/LLaVA/transformers/src/transformers/models/llama/modeling_llama.py", line 631, in forward
2023-04-19 21:09:10 | ERROR | stderr |     cur_new_input_embeds = torch.cat((cur_input_embeds[:image_start_token_pos+1], cur_image_features, cur_input_embeds[image_start_token_pos + num_patches + 1:]), dim=0)
2023-04-19 21:09:10 | ERROR | stderr | RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! (when checking argument for argument tensors in method wrapper_CUDA_cat)
Same problem here.
Hi, thank you for your interest in our work. We are working on multi-GPU support and will try to provide a solution today. Thanks.
Thank you so much! Really excited for this project, you guys are doing great work.
@valine Hi, it has been fixed and our code now supports inference with multiple GPUs.
I have tested it with 2x RTX 3090, and it works fine. Note that you need to reinstall our latest fork of transformers in order for it to work.
See here for details.
@haotian-liu
Thanks for the quick update! I just tested it out, the model is loading and inference is working on my dual 4090s.
I am however getting strange results from the model. The sample image and prompt of the man ironing on the back of a taxi return a strange description. It's not gibberish, so the language model seems to be working, but something seems off, maybe with the image embeddings.
A few outputs I got from the sample prompt:
"In the image, there is a person wearing a black cape and a black hat. The black cape is standing on a pile of clothes."
"In the image, there is a person wearing a black cape and holding a book. The scene is unusual because it features a large black dog wearing a cape in a room with a book and a bottle of wine. The dog is also holding a banana, which is an unexpected object to find in the image."
"The image depicts a scene where a man is using a cell phone to capture an interesting moment. However, the presence of the cell phone in the image is questionable, as it is placed in a unusual location and positioned at an unexpected angle. This combination of objects and scene elements suggest that the image has been staged with a fake cell phone placed in an unusual location, and a unclear context."
This is what I got on my 2x RTX 3090s.
"The unusual aspect of this image is that a man is standing on a car in the middle of the street while doing his laundry. He has a portable ironing board set up on the roof of the car, and he is ironing clothes. This scene is particularly odd because it is not typical to see someone ironing clothes in the middle of a street, especially on top of a car. Such an activity can cause traffic disruptions and is generally considered unsafe and inappropriate for an urban setting."
The issue you are seeing may be due to a version difference in the tokenizer. Can you try creating a completely new Conda environment and running the installation again? This may solve the issue.
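If a fresh environment does not help, one quick sanity check (a generic diagnostic sketch, not code from our repo) is to compare the tokenizer size against the vocab size in the converted checkpoint's config; a mismatch usually means the tokenizer and weights do not belong together:

from transformers import AutoTokenizer, AutoConfig

model_path = "/home/lukas/Desktop/models/llava"   # the path used in the commands above
tokenizer = AutoTokenizer.from_pretrained(model_path)
config = AutoConfig.from_pretrained(model_path)
print("tokenizer size:", len(tokenizer))        # the delta conversion may add extra multimodal tokens
print("config vocab_size:", config.vocab_size)  # should match the tokenizer size for a correctly converted checkpoint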
Hmm, no luck. I went through the whole installation again, and this was the model response: "The unusual aspect of this image is the presence of a language model with a "funny" in it. Typically, you would expect to see a combination of letters, numbers, and underscoresultimate. However, I can only see a few lines of code, as I am not capable of understanding or analyzing the image."
Sounds like the problem is something in my environment, thanks for your help.
Have you performed the model delta weight conversion? It seems that the model is not understanding the image patches correctly.
https://github.com/haotian-liu/LLaVA#llava-13b
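For reference, the conversion described there is a command roughly of this form (treat the paths and the delta name as placeholders and check the README linked above for the exact invocation):

python3 -m llava.model.apply_delta --base /path/to/llama-13b --target /path/to/output/LLaVA-13B-v0 --delta liuhaotian/LLaVA-13b-delta-v0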
Yeah I applied the delta to the llama 13b v1 weights.
This is the content of my llama13b folder that I applied the weights to:
config.json
pytorch_model.bin.index.json
generation_config.json
special_tokens_map.json
pytorch_model-00001-of-00003.bin
tokenizer_config.json
pytorch_model-00002-of-00003.bin
tokenizer.model
pytorch_model-00003-of-00003.bin
And the contents of the llava folder with the weights applied:
added_tokens.json
pytorch_model-00003-of-00003.bin
config.json
pytorch_model.bin.index.json
generation_config.json
special_tokens_map.json
pytorch_model-00001-of-00003.bin
tokenizer_config.json
pytorch_model-00002-of-00003.bin
tokenizer.model
Ok I got it working properly now. I re-applied the delta to the v2 llama13b and made sure to have your custom version of transformers installed.
Thanks again for your help!
Glad to hear that it works! You may close the issue :)
Out of curiosity, can you please explain what is v1 v2 of llama-13b? Thanks.
There's nothing different about the weights between the versions as I understand it; I'm not sure if v1 and v2 are the proper terms for it. There have been multiple versions of the Hugging Face weights floating around with different config files. The weights I originally applied the deltas to were downloaded back in early March, and they seem incompatible with your deltas for whatever reason.
Thanks for the explanation! We hope you enjoy playing with LLaVA. Looking forward to more feedback :)
Hi, great work. Since it is the same question, I won't post another issue.
After installing the correct version of transformers, I still get the error when using multiple GPUs for inference:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:3!
ENV:
OS: Ubuntu
Python version: 3.10
CUDA version: 11.7
GPU model: 4 x NVIDIA GeForce GTX 1080 Ti
Instead of running the web demo, I am writing a small script that reads a local image. Running with one GPU produces a meaningful answer, so I assume the weights are okay.
Any advice on how to solve this would be very helpful.
@penghe2021 Can you paste the full error log here so that I can see which line is causing this issue (wrapping it with ``` would make it easier to read)? Thanks!
Sure, here is the log
total gpu resources allocated: 1,2,3,4
Some weights of the model checkpoint at openai/clip-vit-large-patch14 were not used when initializing CLIPVisionModel: ['text_model.encoder.layers.5.layer_norm2.bias', 'text_model.encoder.layers.4.self_attn.v_proj.bias', 'text_model.encoder.layers.9.self_attn.v_proj.weight', ... (long list of unused CLIP text-tower weights truncated) ...]
- This IS expected if you are initializing CLIPVisionModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing CLIPVisionModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
NVIDIA GeForce GTX 1080 Ti
NVIDIA GeForce GTX 1080 Ti
NVIDIA GeForce GTX 1080 Ti
NVIDIA GeForce GTX 1080 Ti
Loading checkpoint shards: 0%| | 0/3 [00:00<?, ?it/s]
Loading checkpoint shards: 33%|███▎ | 1/3 [00:16<00:32, 16.48s/it]
Loading checkpoint shards: 67%|██████▋ | 2/3 [00:32<00:16, 16.24s/it]
Loading checkpoint shards: 100%|██████████| 3/3 [00:42<00:00, 13.26s/it]
Loading checkpoint shards: 100%|██████████| 3/3 [00:42<00:00, 14.09s/it]
Some weights of the model checkpoint at openai/clip-vit-large-patch14 were not used when initializing CLIPVisionModel: [... same long list of unused CLIP text-tower weights as above, truncated ...]
- This IS expected if you are initializing CLIPVisionModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing CLIPVisionModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Traceback (most recent call last):
File "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/LLaVA/test/test01.py", line 228, in <module>
output_string = model_worker.generate_reply(params)
File "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/LLaVA/test/test01.py", line 123, in generate_reply
out = model(
File "/nas/home/phe/anaconda3/envs/llava/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/nas/home/phe/anaconda3/envs/llava/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "/nas/home/phe/anaconda3/envs/llava/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 844, in forward
outputs = self.model(
File "/nas/home/phe/anaconda3/envs/llava/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/nas/home/phe/anaconda3/envs/llava/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 617, in forward
cur_input_embeds = cur_input_embeds + (0. * dummy_image_features).sum()
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:3!
Line 123 in test01.py is the same as this line: https://github.com/haotian-liu/LLaVA/blob/2f439b5b019e8e7fe8b8147f05d4c71a079d65e4/llava/serve/model_worker.py#L232
Thanks for the help
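One possible workaround for the traceback above (a sketch only, not the repository's fix) is to move the dummy image features onto the device of the embeddings they are added to, right before the no-op addition shown in the stack trace:

# Hypothetical patch sketch around the line reported in the traceback
# (modeling_llama.py line 617): keep the dummy image features on the same
# device as the current input embeddings before the no-op addition.
dummy_image_features = dummy_image_features.to(cur_input_embeds.device)
cur_input_embeds = cur_input_embeds + (0. * dummy_image_features).sum()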
I tried running this with the multi-GPU worker (4 GPUs to simulate your case, with both text-only and multimodal input), and it seems to run fine. Would you mind confirming this by running the demo the same way? That way we can isolate the issue.
And FYI, this is how we implemented multi-GPU support in our worker: https://github.com/haotian-liu/LLaVA/blob/2f439b5b019e8e7fe8b8147f05d4c71a079d65e4/llava/serve/model_worker.py#L52-L59.
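For readers not following the link, the pattern there is an accelerate-style sharded load. Roughly, it looks like the sketch below; the model class, checkpoint path, and per-GPU memory cap are illustrative placeholders, not values copied from the repository (the actual worker uses LLaVA's own model class and derives max_memory from --num-gpus):

# Sketch of accelerate-style multi-GPU loading (illustrative only).
import torch
from transformers import AutoModelForCausalLM

num_gpus = 2
model = AutoModelForCausalLM.from_pretrained(
    "/path/to/llava-checkpoint",                        # placeholder path
    torch_dtype=torch.float16,
    device_map="auto",                                  # let accelerate place layers across GPUs
    max_memory={i: "13GiB" for i in range(num_gpus)},   # example per-GPU cap
)

With this kind of placement, different layers can live on different GPUs, which is why tensors created on one device (such as the projected image features) must be moved explicitly before being combined with tensors that live on another.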
Sorry, I cannot start the web UI on the server; I will try running it on CPU instead.
Yes, I use the same code for the multi-GPU implementation.
Thanks for the help
Same problem with the current transformers installed via:
pip uninstall transformers
pip install git+https://github.com/huggingface/transformers@cae78c46
My commands:
python -m llava.serve.controller --host 0.0.0.0 --port 10000
python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path /media/tensorboy/4df39736-fd86-443e-b6fe-a6ae0fe28d72/llama-dl-main/LLaVA-7B-v0 --multi-modal --num-gpus 2
python -m llava.serve.gradio_web_server --controller http://localhost:10000 --share
logs:
2023-05-01 20:46:05 | ERROR | stderr | INFO: Started server process [3502503]
2023-05-01 20:46:05 | ERROR | stderr | INFO: Waiting for application startup.
2023-05-01 20:46:05 | ERROR | stderr | INFO: Application startup complete.
2023-05-01 20:46:05 | ERROR | stderr | INFO: Uvicorn running on http://0.0.0.0:40000 (Press CTRL+C to quit)
2023-05-01 20:46:20 | INFO | model_worker | Send heart beat. Models: ['LLaVA-7B-v0']. Semaphore: None. global_counter: 0
2023-05-01 20:46:35 | INFO | model_worker | Send heart beat. Models: ['LLaVA-7B-v0']. Semaphore: None. global_counter: 0
2023-05-01 20:46:50 | INFO | model_worker | Send heart beat. Models: ['LLaVA-7B-v0']. Semaphore: None. global_counter: 0
2023-05-01 20:47:05 | INFO | model_worker | Send heart beat. Models: ['LLaVA-7B-v0']. Semaphore: None. global_counter: 0
2023-05-01 20:47:20 | INFO | model_worker | Send heart beat. Models: ['LLaVA-7B-v0']. Semaphore: None. global_counter: 0
2023-05-01 20:47:35 | INFO | model_worker | Send heart beat. Models: ['LLaVA-7B-v0']. Semaphore: None. global_counter: 0
2023-05-01 20:47:48 | INFO | stdout | INFO: 127.0.0.1:56932 - "POST /worker_get_status HTTP/1.1" 200 OK
2023-05-01 20:47:50 | INFO | model_worker | Send heart beat. Models: ['LLaVA-7B-v0']. Semaphore: None. global_counter: 0
2023-05-01 20:48:05 | INFO | model_worker | Send heart beat. Models: ['LLaVA-7B-v0']. Semaphore: None. global_counter: 0
2023-05-01 20:48:20 | INFO | model_worker | Send heart beat. Models: ['LLaVA-7B-v0']. Semaphore: None. global_counter: 0
2023-05-01 20:48:25 | INFO | model_worker | Send heart beat. Models: ['LLaVA-7B-v0']. Semaphore: Semaphore(value=4, locked=False). global_counter: 1
2023-05-01 20:48:25 | INFO | stdout | INFO: 127.0.0.1:57470 - "POST /worker_generate_stream HTTP/1.1" 200 OK
2023-05-01 20:48:27 | ERROR | stderr | ERROR: Exception in ASGI application
2023-05-01 20:48:27 | ERROR | stderr | Traceback (most recent call last):
2023-05-01 20:48:27 | ERROR | stderr | File "/home/tensorboy/anaconda3/envs/llava/lib/python3.10/site-packages/uvicorn/protocols/http/h11_impl.py", line 428, in run_asgi
2023-05-01 20:48:27 | ERROR | stderr | result = await app( # type: ignore[func-returns-value]
2023-05-01 20:48:27 | ERROR | stderr | File "/home/tensorboy/anaconda3/envs/llava/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
2023-05-01 20:48:27 | ERROR | stderr | return await self.app(scope, receive, send)
2023-05-01 20:48:27 | ERROR | stderr | File "/home/tensorboy/anaconda3/envs/llava/lib/python3.10/site-packages/fastapi/applications.py", line 276, in __call__
2023-05-01 20:48:27 | ERROR | stderr | await super().__call__(scope, receive, send)
2023-05-01 20:48:27 | ERROR | stderr | File "/home/tensorboy/anaconda3/envs/llava/lib/python3.10/site-packages/starlette/applications.py", line 122, in __call__
2023-05-01 20:48:27 | ERROR | stderr | await self.middleware_stack(scope, receive, send)
2023-05-01 20:48:27 | ERROR | stderr | File "/home/tensorboy/anaconda3/envs/llava/lib/python3.10/site-packages/starlette/middleware/errors.py", line 184, in __call__
2023-05-01 20:48:27 | ERROR | stderr | raise exc
2023-05-01 20:48:27 | ERROR | stderr | File "/home/tensorboy/anaconda3/envs/llava/lib/python3.10/site-packages/starlette/middleware/errors.py", line 162, in __call__
2023-05-01 20:48:27 | ERROR | stderr | await self.app(scope, receive, _send)
2023-05-01 20:48:27 | ERROR | stderr | File "/home/tensorboy/anaconda3/envs/llava/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
2023-05-01 20:48:27 | ERROR | stderr | raise exc
2023-05-01 20:48:27 | ERROR | stderr | File "/home/tensorboy/anaconda3/envs/llava/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
2023-05-01 20:48:27 | ERROR | stderr | await self.app(scope, receive, sender)
2023-05-01 20:48:27 | ERROR | stderr | File "/home/tensorboy/anaconda3/envs/llava/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
2023-05-01 20:48:27 | ERROR | stderr | raise e
2023-05-01 20:48:27 | ERROR | stderr | File "/home/tensorboy/anaconda3/envs/llava/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
2023-05-01 20:48:27 | ERROR | stderr | await self.app(scope, receive, send)
2023-05-01 20:48:27 | ERROR | stderr | File "/home/tensorboy/anaconda3/envs/llava/lib/python3.10/site-packages/starlette/routing.py", line 718, in __call__
2023-05-01 20:48:27 | ERROR | stderr | await route.handle(scope, receive, send)
2023-05-01 20:48:27 | ERROR | stderr | File "/home/tensorboy/anaconda3/envs/llava/lib/python3.10/site-packages/starlette/routing.py", line 276, in handle
2023-05-01 20:48:27 | ERROR | stderr | await self.app(scope, receive, send)
2023-05-01 20:48:27 | ERROR | stderr | File "/home/tensorboy/anaconda3/envs/llava/lib/python3.10/site-packages/starlette/routing.py", line 69, in app
2023-05-01 20:48:27 | ERROR | stderr | await response(scope, receive, send)
2023-05-01 20:48:27 | ERROR | stderr | File "/home/tensorboy/anaconda3/envs/llava/lib/python3.10/site-packages/starlette/responses.py", line 270, in __call__
2023-05-01 20:48:27 | ERROR | stderr | async with anyio.create_task_group() as task_group:
2023-05-01 20:48:27 | ERROR | stderr | File "/home/tensorboy/anaconda3/envs/llava/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 662, in __aexit__
2023-05-01 20:48:27 | ERROR | stderr | raise exceptions[0]
2023-05-01 20:48:27 | ERROR | stderr | File "/home/tensorboy/anaconda3/envs/llava/lib/python3.10/site-packages/starlette/responses.py", line 273, in wrap
2023-05-01 20:48:27 | ERROR | stderr | await func()
2023-05-01 20:48:27 | ERROR | stderr | File "/home/tensorboy/anaconda3/envs/llava/lib/python3.10/site-packages/starlette/responses.py", line 262, in stream_response
2023-05-01 20:48:27 | ERROR | stderr | async for chunk in self.body_iterator:
2023-05-01 20:48:27 | ERROR | stderr | File "/home/tensorboy/anaconda3/envs/llava/lib/python3.10/site-packages/starlette/concurrency.py", line 63, in iterate_in_threadpool
2023-05-01 20:48:27 | ERROR | stderr | yield await anyio.to_thread.run_sync(_next, iterator)
2023-05-01 20:48:27 | ERROR | stderr | File "/home/tensorboy/anaconda3/envs/llava/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
2023-05-01 20:48:27 | ERROR | stderr | return await get_asynclib().run_sync_in_worker_thread(
2023-05-01 20:48:27 | ERROR | stderr | File "/home/tensorboy/anaconda3/envs/llava/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
2023-05-01 20:48:27 | ERROR | stderr | return await future
2023-05-01 20:48:27 | ERROR | stderr | File "/home/tensorboy/anaconda3/envs/llava/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
2023-05-01 20:48:27 | ERROR | stderr | result = context.run(func, *args)
2023-05-01 20:48:27 | ERROR | stderr | File "/home/tensorboy/anaconda3/envs/llava/lib/python3.10/site-packages/starlette/concurrency.py", line 53, in _next
2023-05-01 20:48:27 | ERROR | stderr | return next(iterator)
2023-05-01 20:48:27 | ERROR | stderr | File "/home/tensorboy/LLaVA/llava/serve/model_worker.py", line 292, in generate_stream_gate
2023-05-01 20:48:27 | ERROR | stderr | for x in self.generate_stream(params):
2023-05-01 20:48:27 | ERROR | stderr | File "/home/tensorboy/anaconda3/envs/llava/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 43, in generator_context
2023-05-01 20:48:27 | ERROR | stderr | response = gen.send(None)
2023-05-01 20:48:27 | ERROR | stderr | File "/home/tensorboy/LLaVA/llava/serve/model_worker.py", line 239, in generate_stream
2023-05-01 20:48:27 | ERROR | stderr | out = model(
2023-05-01 20:48:27 | ERROR | stderr | File "/home/tensorboy/anaconda3/envs/llava/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
2023-05-01 20:48:27 | ERROR | stderr | return forward_call(*input, **kwargs)
2023-05-01 20:48:27 | ERROR | stderr | File "/home/tensorboy/anaconda3/envs/llava/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward
2023-05-01 20:48:27 | ERROR | stderr | output = old_forward(*args, **kwargs)
2023-05-01 20:48:27 | ERROR | stderr | File "/home/tensorboy/LLaVA/llava/model/llava.py", line 218, in forward
2023-05-01 20:48:27 | ERROR | stderr | outputs = self.model(
2023-05-01 20:48:27 | ERROR | stderr | File "/home/tensorboy/anaconda3/envs/llava/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
2023-05-01 20:48:27 | ERROR | stderr | return forward_call(*input, **kwargs)
2023-05-01 20:48:27 | ERROR | stderr | File "/home/tensorboy/LLaVA/llava/model/llava.py", line 159, in forward
2023-05-01 20:48:27 | ERROR | stderr | cur_new_input_embeds = torch.cat((cur_input_embeds[:image_start_token_pos+1], cur_image_features, cur_input_embeds[image_start_token_pos + num_patches + 1:]), dim=0)
2023-05-01 20:48:27 | ERROR | stderr | RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! (when checking argument for argument tensors in method wrapper_cat)
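As with the earlier traceback, a possible workaround (a sketch only, not an official fix) is to move the projected image features onto the device that holds the text embeddings before the torch.cat reported in the last frame:

# Hypothetical sketch around the line reported above (llava/model/llava.py line 159):
# bring the image features to the device of the text embeddings before splicing
# them into the token-embedding sequence.
cur_image_features = cur_image_features.to(cur_input_embeds.device)
cur_new_input_embeds = torch.cat(
    (cur_input_embeds[:image_start_token_pos + 1],
     cur_image_features,
     cur_input_embeds[image_start_token_pos + num_patches + 1:]),
    dim=0,
)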