Hello, I'm very interested in your work, but I ran into a problem when running the code. I'm using a 48 GB A6000 GPU with CUDA 12.2. Do you have any idea how to resolve this? Thanks.
```
Traceback (most recent call last):
  File "/hy-tmp/VSA/app.py", line 102, in <module>
    vsa = VisionSearchAssistant()
  File "/hy-tmp/VSA/models/vsa_model.py", line 298, in __init__
    self.vlm = VLM(
  File "/hy-tmp/VSA/models/vsa_model.py", line 132, in __init__
    self.tokenizer, self.model, self.image_processor, self.context_len = load_pretrained_model(
  File "/hy-tmp/VSA/models/LLaVA/llava/model/builder.py", line 118, in load_pretrained_model
    model = LlavaLlamaForCausalLM.from_pretrained(
  File "/usr/local/miniconda3/envs/vsa/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3820, in from_pretrained
    dispatch_model(model, **device_map_kwargs)
  File "/usr/local/miniconda3/envs/vsa/lib/python3.10/site-packages/accelerate/big_modeling.py", line 496, in dispatch_model
    model.to(device)
  File "/usr/local/miniconda3/envs/vsa/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2702, in to
    raise ValueError(
ValueError: .to is not supported for 4-bit or 8-bit bitsandbytes models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct dtype.
```
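In case it helps to narrow things down: the traceback shows `dispatch_model` calling `model.to(device)` on a model loaded with bitsandbytes 4-bit/8-bit quantization, which transformers forbids because the quantized weights are already placed and casted at load time. The usual pattern is to let `device_map` handle placement during `from_pretrained` and never call `.to()` afterwards. Below is a minimal sketch of that pattern, assuming a 4-bit load; the import path and checkpoint id are placeholders, and the quantization flags should mirror whatever `load_pretrained_model` in builder.py actually passes.

```python
import torch
from transformers import BitsAndBytesConfig
# Placeholder import -- adjust to the LLaVA package layout inside this repo.
from llava.model import LlavaLlamaForCausalLM

# Assumed 4-bit settings; match these to the flags builder.py sets.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

model = LlavaLlamaForCausalLM.from_pretrained(
    "liuhaotian/llava-v1.5-7b",     # placeholder checkpoint id
    quantization_config=quant_config,
    device_map="auto",              # accelerate places the quantized weights here
)

# Do NOT call model.to("cuda") afterwards: on a bitsandbytes-quantized model
# that call raises exactly the ValueError above, since the weights are already
# on the right device(s) and in the right dtype.
```

Alternatively, since a 48 GB A6000 can hold the fp16 weights of a 7B or 13B LLaVA model, disabling quantization (LLaVA's `load_pretrained_model` accepts `load_8bit`/`load_4bit` flags, both defaulting to False, assuming VSA doesn't hard-code them) would sidestep the quantized dispatch path entirely.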