
MiniGPT-4: Enhancing Vision-Language Understanding with Advanced Large Language Models

328 MiniGPT-4 issues (sorted by recently updated)

Traceback (most recent call last):
  File "/data/zhangjie_data_n/zhangjie_data/MiniGPT-4/demo.py", line 60, in
    model = model_cls.from_config(model_config).to('cuda:{}'.format(args.gpu_id))
  File "/data/zhangjie_data_n/zhangjie_data/MiniGPT-4/minigpt4/models/mini_gpt4.py", line 243, in from_config
    model = cls(
  File "/data/zhangjie_data_n/zhangjie_data/MiniGPT-4/minigpt4/models/mini_gpt4.py", line 86, in __init__
    self.llama_tokenizer =...

Loading VIT
Traceback (most recent call last):
  File "C:\Users\Ge Yunxiang\.conda\envs\minigpt4\lib\urllib\request.py", line 1346, in do_open
    h.request(req.get_method(), req.selector, req.data, headers,
  File "C:\Users\Ge Yunxiang\.conda\envs\minigpt4\lib\http\client.py", line 1285, in request
    self._send_request(method, url, body, headers, encode_chunked)...

Thank you for your awesome work! Could you share the training logs for stage 1 and stage 2? Thank you!

Hi, I see that the public demo on Hugging Face Spaces is very crowded, so I have built another public demo website at https://ws-4e7ab832-7a9c-4162-984a-f6b39dafc615-debug.rde-ws.vke-bj.lanrui-ai.com . Would you like to add this...

I have combined the llama13b weights and the vicuna_delta weights, but the following problem occurred: the model's answers are very confusing. I suspect that the llama13b weights may have...

What about running Vicuna in CPU mode and using the GPU for everything that is not Vicuna-related? llama.cpp doesn't use VRAM.

When I tried uploading an image, I got the error below. What is the fix for this issue? Can you give me some suggestions, please? RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling...

I want to finetune on 3090*4/8 (stage two), but there is not enough VRAM... Is there any way to perform finetuning in these circumstances?
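A common workaround is to lower the per-GPU batch size and compensate with gradient accumulation, so that each optimizer step still sees the same number of samples. A minimal sketch of the arithmetic, assuming a standard accumulation loop; the function and parameter names below are illustrative, not MiniGPT-4's actual config keys:

```python
def effective_batch_size(per_gpu_batch: int, accum_steps: int, num_gpus: int) -> int:
    """Samples contributing to each optimizer step under gradient accumulation."""
    return per_gpu_batch * accum_steps * num_gpus

# Example: a hypothetical baseline of 12 samples per GPU on 4 GPUs...
assert effective_batch_size(12, 1, 4) == 48
# ...matches 3 samples per GPU with 4 accumulation steps, at roughly
# a quarter of the activation memory per forward/backward pass.
assert effective_batch_size(3, 4, 4) == 48
```

Gradient checkpointing and mixed precision (if not already enabled) reduce memory further, at the cost of extra recomputation per step.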

OutOfMemoryError: CUDA out of memory. Tried to allocate 12.00 MiB (GPU 0; 15.89 GiB total capacity; 14.38 GiB already allocated; 6.12 MiB free; 14.68 GiB reserved in total by PyTorch)...
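With only 6.12 MiB free, even a 12 MiB allocation fails, so the fix is to reduce peak memory rather than tune the allocator. A generic mitigation is to halve the batch and retry on OOM. A minimal sketch, with a plain callable standing in for the model's forward pass and `oom_exc` standing in for torch.cuda.OutOfMemoryError (assumed; torch is deliberately not imported here):

```python
def run_with_backoff(fn, batch, oom_exc=RuntimeError, min_size=1):
    """Call fn(batch); on an out-of-memory error, halve the batch and retry.

    Illustrative sketch only: `fn` models a forward pass, `oom_exc` models
    the CUDA OOM exception type raised by the framework in use.
    """
    while True:
        try:
            return fn(batch)
        except oom_exc:
            if len(batch) <= min_size:
                raise  # cannot shrink further; surface the OOM
            batch = batch[: len(batch) // 2]
```

In a real PyTorch setup one would also call torch.cuda.empty_cache() between retries to release cached blocks back to the driver.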