MiniGPT-v2 text-to-text: IndexError when sending text without an image
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to:
https://github.com/TimDettmers/bitsandbytes/issues
Initializing Chat
Loading checkpoint shards: 100%|██████████| 2/2 [00:16<00:00, 8.37s/it]
trainable params: 33554432 || all params: 6771970048 || trainable%: 0.49548996469513035
Position interpolate from 16x16 to 32x32
Load Minigpt-4-LLM Checkpoint: minigpt_llma2-v2/minigptv2_checkpoint.pth
/home/kimo/EL GP/miniChatGpt4/MiniGPT-4/demo_v2.py:547: GradioDeprecationWarning: 'scale' value should be an integer. Using 0.5 will cause issues.
with gr.Column(scale=0.5):
/home/kimo/EL GP/miniChatGpt4/MiniGPT-4/demo_v2.py:647: GradioDeprecationWarning: The enable_queue parameter has been deprecated. Please use the .queue() method instead.
demo.launch(share=True, enable_queue=True)
Running on local URL: http://127.0.0.1:7860
Running on public URL: https://dc10fbe0468a690cd3.gradio.live
This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run gradio deploy from Terminal to deploy to Spaces (https://huggingface.co/spaces)
Traceback (most recent call last):
File "/root/anaconda3/envs/minigptv/lib/python3.9/site-packages/gradio/queueing.py", line 406, in call_prediction
output = await route_utils.call_process_api(
File "/root/anaconda3/envs/minigptv/lib/python3.9/site-packages/gradio/route_utils.py", line 226, in call_process_api
output = await app.get_blocks().process_api(
File "/root/anaconda3/envs/minigptv/lib/python3.9/site-packages/gradio/blocks.py", line 1554, in process_api
result = await self.call_function(
File "/root/anaconda3/envs/minigptv/lib/python3.9/site-packages/gradio/blocks.py", line 1206, in call_function
prediction = await utils.async_iteration(iterator)
File "/root/anaconda3/envs/minigptv/lib/python3.9/site-packages/gradio/utils.py", line 517, in async_iteration
return await iterator.__anext__()
File "/root/anaconda3/envs/minigptv/lib/python3.9/site-packages/gradio/utils.py", line 510, in __anext__
return await anyio.to_thread.run_sync(
File "/root/anaconda3/envs/minigptv/lib/python3.9/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "/root/anaconda3/envs/minigptv/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 2144, in run_sync_in_worker_thread
return await future
File "/root/anaconda3/envs/minigptv/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 851, in run
result = context.run(func, *args)
File "/root/anaconda3/envs/minigptv/lib/python3.9/site-packages/gradio/utils.py", line 493, in run_sync_iterator_async
return next(iterator)
File "/root/anaconda3/envs/minigptv/lib/python3.9/site-packages/gradio/utils.py", line 647, in gen_wrapper
yield from f(*args, **kwargs)
File "/home/kimo/EL GP/miniChatGpt4/MiniGPT-4/demo_v2.py", line 468, in gradio_stream_answer
streamer = chat.stream_answer(conv=chat_state,
File "/home/kimo/EL GP/miniChatGpt4/MiniGPT-4/minigpt4/conversation/conversation.py", line 197, in stream_answer
generation_kwargs = self.answer_prepare(conv, img_list, **kargs)
File "/home/kimo/EL GP/miniChatGpt4/MiniGPT-4/minigpt4/conversation/conversation.py", line 162, in answer_prepare
embs = self.model.get_context_emb(prompt, img_list)
File "/home/kimo/EL GP/miniChatGpt4/MiniGPT-4/minigpt4/models/minigpt_base.py", line 69, in get_context_emb
device = img_list[0].device
IndexError: list index out of range
I get this error when I try to send a text-only message without uploading an image first. The online demo of MiniGPT-v2 handles the same text-only input fine.
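For context, the traceback shows the failure originates in `get_context_emb` (minigpt4/models/minigpt_base.py, line 69), which reads `device = img_list[0].device` before checking whether any image was uploaded, so an empty `img_list` raises `IndexError`. Below is a minimal, self-contained sketch of one possible guard; `get_context_emb_safe`, `embed_fn`, and the `(embedding, device)` pairs are hypothetical stand-ins for the real torch-tensor code, not the project's actual API:

```python
def get_context_emb_safe(prompt, img_list, embed_fn, default_device="cpu"):
    """Interleave text-segment embeddings with image embeddings.

    embed_fn(segment, device) stands in for the model's token-embedding
    call; img_list holds (embedding, device) pairs in this sketch.
    """
    # Guard: with no uploaded image, fall back to a default device instead
    # of indexing an empty list (the line that raises in the traceback).
    device = img_list[0][1] if img_list else default_device

    segments = prompt.split("<ImageHere>")
    # A prompt with N image placeholders must be paired with N images.
    assert len(segments) == len(img_list) + 1, (
        "Unmatched numbers of image placeholders and images."
    )

    mixed = []
    for i, seg in enumerate(segments):
        mixed.append(embed_fn(seg, device))
        if i < len(img_list):
            mixed.append(img_list[i][0])
    return mixed
```

With this guard, a text-only prompt (no `<ImageHere>` placeholder, empty `img_list`) returns the text embedding instead of raising, which may be what the hosted demo does differently.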