Timon Käch

73 comments by Timon Käch

This is the .env:

```
################################################################################
### LLM PROVIDER
################################################################################

### OPENAI
## OPENAI_API_KEY - OpenAI API Key (Example: my-openai-api-key)
## TEMPERATURE - Sets temperature in OpenAI (Default: 0)
## USE_AZURE...
```
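For context, a `.env` file like the one above is just `KEY=VALUE` lines with `#` comments; a minimal stdlib-only sketch of loading it (the `load_env` helper and file name are hypothetical, assuming the format shown in the snippet):

```python
import os

def load_env(path=".env"):
    """Parse KEY=VALUE lines from a dotenv-style file into os.environ.

    Blank lines and lines starting with '#' (comments) are skipped.
    """
    values = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            # Split on the first '=' only, so values may contain '='.
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
    os.environ.update(values)
    return values
```

In practice most projects use the `python-dotenv` package for this instead of hand-rolling a parser.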

Thank you, it fixed the issue, but there's a new one now:

```
ValueError: all the input array dimensions for the concatenation axis must match exactly, but along dimension 1, the...
```

```
To create a public link, set `share=True` in `launch()`.
Traceback (most recent call last):
  File "/home/cybertimon/miniconda3/lib/python3.10/site-packages/gradio/routes.py", line 394, in run_predict
    output = await app.get_blocks().process_api(
  File "/home/cybertimon/miniconda3/lib/python3.10/site-packages/gradio/blocks.py", line 1075, in process_api...
```

```
(base) cybertimon@server:~/Repositorys/text-generation-webui$ python3 server.py --model llava-13b-4bit-128g --gpu-memory 12 --wbits 4 --model_type llama --groupsize 128 --listen-host 0.0.0.0 --listen --xformers --extension llava --chat --listen-port 21129
Gradio HTTP request redirected to localhost :)...
```

Also, when I change the settings to use the CPU, `{'add_all_images_to_prompt': False, 'clip_device': 'cpu', 'clip_bits': 32, 'projector_device': 'cpu', 'projector_bits': 32}` (logged as `cpu torch.float32 cpu torch.float32`), I still get only `888888` as the answer.

> @CyberTimon remove settings.json, then restart webui, clear the history, and try with this image: https://github.com/haotian-liu/LLaVA/blob/main/llava/serve/examples/extreme_ironing.jpg, with "What is unusual about this image?" prompt, exactly as in my video. If...

Oh, I found what the issue was: when `max_new_tokens` is set above 1600, it generates only garbage.

Thanks for fixing the prompt return. This was driving me crazy, lol; I couldn't figure out why it always returned the prompt.

Try using `--gpu-memory 12`. Also, do you have a second GPU? Are you using the latest GPTQ branch or the standard oobabooga one?

It still doesn't work.