Results 40 comments of InconsolableCellist

Using 2120e82 I was able to get the model to load, but it segfaulted again as soon as I tried running inference in the Gradio UI, resulting in "NETWORK ERROR...

I'm currently running the worker with `CUDA_VISIBLE_DEVICES=0 python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path ./models/llava-v1.6-mistral-7b` and the demo is working; so far no segfault.

I updated, did `pip install -e .`, and ran with `CUDA_VISIBLE_DEVICES=0,1 python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path ./models/llava-v1.6-34b --load-8bit`, and I get a...

Mistral 7B worked, probably because I was able to load it into just one GPU. It didn't do a very good job at anything, though, nor did the 4-bit 34B. I...

What version of Python do you have in your environment? I got this error when I used something higher than 3.10.
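Since the mismatch only shows up later as a confusing error, a quick guard at startup can fail fast instead. This is a hedged sketch (the exact version cutoff is an assumption based on my experience above, not a documented requirement):

```python
import sys

# Assumption: the environment breaks on Python newer than 3.10, as observed
# above. Fail fast with a clear message instead of a cryptic error later.
if sys.version_info[:2] > (3, 10):
    raise RuntimeError(
        f"Python {sys.version_info.major}.{sys.version_info.minor} detected; "
        "try Python 3.10 or lower for this environment."
    )
```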

GTK+ and Qt/KDE file pickers/choosers, and most popular desktop applications, support "~" expansion. E.g., right-click this page in Firefox or Chrome, type ~/somefile.txt, and it'll save it in...
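The behavior those file choosers implement is the same as standard home-directory expansion; a minimal sketch of what parsing "~" should do:

```python
import os.path

# Expand a leading "~" to the current user's home directory, the same way
# GTK+/Qt file choosers (and the shell) do. A path without "~" is unchanged.
expanded = os.path.expanduser("~/somefile.txt")
print(expanded)  # e.g. /home/alice/somefile.txt

plain = os.path.expanduser("/tmp/somefile.txt")
print(plain)  # unchanged: /tmp/somefile.txt
```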

According to a comment in the forum (https://forum.audacityteam.org/t/bug-audacity-fails-to-parse-tilde-for-users-home-directory-in-export-audio-dialog/106141), this may be a regression after 3.4.2.

I can give this a try again soon. I had updated my NVIDIA drivers and llama.cpp and started getting kernel panics due to a segfault in libc related to what...

Thanks for the #7 fix, it worked for me. I didn't run into #8 for whatever reason.

:+1: Super important for deaf and hard-of-hearing people as well.