gsgoldma

Results: 35 comments by gsgoldma

> Maybe worth noting there's a fork of the paint-with-words repository which uses transformer pipelines: [paint-with-words-pipelines](https://github.com/shunk031/paint-with-words-pipeline). > > I've created a minimal example on top of it which can be...

Is there also a way to fix this `cl` issue? I think I saw something about it, but I can't find where.

C:\Users\Gregory\.conda\envs\llama4bit\lib\site-packages\torch\utils\cpp_extension.py:359: UserWarning: Error checking compiler version for cl:...

> Is there also a way to fix this `cl` issue? I think I saw something about it, but I can't find where. C:\Users\Gregory\.conda\envs\llama4bit\lib\site-packages\torch\utils\cpp_extension.py:359: UserWarning: Error checking compiler version for...

Is the Nvidia GTX 980 Ti card supported?

> You have to do the symbolic link dirty business. WSL's implementation of graphics drivers (including CUDA) is very... special.

Is there a link to what you're talking about?

I wonder if my Nvidia GTX 980 Ti 6 GB graphics card is just not supported by this repo (textgen).

freddy@SD:~/text-generation-webui/repositories/GPTQ-for-LLaMa$ python setup_cuda.py install
No CUDA runtime is found, using CUDA_HOME='/home/freddy/miniconda3/envs/textgen'...

> A 960? AFAIK, recent era CUDA builds only support back to the 10 series. > > e.g. even if you compile libcudaall for bitsandbytes it'll still fail on an...
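For context on the comment above, the cutoff is about CUDA compute capability: a GTX 960 and 980 Ti are Maxwell (sm_52), while the 10 series starts at Pascal (sm_6x). A minimal sketch of that check, assuming a hand-picked lookup table (values from NVIDIA's public CUDA GPU list) and an illustrative `(6, 0)` minimum; `is_supported` is a hypothetical helper, not the actual check any of these builds perform:

```python
# Map a few GeForce cards to their CUDA compute capability (major, minor),
# per NVIDIA's published CUDA GPU list. This table is illustrative only.
KNOWN_CAPABILITIES = {
    "GTX 960": (5, 2),     # Maxwell
    "GTX 980 Ti": (5, 2),  # Maxwell
    "GTX 1080": (6, 1),    # Pascal (10 series)
    "RTX 3090": (8, 6),    # Ampere
}

def is_supported(card: str, minimum: tuple = (6, 0)) -> bool:
    """Hypothetical check: does the card meet a build's minimum architecture?

    Tuple comparison handles (major, minor) ordering correctly.
    """
    return KNOWN_CAPABILITIES[card] >= minimum

print(is_supported("GTX 980 Ti"))  # False -> Maxwell falls below a sm_60 cutoff
print(is_supported("GTX 1080"))    # True  -> 10 series clears it
```

This matches the comment's claim: even a working local compile can't help if the prebuilt kernels target architectures newer than the card's.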

Shouldn't that be changed if it's causing this error?

Edit: new error after I ran your command:

(textgen) PS D:\text-generation-webui\repositories\gptq-for-llama> python setup_cuda.py install
Traceback (most recent call last):
  File "D:\text-generation-webui\repositories\gptq-for-llama\setup_cuda.py",...