localGPT
I receive this warning in a new conda environment:
C:\Users\(myname)\anaconda3\envs\privategpt\lib\site-packages\transformers\generation\utils.py:1255: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation)
warnings.warn(
The conda environment is called privategpt, but it is in fact fully dedicated to localGPT.
I will have a look at it. Need to update the configuration.
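The warning means transformers now wants generation parameters (max tokens, temperature, sampling flags, etc.) set via a `GenerationConfig` object rather than by mutating the pretrained model's config. A minimal sketch of the recommended pattern follows; the parameter values are illustrative placeholders, not localGPT's actual settings:

```python
from transformers import GenerationConfig

# Build a standalone generation config instead of modifying model.config.
# These values are placeholders -- substitute the ones the pipeline needs.
gen_config = GenerationConfig(
    max_new_tokens=512,
    temperature=0.8,
    do_sample=True,
)

# At generation time, pass it explicitly instead of relying on model.config:
#   output_ids = model.generate(**inputs, generation_config=gen_config)
print(gen_config.max_new_tokens)
```

Passing the config explicitly keeps the pretrained model configuration untouched, which is exactly what the deprecation warning asks for.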
same issue here
same warning here on m1pro macbook
Same here.
I am on Win10, 3060 12GB
Same warning on an M1 Pro MacBook: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation)
warnings.warn(
I also want to add that after the warning, nothing happens. I launch with the CPU option, so it may just be slower, but it has been "thinking" for so long that it seems it will never give me an answer.
Same on a 16GB M2 Pro
/Users/dedelner/Development/localGPT/venv/lib/python3.11/site-packages/transformers/generation/utils.py:1255: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation)
warnings.warn(
Me too. I'm using an Apple M1 Pro.
Same here, Windows 11 Cuda 11.8
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
same on ubuntu 22.04
same here, mac pro intel
same here
same on Mac book pro M2
So the question remains: is it working on any platform? :-D
@pmorange The answer is yes, I have been able to get it to run after I received the above error message. Locally run on Ubuntu 22.04, GTX 1070.
I was experiencing a fail/exit each time the model loaded, with a distinct pattern: GPU memory use would reach 100%, then swap usage (originally a 25 GB swap volume) would climb to 100% (i.e. 25 GB), and then it would fail.
I moved my partitions around, increased the swap volume to 125 GB, and have successfully loaded the model every time since. Swap usage runs around 33 GB; total memory usage is 16.7 GB (GTX 1070) + ~33 GB of swap.
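If you don't want to repartition, a swap file achieves the same thing on Ubuntu. A sketch, assuming Ubuntu 22.04 and enough free disk; `/swapfile` is a hypothetical path and 125G mirrors the size reported above, so adjust both to your system:

```shell
sudo swapoff -a                 # disable the current swap
sudo fallocate -l 125G /swapfile  # allocate the swap file
sudo chmod 600 /swapfile        # restrict permissions, required by swapon
sudo mkswap /swapfile           # format it as swap space
sudo swapon /swapfile           # enable it
swapon --show                   # verify the larger swap is active
```

To make the change survive a reboot you would also add a `/swapfile none swap sw 0 0` line to `/etc/fstab`.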
Running directly in the CLI, I was able to generate output using the pre-loaded constitution doc three times.
I'm not sure where my logs are stored, so I couldn't figure out how long it was taking to generate the output. To work around this, I loaded the model in VS Code this morning to better track timestamps (which I also haven't managed to figure out).
This is the first time I have received the warning after increasing the swap size. [edit: I'm going to wait and see whether output is generated in spite of the warning, then re-run in the CLI to test whether I still get the warning.]
I encountered the same issue following a query, and the response hung. After a few minutes the model responded.