
Also I receive this warning

Open lelapin123 opened this issue 1 year ago • 16 comments

I receive this warning message in a new conda environment:

C:\Users(myname)\anaconda3\envs\privategpt\lib\site-packages\transformers\generation\utils.py:1255: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation)
  warnings.warn(

The conda environment is called privategpt, but it is in fact fully dedicated to localGPT.
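For anyone hitting this: the warning is `transformers` telling you that generation parameters were set by mutating the model's config instead of via a dedicated generation configuration. A minimal sketch of the migration the warning asks for (values are illustrative, not localGPT's actual settings):

```python
# Sketch, assuming the Hugging Face transformers library is installed.
from transformers import GenerationConfig

# Deprecated pattern (triggers the warning): mutating model.config, e.g.
#   model.config.max_new_tokens = 64
# Recommended: build an explicit GenerationConfig and pass it to
#   model.generate(..., generation_config=gen_config)
gen_config = GenerationConfig(
    max_new_tokens=64,
    temperature=0.7,
    do_sample=True,
)

# It can also be saved alongside the model as generation_config.json
# and reloaded later:
gen_config.save_pretrained("my_model_dir")
reloaded = GenerationConfig.from_pretrained("my_model_dir")
```

The warning itself is harmless to inference; it only flags the deprecated configuration style.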

lelapin123 avatar May 28 '23 13:05 lelapin123

I will have a look at it. Need to update the configuration.

PromtEngineer avatar May 29 '23 23:05 PromtEngineer

same issue here

vaylonn avatar May 30 '23 12:05 vaylonn

same warning here on m1pro macbook

ml2s avatar May 30 '23 21:05 ml2s

Same here.

I am on Win10, 3060 12GB

Eleksar387 avatar May 31 '23 20:05 Eleksar387

same warning on m1pro macbook : UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation) warnings.warn(

One-one-one-learner avatar Jun 03 '23 11:06 One-one-one-learner

I also wish to add that after the warning, nothing happens. I launch with the CPU option, so it may just be taking more time, but so much time that it seems it will never give me an answer (I already tried leaving it "thinking" for quite a long time, and nothing happened).

pmorange avatar Jun 05 '23 07:06 pmorange

Same on a 16GB M2 Pro

/Users/dedelner/Development/localGPT/venv/lib/python3.11/site-packages/transformers/generation/utils.py:1255: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation)
  warnings.warn(

DeDelner avatar Jun 05 '23 10:06 DeDelner

Me too. I'm using an apple m1 pro

maotoledo avatar Jun 05 '23 14:06 maotoledo

Same here, Windows 11 Cuda 11.8

conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia

barbarosalp avatar Jun 06 '23 20:06 barbarosalp

same on ubuntu 22.04

duramaxlb7 avatar Jun 08 '23 04:06 duramaxlb7

same here, mac pro intel

yihchu avatar Jun 08 '23 06:06 yihchu

same here

sesam123 avatar Jun 08 '23 21:06 sesam123

same on Mac book pro M2

bidachon avatar Jun 09 '23 04:06 bidachon

So the question remains: is it working on any platform? :-D

pmorange avatar Jun 09 '23 07:06 pmorange

@pmorange The answer is yes, I have been able to get it to run after I received the above error message. Locally run on Ubuntu 22.04, GTX 1070.

I was experiencing a fail/exit each time the model loaded, with a distinct pattern: GPU memory use would reach 100%, then swap usage (originally 25 GB) would climb to 100% (i.e. 25 GB), and then it would fail.

I moved my partitions around and increased the swap volume to 125 GB, and have successfully loaded the model every time since. Swap usage settles around 33 GB, so total memory use is 16.7 GB (GTX 1070) + ~33 GB of swap.
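For anyone who would rather not repartition, the same effect can be had with a swap file. A sketch for Ubuntu 22.04 (the 125G size mirrors what worked for me; adjust to your disk):

```shell
# Create and enable a 125 GB swap file (requires root).
sudo fallocate -l 125G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Make it persist across reboots:
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

# Verify:
swapon --show
```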

Running directly in the CLI, I was able to generate output using the pre-loaded constitution doc three times.

I'm not sure where my logs are being stored, so I couldn't figure out how long it was taking to generate the output. To work around this, I loaded the model in VS Code this morning to better track timestamps (which I also can't seem to figure out).

This is the first time I have received the warning after increasing the swap size. [edit: I'm going to wait to see if output can be generated in spite of the warning, then will re-run in CLI to test if I get the warning message].

duramaxlb7 avatar Jun 09 '23 14:06 duramaxlb7

I encountered the same issue following a query, and the response hung. After a few minutes the model responded.

StewartAragon avatar Jun 10 '23 02:06 StewartAragon