Alpaca-LoRA-Serve

Can't run starchat: fails with `AttributeError: module 'global_vars' has no attribute 'gen_config'`

Open nathanielastudillo opened this issue 1 year ago • 13 comments

Trying to run starchat and getting an error. The model downloaded but when I click "Confirm", I just get an error.

Update: Also getting this error when trying to download models. Might be an env issue, going to nuke my conda env and try again.

nathanielastudillo avatar May 31 '23 17:05 nathanielastudillo

Wiped my conda env, tried to install another model (alpacoom-7b), getting the same error:

AttributeError: module 'global_vars' has no attribute 'gen_config'

nathanielastudillo avatar May 31 '23 20:05 nathanielastudillo
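For context on this error: `app.py` reads `global_vars.gen_config` in `move_to_third_view`, and that attribute only appears to be set after a model has loaded successfully, so a failed download or load surfaces later as this AttributeError. A minimal, hypothetical guard (not the repo's actual code) that makes the root cause more visible:

    # Hypothetical diagnostic sketch -- assumes global_vars.gen_config is only
    # assigned after a successful model load, as the tracebacks in this thread suggest.
    import global_vars  # the repo's own module, per the error message

    gen_config = getattr(global_vars, "gen_config", None)
    if gen_config is None:
        # The real fix is whatever broke the earlier model load; this just
        # turns the late AttributeError into a clearer message.
        raise RuntimeError(
            "global_vars.gen_config is not set - the model probably failed to "
            "load. Check the log for the original loading error."
        )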

There might be some other error before this. Maybe the model could not be loaded because of memory limits. Could you check the error log?

kdubba avatar May 31 '23 20:05 kdubba

Here is part of the error detail before that error. I have made a few changes to the file to run it in Google Colab, but the error is the same as reported earlier.

File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2969, in _load_pretrained_model raise ValueError( ValueError: The current device_map had weights offloaded to the disk. Please provide an offload_folder for them. Alternatively, make sure you have safetensors installed if the model you are using offers the weights in this format. Traceback (most recent call last): File "/usr/local/lib/python3.10/site-packages/gradio/routes.py", line 427, in run_predict output = await app.get_blocks().process_api( File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1323, in process_api result = await self.call_function( File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1051, in call_function prediction = await anyio.to_thread.run_sync( File "/usr/local/lib/python3.10/site-packages/anyio/to_thread.py", line 33, in run_sync return await get_asynclib().run_sync_in_worker_thread( File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread return await future File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run result = context.run(func, *args) File "/content/LLM-As-Chatbot/app.py", line 218, in move_to_third_view gen_config = global_vars.gen_config AttributeError: module 'global_vars' has no attribute 'gen_config'

sandeepraizada avatar Jun 02 '23 12:06 sandeepraizada
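The `ValueError` in that log points at a concrete fix: when `device_map="auto"` spills weights to disk, `from_pretrained` needs an `offload_folder`. A minimal sketch of that call with the transformers API (the checkpoint name and dtype below are placeholders, not the exact values LLM-As-Chatbot uses):

    # Sketch of passing offload_folder, as requested by the ValueError above.
    import torch
    from transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained(
        "HuggingFaceH4/starchat-alpha",   # placeholder checkpoint
        device_map="auto",                # lets accelerate shard/offload weights
        torch_dtype=torch.float16,
        offload_folder="./offload",       # directory for weights spilled to disk
    )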

what kind of changes have you made?

deep-diver avatar Jun 02 '23 13:06 deep-diver

also, notice that I make updates very frequently, so please git pull whenever you try

deep-diver avatar Jun 02 '23 13:06 deep-diver

I have used Google Colab, so:

  1. used the conda install there
  2. changed app.py and set shared=True to get a public link; however, I was not using a GPU. Now, with a GPU, here is the second error message:

      File "/usr/local/lib/python3.10/site-packages/peft/peft_model.py", line 167, in from_pretrained
        PeftConfig.from_pretrained(model_id, subfolder=kwargs.get("subfolder", None), **kwargs).peft_type
      File "/usr/local/lib/python3.10/site-packages/peft/utils/config.py", line 110, in from_pretrained
        raise ValueError(f"Can't find '{CONFIG_NAME}' at '{pretrained_model_name_or_path}'")
    ValueError: Can't find 'adapter_config.json' at 'LLMs/Alpaca-LoRA-7B-elina'

    Traceback (most recent call last):
      File "/usr/local/lib/python3.10/site-packages/gradio/routes.py", line 427, in run_predict
        output = await app.get_blocks().process_api(
      File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1323, in process_api
        result = await self.call_function(
      File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1051, in call_function
        prediction = await anyio.to_thread.run_sync(
      File "/usr/local/lib/python3.10/site-packages/anyio/to_thread.py", line 33, in run_sync
        return await get_asynclib().run_sync_in_worker_thread(
      File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
        return await future
      File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
        result = context.run(func, *args)
      File "/content/LLM-As-Chatbot/app.py", line 218, in move_to_third_view
        gen_config = global_vars.gen_config
    AttributeError: module 'global_vars' has no attribute 'gen_config'

sandeepraizada avatar Jun 02 '23 13:06 sandeepraizada
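The second `ValueError` comes from peft: it looks for an `adapter_config.json` at `LLMs/Alpaca-LoRA-7B-elina` and does not find one, which usually means the adapter path/repo is wrong or the download never completed. A rough sketch of how an adapter is normally resolved with peft (the id below is just the one from the error message, used for illustration):

    # Sketch of the peft adapter-loading path that raises the ValueError above.
    from transformers import AutoModelForCausalLM
    from peft import PeftConfig, PeftModel

    adapter_id = "LLMs/Alpaca-LoRA-7B-elina"  # must contain adapter_config.json
    peft_config = PeftConfig.from_pretrained(adapter_id)

    base = AutoModelForCausalLM.from_pretrained(peft_config.base_model_name_or_path)
    model = PeftModel.from_pretrained(base, adapter_id)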

I would not recommend using Colab since the connection is not stable. This error is due to the unstable connection, which eventually gets terminated after a short while (I don't know why).

deep-diver avatar Jun 02 '23 13:06 deep-diver

It works with t5-vicuna though.

sandeepraizada avatar Jun 02 '23 13:06 sandeepraizada

yeah, since t5-vicuna is the smallest model, it doesn't take too much time to load up

deep-diver avatar Jun 02 '23 13:06 deep-diver

Thought as much, so I tried Vicuna since it's a smaller model. Thanks for your response!

sandeepraizada avatar Jun 02 '23 13:06 sandeepraizada

T5-Vicuna is a 3B model, hence :)

deep-diver avatar Jun 02 '23 13:06 deep-diver

I got the same error when running on Gitpod (the 16GB container), trying to run in CPU mode. The only changes:

# for torch
pip install torch==2.0.0+cpu torchvision==0.15.1+cpu -f https://download.pytorch.org/whl/torch_stable.html

# for auto-gptq
BUILD_CUDA_EXT=0 pip install auto-gptq 

using Python v3.11.1.

bitsnaps avatar Aug 22 '23 09:08 bitsnaps
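If it helps anyone reproducing this, a quick sanity check that the CPU-only wheel actually got picked up (just standard torch calls, nothing project-specific):

    import torch

    print(torch.__version__)          # expected to end with "+cpu"
    print(torch.cuda.is_available())  # expected: False on a CPU-only build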

In my case, download_complete was receiving model_name and model_base wrapped in HTML tags; I cleaned them up with the following

    print(f"model_name: {model_name}")
    print(f"model_base: {model_base}")
    model_name = model_name.replace("<h2>","").replace("</h2>","").strip()
    model_base = clean_up(model_base)
    model_ckpt = clean_up(model_ckpt)
    model_gptq = clean_up(model_gptq)
    

The clean_up function is below

import re  # needed for the regex search below

def clean_up(model_base):
    # Keep only the "org/model" portion that follows a colon, if present.
    pattern = r":\s*(\w+/[\w-]+)"
    match = re.search(pattern, model_base)

    result = match.group(1) if match else model_base
    return result
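
For illustration, a couple of hypothetical inputs showing what the regex keeps (the `org/model` part after a colon) and that anything else passes through unchanged:

    print(clean_up("Base: beomi/KoAlpaca-llama-1-7b"))  # -> "beomi/KoAlpaca-llama-1-7b"
    print(clean_up("plain-model-name"))                 # -> "plain-model-name"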

gavi avatar Oct 14 '23 14:10 gavi