
Model loading on local machine

Devangkaruskar opened this issue 1 year ago • 4 comments

Question

I am trying to use the model through the Hugging Face pipeline, but it doesn't load. My code is:

```python
llm = HuggingFacePipeline.from_model_id(
    model_id='mosaicml/mpt-7b-instruct',
    task="text-generation",
    trust_remote_code=True,
)
```

This raises:

```
ValueError: Loading mosaicml/mpt-7b-instruct requires you to execute the configuration file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option `trust_remote_code=True` to remove this error.
```

Another way I tried:

```python
import torch
import transformers

config = transformers.AutoConfig.from_pretrained(
    'mosaicml/mpt-7b-chat',
    trust_remote_code=True,
)
config.attn_config['attn_impl'] = 'triton'

model = transformers.AutoModelForCausalLM.from_pretrained(
    'mosaicml/mpt-7b-chat',
    config=config,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
model.to(device='cuda:0')
```

but with this I get another error:

```
ImportError: This modeling file requires the following packages that were not found in your environment: flash_attn. Run `pip install flash_attn`
```

even though I have already installed triton and einops.

I want to run the model on my local PC. Please help.

Devangkaruskar avatar Jun 01 '23 14:06 Devangkaruskar

Can you show more of the error printout? I'm trying to figure out which file throws this error.

Note: for triton, you should install this version: `triton-pre-mlir@git+https://github.com/vchiley/triton.git@triton_pre_mlir_sm90#subdirectory=python` (pip command below).
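In a pip-based environment, that spec can be installed as follows (the quotes keep the shell from eating the `#subdirectory` fragment):

```bash
pip install "triton-pre-mlir@git+https://github.com/vchiley/triton.git@triton_pre_mlir_sm90#subdirectory=python"
```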

Since you're setting `config.attn_config['attn_impl'] = 'triton'`, I'm not sure what is looking for the flash_attn package. It might be `loss_fn: fused_crossentropy`??? 🤷‍♂️ Not sure.

vchiley avatar Jun 01 '23 23:06 vchiley

```python
llm = HuggingFacePipeline.from_model_id(
    model_id='mosaicml/mpt-7b-instruct',
    task="text-generation",
    trust_remote_code=True,
)
```

Error:

```
runfile('C:/Users/dkaruskar/Devang_new/untitled0.py', wdir='C:/Users/dkaruskar/Devang_new')
Reloaded modules: transformers_modules, transformers_modules.mosaicml,
transformers_modules.mosaicml.mpt-7b-chat,
transformers_modules.mosaicml.mpt-7b-chat.001074605821e6205a796c989c68b3794c8b4572,
transformers_modules.mosaicml.mpt-7b-chat.001074605821e6205a796c989c68b3794c8b4572.configuration_mpt
Downloading: 0%| | 0.00/237 [00:00<?, ?B/s]
Downloading: 0%| | 0.00/2.11M [00:00<?, ?B/s]
Downloading: 0%| | 0.00/99.0 [00:00<?, ?B/s]
Downloading: 0%| | 0.00/1.23k [00:00<?, ?B/s]
Traceback (most recent call last):

  File ~\AppData\Local\anaconda3\lib\site-packages\spyder_kernels\py3compat.py:356 in compat_exec
    exec(code, globals, locals)

  File c:\users\dkaruskar\devang_new\untitled0.py:14
    llm = HuggingFacePipeline.from_model_id(model_id='mosaicml/mpt-7b-instruct',task="text-generation",trust_remote_code=True)

  File ~\AppData\Local\anaconda3\lib\site-packages\langchain\llms\huggingface_pipeline.py:88 in from_model_id
    model = AutoModelForCausalLM.from_pretrained(model_id, **_model_kwargs)

  File ~\AppData\Local\anaconda3\lib\site-packages\transformers\models\auto\auto_factory.py:434 in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(

  File ~\AppData\Local\anaconda3\lib\site-packages\transformers\models\auto\configuration_auto.py:779 in from_pretrained
    raise ValueError(

ValueError: Loading mosaicml/mpt-7b-instruct requires you to execute the configuration file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option `trust_remote_code=True` to remove this error.
```

Devangkaruskar avatar Jun 07 '23 18:06 Devangkaruskar
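A hedged aside on the traceback above: at `huggingface_pipeline.py:88`, LangChain calls `AutoModelForCausalLM.from_pretrained(model_id, **_model_kwargs)`, so only keys placed inside `model_kwargs` are forwarded to transformers; a top-level `trust_remote_code=True` passed to `from_model_id` appears to be silently dropped. A minimal sketch of the likely fix, assuming that behavior holds for the LangChain version installed here:

```python
from langchain.llms import HuggingFacePipeline

# Put trust_remote_code inside model_kwargs so that from_model_id
# forwards it to AutoModelForCausalLM.from_pretrained (the call at
# huggingface_pipeline.py:88 in the traceback above).
llm = HuggingFacePipeline.from_model_id(
    model_id='mosaicml/mpt-7b-instruct',
    task='text-generation',
    model_kwargs={'trust_remote_code': True},
)
```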

```python
import torch
import transformers

name = 'mosaicml/mpt-7b-chat'

config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton'
config.init_device = 'cuda:0'  # For fast initialization directly on GPU!

model = transformers.AutoModelForCausalLM.from_pretrained(
    name,
    config=config,
    torch_dtype=torch.bfloat16,  # Load model weights in bfloat16
    trust_remote_code=True,
)
```

Error:

```
runfile('C:/Users/dkaruskar/Devang_new/untitled0.py', wdir='C:/Users/dkaruskar/Devang_new')
Explicitly passing a revision is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Traceback (most recent call last):

  File ~\AppData\Local\anaconda3\lib\site-packages\spyder_kernels\py3compat.py:356 in compat_exec
    exec(code, globals, locals)

  File c:\users\dkaruskar\devang_new\untitled0.py:28
    model = transformers.AutoModelForCausalLM.from_pretrained(

  File ~\AppData\Local\anaconda3\lib\site-packages\transformers\models\auto\auto_factory.py:455 in from_pretrained
    model_class = get_class_from_dynamic_module(

  File ~\AppData\Local\anaconda3\lib\site-packages\transformers\dynamic_module_utils.py:363 in get_class_from_dynamic_module
    final_module = get_cached_module_file(

  File ~\AppData\Local\anaconda3\lib\site-packages\transformers\dynamic_module_utils.py:274 in get_cached_module_file
    get_cached_module_file(

  File ~\AppData\Local\anaconda3\lib\site-packages\transformers\dynamic_module_utils.py:274 in get_cached_module_file
    get_cached_module_file(

  File ~\AppData\Local\anaconda3\lib\site-packages\transformers\dynamic_module_utils.py:237 in get_cached_module_file
    modules_needed = check_imports(resolved_module_file)

  File ~\AppData\Local\anaconda3\lib\site-packages\transformers\dynamic_module_utils.py:134 in check_imports
    raise ImportError(

ImportError: This modeling file requires the following packages that were not found in your environment: flash_attn. Run `pip install flash_attn`
```

Devangkaruskar avatar Jun 07 '23 18:06 Devangkaruskar
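A hedged aside on this ImportError: it is raised by transformers' `check_imports`, which statically scans the downloaded remote-code files for import statements before the model is ever built, so setting `attn_config['attn_impl'] = 'triton'` cannot prevent it if a file in that repo revision imports flash_attn at module level. Two possible ways around it, both assumptions about this setup rather than verified fixes: install flash-attn as the message suggests (it generally needs a Linux CUDA build toolchain, so it may not install on this Windows machine), or fall back to the plain torch attention path, sketched below with a purely illustrative smoke-test prompt:

```python
import torch
import transformers
from transformers import AutoTokenizer

name = 'mosaicml/mpt-7b-chat'

config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
# 'torch' selects the plain PyTorch attention implementation, so the
# triton / flash_attn kernels are never used at runtime.
config.attn_config['attn_impl'] = 'torch'

model = transformers.AutoModelForCausalLM.from_pretrained(
    name,
    config=config,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
model.to('cuda:0')

# MPT reuses the EleutherAI/gpt-neox-20b tokenizer (per the public
# mpt-7b model cards); the prompt below is illustrative only.
tokenizer = AutoTokenizer.from_pretrained('EleutherAI/gpt-neox-20b')
inputs = tokenizer('Hello, my name is', return_tensors='pt').to('cuda:0')
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```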

@vchiley

Devangkaruskar avatar Jun 09 '23 11:06 Devangkaruskar

Hi, were you able to resolve this?

dakinggg avatar Sep 07 '23 02:09 dakinggg

Closing due to inactivity. Please open a new issue if you are still encountering problems.

dakinggg avatar Sep 09 '23 23:09 dakinggg