
OSError when loading pszemraj/flan-t5-large-grammar-synthesis from Hugging Face

Open · akesh1235 opened this issue 1 year ago · 2 comments

```
D:\gramformer>python -m uvicorn nova_grammar_corrector:app --reload
INFO:     Will watch for changes in these directories: ['D:\\gramformer']
INFO:     Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO:     Started reloader process [13980] using StatReload
Downloading spiece.model: 100%|█████████████| 792k/792k [00:00<00:00, 39.6MB/s]
C:\Python37\lib\site-packages\huggingface_hub\file_download.py:133: UserWarning: huggingface_hub cache-system uses symlinks by default to efficiently store duplicated files but your machine does not support them in C:\Users\devblr\.cache\huggingface\hub. Caching files will still work but in a degraded version that might require more space on your disk. This warning can be disabled by setting the HF_HUB_DISABLE_SYMLINKS_WARNING environment variable. For more details, see https://huggingface.co/docs/huggingface_hub/how-to-cache#limitations. To support symlinks on Windows, you either need to activate Developer Mode or to run Python as an administrator. In order to see activate developer mode, see this article: https://docs.microsoft.com/en-us/windows/apps/get-started/enable-your-device-for-development
  warnings.warn(message)
Downloading (…)cial_tokens_map.json: 100%|████████| 2.20k/2.20k [00:00<?, ?B/s]
Downloading (…)okenizer_config.json: 100%|█| 2.56k/2.56k [00:00<00:00, 164kB/s]
Downloading (…)lve/main/config.json: 100%|████████████| 892/892 [00:00<?, ?B/s]
Downloading pytorch_model.bin: 100%|███████| 3.13G/3.13G [00:26<00:00, 118MB/s]
Process SpawnProcess-1:
Traceback (most recent call last):
  File "C:\Python37\lib\site-packages\transformers\modeling_utils.py", line 446, in load_state_dict
    return torch.load(checkpoint_file, map_location="cpu")
  File "C:\Python37\lib\site-packages\torch\serialization.py", line 789, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "C:\Python37\lib\site-packages\torch\serialization.py", line 1131, in _load
    result = unpickler.load()
  File "C:\Python37\lib\site-packages\torch\serialization.py", line 1101, in persistent_load
    load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
  File "C:\Python37\lib\site-packages\torch\serialization.py", line 1079, in load_tensor
    storage = zip_file.get_storage_from_record(name, numel, torch.UntypedStorage).storage().untyped()
RuntimeError: [enforce fail at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 11534336 bytes.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Python37\lib\site-packages\transformers\modeling_utils.py", line 450, in load_state_dict
    if f.read(7) == "version":
  File "C:\Python37\lib\encodings\cp1252.py", line 23, in decode
    return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 1827: character maps to <undefined>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Python37\lib\multiprocessing\process.py", line 297, in _bootstrap
    self.run()
  File "C:\Python37\lib\multiprocessing\process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Python37\lib\site-packages\uvicorn\_subprocess.py", line 76, in subprocess_started
    target(sockets=sockets)
  File "C:\Python37\lib\site-packages\uvicorn\server.py", line 61, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "C:\Python37\lib\asyncio\runners.py", line 43, in run
    return loop.run_until_complete(main)
  File "C:\Python37\lib\asyncio\base_events.py", line 583, in run_until_complete
    return future.result()
  File "C:\Python37\lib\site-packages\uvicorn\server.py", line 68, in serve
    config.load()
  File "C:\Python37\lib\site-packages\uvicorn\config.py", line 473, in load
    self.loaded_app = import_from_string(self.app)
  File "C:\Python37\lib\site-packages\uvicorn\importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
  File "C:\Python37\lib\importlib\__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 728, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "D:\gramformer\nova_grammar_corrector.py", line 273, in <module>
    ngc = nova_grammar_corrector(models=1, use_gpu=False)
  File "D:\gramformer\nova_grammar_corrector.py", line 161, in __init__
    self.correction_model = T5ForConditionalGeneration.from_pretrained(correction_model_tag, use_auth_token=False)
  File "C:\Python37\lib\site-packages\transformers\modeling_utils.py", line 2542, in from_pretrained
    state_dict = load_state_dict(resolved_archive_file)
  File "C:\Python37\lib\site-packages\transformers\modeling_utils.py", line 463, in load_state_dict
    f"Unable to load weights from pytorch checkpoint file for '{checkpoint_file}'"
OSError: Unable to load weights from pytorch checkpoint file for 'C:\Users\devblr/.cache\huggingface\hub\models--pszemraj--flan-t5-large-grammar-synthesis\snapshots\d45c90f835904f6c3fdf320e74fa6e894e960871\pytorch_model.bin' at 'C:\Users\devblr/.cache\huggingface\hub\models--pszemraj--flan-t5-large-grammar-synthesis\snapshots\d45c90f835904f6c3fdf320e74fa6e894e960871\pytorch_model.bin'. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
```

akesh1235 avatar May 31 '23 13:05 akesh1235

You do not have enough CPU RAM to open the model checkpoint.

sgugger avatar May 31 '23 13:05 sgugger
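
For anyone else hitting the same `DefaultCPUAllocator: not enough memory` failure: the checkpoint in the log is 3.13 GB in fp32, and the default loading path first materializes the full state dict with `torch.load` before copying it into the model, so peak CPU RAM can be roughly twice the checkpoint size. A minimal sketch of one way to lower that peak, assuming a recent transformers with `accelerate` installed (this is not from the original report, just an illustration using standard `from_pretrained` options):

```python
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "pszemraj/flan-t5-large-grammar-synthesis"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(
    model_id,
    # Stream weights into an empty-initialized model instead of building a
    # second full copy of the state dict in RAM first (requires accelerate).
    low_cpu_mem_usage=True,
    # Optional: halves the in-memory weight size, but fp16 inference on CPU
    # may be slow or unsupported for some ops, so treat it as an assumption.
    # torch_dtype=torch.float16,
)
model.eval()
```

This does not change the diagnosis above: if the machine simply has too little free RAM for the model itself, the allocation will still fail, and freeing memory (or adding pagefile/swap space) is the real fix.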

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

github-actions[bot] avatar Jun 30 '23 15:06 github-actions[bot]