transformers
OSError when loading pszemraj/flan-t5-large-grammar-synthesis from Hugging Face
```
D:\gramformer>python -m uvicorn nova_grammar_corrector:app --reload
INFO:     Will watch for changes in these directories: ['D:\gramformer']
INFO:     Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO:     Started reloader process [13980] using StatReload
Downloading spiece.model: 100%|██████████| 792k/792k [00:00<00:00, 39.6MB/s]
C:\Python37\lib\site-packages\huggingface_hub\file_download.py:133: UserWarning: `huggingface_hub` cache-system uses symlinks by default to efficiently store duplicated files but your machine does not support them in C:\Users\devblr\.cache\huggingface\hub. Caching files will still work but in a degraded version that might require more space on your disk. This warning can be disabled by setting the HF_HUB_DISABLE_SYMLINKS_WARNING environment variable. For more details, see https://huggingface.co/docs/huggingface_hub/how-to-cache#limitations.
To support symlinks on Windows, you either need to activate Developer Mode or to run Python as an administrator. In order to see activate developer mode, see this article: https://docs.microsoft.com/en-us/windows/apps/get-started/enable-your-device-for-development
  warnings.warn(message)
Downloading (…)cial_tokens_map.json: 100%|██████████| 2.20k/2.20k [00:00<?, ?B/s]
Downloading (…)okenizer_config.json: 100%|██████████| 2.56k/2.56k [00:00<00:00, 164kB/s]
Downloading (…)lve/main/config.json: 100%|██████████| 892/892 [00:00<?, ?B/s]
Downloading pytorch_model.bin: 100%|██████████| 3.13G/3.13G [00:26<00:00, 118MB/s]
```
```
Process SpawnProcess-1:
Traceback (most recent call last):
  File "C:\Python37\lib\site-packages\transformers\modeling_utils.py", line 446, in load_state_dict
    return torch.load(checkpoint_file, map_location="cpu")
  File "C:\Python37\lib\site-packages\torch\serialization.py", line 789, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "C:\Python37\lib\site-packages\torch\serialization.py", line 1131, in _load
    result = unpickler.load()
  File "C:\Python37\lib\site-packages\torch\serialization.py", line 1101, in persistent_load
    load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
  File "C:\Python37\lib\site-packages\torch\serialization.py", line 1079, in load_tensor
    storage = zip_file.get_storage_from_record(name, numel, torch.UntypedStorage).storage().untyped()
RuntimeError: [enforce fail at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 11534336 bytes.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Python37\lib\site-packages\transformers\modeling_utils.py", line 450, in load_state_dict
    if f.read(7) == "version":
  File "C:\Python37\lib\encodings\cp1252.py", line 23, in decode
    return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 1827: character maps to <undefined>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Python37\lib\multiprocessing\process.py", line 297, in _bootstrap
    self.run()
  File "C:\Python37\lib\multiprocessing\process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Python37\lib\site-packages\uvicorn\_subprocess.py", line 76, in subprocess_started
    target(sockets=sockets)
  File "C:\Python37\lib\site-packages\uvicorn\server.py", line 61, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "C:\Python37\lib\asyncio\runners.py", line 43, in run
    return loop.run_until_complete(main)
  File "C:\Python37\lib\asyncio\base_events.py", line 583, in run_until_complete
    return future.result()
  File "C:\Python37\lib\site-packages\uvicorn\server.py", line 68, in serve
    config.load()
  File "C:\Python37\lib\site-packages\uvicorn\config.py", line 473, in load
    self.loaded_app = import_from_string(self.app)
  File "C:\Python37\lib\site-packages\uvicorn\importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
  File "C:\Python37\lib\importlib\__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  ...
OSError: Unable to load weights from pytorch checkpoint file for 'C:\Users\devblr/.cache\huggingface\hub\models--pszemraj--flan-t5-large-grammar-synthesis\snapshots\d45c90f835904f6c3fdf320e74fa6e894e960871\pytorch_model.bin' at 'C:\Users\devblr/.cache\huggingface\hub\models--pszemraj--flan-t5-large-grammar-synthesis\snapshots\d45c90f835904f6c3fdf320e74fa6e894e960871\pytorch_model.bin'. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
```
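For context on the secondary `UnicodeDecodeError` in the middle of the chain: after `torch.load` fails, `transformers` re-opens the checkpoint in text mode to check for a legacy "version" header, and on Windows the default text encoding is cp1252, which has no character assigned to byte 0x81. A minimal reproduction, independent of the model file:

```python
# Minimal reproduction of the secondary error: cp1252 leaves byte 0x81
# unassigned, so decoding raw checkpoint bytes in text mode fails.
try:
    b"\x81".decode("cp1252")
except UnicodeDecodeError as exc:
    print(exc)
```

So the decode error is just noise from the fallback path; the root cause is the `DefaultCPUAllocator: not enough memory` failure above it.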
You do not have enough CPU RAM to open the model checkpoint.
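To see roughly how much RAM the default load path needs: the log shows `pytorch_model.bin` is ~3.13 GB, and `torch.load` materialises the full state dict in memory alongside the instantiated model, so peak usage is roughly twice the file size. A back-of-envelope sketch (the 2x peak is an assumption about the default load path, not an exact figure):

```python
# Back-of-envelope peak RAM for the default load path (assumption: the
# state dict and the instantiated model coexist in memory, ~2x file size).
checkpoint_gb = 3.13                 # size of pytorch_model.bin from the log
peak_gb = 2 * checkpoint_gb
print(f"~{peak_gb:.2f} GB of free RAM needed")  # ~6.26 GB of free RAM needed
```

If that much free RAM is not available, passing `low_cpu_mem_usage=True` to `from_pretrained` (it requires the `accelerate` package) loads weights incrementally and keeps the peak closer to 1x the checkpoint size.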
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.