text-generation-webui
v1.14: module 'torch.library' has no attribute 'register_fake'
### Describe the bug

Running the latest build (v1.14) fails with a torch error:

```shell
python server.py --api --listen --n-gpu-layers 32 --threads 8 --numa --tensorcores --trust-remote-code
```

```
...
RuntimeError: Failed to import transformers.models.auto.processing_auto because of the following error
(look up to see its traceback):
module 'torch.library' has no attribute 'register_fake'
```
This issue suggests the error comes from torchvision (a torch/torchvision version mismatch): https://github.com/lllyasviel/IC-Light/issues/77
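If that's the cause, the mismatch can be checked by comparing the installed torch version against what torchvision expects. A minimal sketch, assuming (per the release notes) that `torch.library.register_fake` first appeared in torch 2.4, while earlier releases only had `torch.library.impl_abstract`:

```python
# Quick check for the torch/torchvision mismatch behind this error.
# Assumption: torch.library.register_fake was added in torch 2.4;
# torchvision 0.19+ calls it at import time, so older torch builds fail.

def parse_version(v: str) -> tuple:
    """Parse '2.3.1+cu121' -> (2, 3, 1), ignoring local build suffixes."""
    core = v.split("+")[0]
    return tuple(int(p) for p in core.split(".")[:3])

def has_register_fake(torch_version: str) -> bool:
    """True if this torch release should provide torch.library.register_fake."""
    return parse_version(torch_version) >= (2, 4, 0)

print(has_register_fake("2.4.0+cu121"))  # True
print(has_register_fake("2.2.1"))        # False: too old for torchvision 0.19
```

Running this against `torch.__version__` in the broken env should show a torch older than what the installed torchvision was built for.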
### Is there an existing issue for this?

- [X] I have searched the existing issues
### Reproduction

```shell
pip install -r requirements.txt
python server.py --api --listen --n-gpu-layers 32 --threads 8 --numa --tensorcores --trust-remote-code
```
### Screenshot

No response
### Logs
(textgen) [root@pve-m7330 text-generation-webui]# !903
python server.py --api --listen --n-gpu-layers 32 --threads 8 --numa --tensorcores --trust-remote-code
╭──────────────────────────────── Traceback (most recent call last) ─────────────────────────────────╮
│ /home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/utils/import_utils.py │
│ :1603 in _get_module │
│ │
│ 1602 try: │
│ ❱ 1603 return importlib.import_module("." + module_name, self.__name__) │
│ 1604 except Exception as e: │
│ │
│ /home/user/miniconda3/envs/textgen/lib/python3.10/importlib/__init__.py:126 in import_module │
│ │
│ 125 level += 1 │
│ ❱ 126 return _bootstrap._gcd_import(name[level:], package, level) │
│ 127 │
│ in _gcd_import:1050 │
│ in _find_and_load:1027 │
│ in _find_and_load_unlocked:1006 │
│ │
│ ... 4 frames hidden ... │
│ │
│ /home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/image_processing_util │
│ s.py:21 in <module> │
│ │
│ 20 from .image_processing_base import BatchFeature, ImageProcessingMixin │
│ ❱ 21 from .image_transforms import center_crop, normalize, rescale │
│ 22 from .image_utils import ChannelDimension │
│ │
│ /home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/image_transforms.py:2 │
│ 2 in <module> │
│ │
│ 21 │
│ ❱ 22 from .image_utils import ( │
│ 23 ChannelDimension, │
│ │
│ /home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/image_utils.py:58 in │
│ <module> │
│ │
│ 57 if is_torchvision_available(): │
│ ❱ 58 from torchvision.transforms import InterpolationMode │
│ 59 │
│ │
│ /home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/torchvision/__init__.py:10 in │
│ <module> │
│ │
│ 9 from .extension import _HAS_OPS # usort:skip │
│ ❱ 10 from torchvision import _meta_registrations, datasets, io, models, ops, transforms, util │
│ 11 │
│ │
│ /home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/torchvision/_meta_registrations.py │
│ :163 in <module> │
│ │
│ 162 │
│ ❱ 163 @torch.library.register_fake("torchvision::nms") │
│ 164 def meta_nms(dets, scores, iou_threshold): │
╰────────────────────────────────────────────────────────────────────────────────────────────────────╯
AttributeError: module 'torch.library' has no attribute 'register_fake'
The above exception was the direct cause of the following exception:
╭──────────────────────────────── Traceback (most recent call last) ─────────────────────────────────╮
│ /home/user/text-generation-webui/server.py:40 in <module> │
│ │
│ 39 import modules.extensions as extensions_module │
│ ❱ 40 from modules import ( │
│ 41 chat, │
│ │
│ /home/user/text-generation-webui/modules/training.py:21 in <module> │
│ │
│ 20 from datasets import Dataset, load_dataset │
│ ❱ 21 from peft import ( │
│ 22 LoraConfig, │
│ │
│ /home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/peft/__init__.py:22 in <module> │
│ │
│ 21 │
│ ❱ 22 from .auto import ( │
│ 23 AutoPeftModel, │
│ │
│ /home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/peft/auto.py:32 in <module> │
│ │
│ 31 from .config import PeftConfig │
│ ❱ 32 from .mapping import MODEL_TYPE_TO_PEFT_MODEL_MAPPING │
│ 33 from .peft_model import ( │
│ │
│ /home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/peft/mapping.py:22 in <module> │
│ │
│ 21 │
│ ❱ 22 from peft.tuners.xlora.model import XLoraModel │
│ 23 │
│ │
│ ... 7 frames hidden ... │
│ │
│ /home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/awq/models/base.py:35 in <module> │
│ │
│ 34 from awq.utils.utils import get_best_device, qbits_available │
│ ❱ 35 from transformers import ( │
│ 36 AutoConfig, │
│ in _handle_fromlist:1075 │
│ │
│ /home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/utils/import_utils.py │
│ :1594 in __getattr__ │
│ │
│ 1593 module = self._get_module(self._class_to_module[name]) │
│ ❱ 1594 value = getattr(module, name) │
│ 1595 else: │
│ │
│ /home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/utils/import_utils.py │
│ :1593 in __getattr__ │
│ │
│ 1592 elif name in self._class_to_module.keys(): │
│ ❱ 1593 module = self._get_module(self._class_to_module[name]) │
│ 1594 value = getattr(module, name) │
│ │
│ /home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/utils/import_utils.py │
│ :1605 in _get_module │
│ │
│ 1604 except Exception as e: │
│ ❱ 1605 raise RuntimeError( │
│ 1606 f"Failed to import {self.__name__}.{module_name} because of the followin │
╰────────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: Failed to import transformers.models.auto.processing_auto because of the following error
(look up to see its traceback):
module 'torch.library' has no attribute 'register_fake'
### System Info

```shell
python 3.10
rocky linux 9
p5200 (compute 60)
```

Confirmed that v1.13 works, so this turned out to be a package issue: I installed the requirements from v1.13, switched back to v1.14, installed torch built for CUDA 12.1, and everything ran fine.
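For anyone hitting the same thing, a sketch of that fix: reinstall a matched torch/torchvision pair built for CUDA 12.1. The `cu121` index URL is the standard PyTorch wheel index; pin exact versions to whatever v1.14's requirements file specifies.

```shell
# remove the mismatched pair first
pip uninstall -y torch torchvision
# install torch + torchvision built against CUDA 12.1
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121
```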