Environment issue
I'm trying to run inference on Windows and can't figure out where the problem is. All the required packages are installed. [mask of pain]
(moss_ttsd) PS D:\AI\Projects\Git\AIAudio\MOSS-TTSD> python inference.py --jsonl examples/examples.jsonl --output_dir outputs --seed 42 --use_normalize
Traceback (most recent call last):
File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\transformers\utils\import_utils.py", line 2154, in __getattr__
module = self._get_module(self._class_to_module[name])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\transformers\utils\import_utils.py", line 2184, in _get_module
raise e
File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\transformers\utils\import_utils.py", line 2182, in _get_module
return importlib.import_module("." + module_name, self.__name__)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\importlib\__init__.py", line 90, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "D:\AI\Projects\Git\AIAudio\MOSS-TTSD\inference.py", line 8, in
Could you share the version numbers of flash_attn and transformers? This problem looks strange.
Here is my environment. I started with transformers 4.53.1 and got the error; downgrading to 4.30.0 still gives the error.
(moss_ttsd) PS D:\AI\Projects\Git\AIAudio\MOSS-TTSD> pip show transformers flash-attn
Name: transformers
Version: 4.30.0
Summary: State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow
Home-page: https://github.com/huggingface/transformers
Author: The Hugging Face team (past and future) with the help of all our contributors (https://github.com/huggingface/transformers/graphs/contributors)
Author-email: [email protected]
License: Apache 2.0 License
Location: E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages
Requires: filelock, huggingface-hub, numpy, packaging, pyyaml, regex, requests, safetensors, tokenizers, tqdm
Required-by:
Name: flash_attn
Version: 2.7.4.post1
Summary: Flash Attention: Fast and Memory-Efficient Exact Attention
Home-page: https://github.com/Dao-AILab/flash-attention
Author: Tri Dao
Author-email: [email protected]
License:
Location: E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages
Requires: einops, torch
Required-by:
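Since transformers imports its submodules lazily, the module named at the top of the traceback is not always the one that actually fails. A minimal stdlib-only probe (my own sketch, not from the thread) that imports each suspect directly can pinpoint the real culprit:

```python
import importlib

def probe(module_name):
    """Return (True, version) if the module imports cleanly, else (False, error)."""
    try:
        mod = importlib.import_module(module_name)
        return True, getattr(mod, "__version__", "unknown")
    except Exception as exc:
        # The real import error surfaces here, outside transformers' lazy loader.
        return False, f"{type(exc).__name__}: {exc}"

if __name__ == "__main__":
    # Probe the import chain bottom-up: torch first, then flash_attn, then transformers.
    for name in ("torch", "flash_attn", "transformers"):
        ok, info = probe(name)
        print(f"{name}: {'OK ' + info if ok else 'FAILED ' + info}")
```

If `flash_attn` alone fails here while `torch` succeeds, the transformers version is likely not the problem.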
Same problem here. I've tried many transformers versions and none of them work; all kinds of errors, all coming from transformers.
What error do you get on version 4.53.1?
(moss_ttsd) PS D:\AI\Projects\Git\AIAudio\MOSS-TTSD> pip show transformers flash-attn
Name: transformers
Version: 4.53.1
Summary: State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow
Home-page: https://github.com/huggingface/transformers
Author: The Hugging Face team (past and future) with the help of all our contributors (https://github.com/huggingface/transformers/graphs/contributors)
Author-email: [email protected]
License: Apache 2.0 License
Location: E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages
Requires: filelock, huggingface-hub, numpy, packaging, pyyaml, regex, requests, safetensors, tokenizers, tqdm
Required-by:
Name: flash_attn
Version: 2.7.4.post1
Summary: Flash Attention: Fast and Memory-Efficient Exact Attention
Home-page: https://github.com/Dao-AILab/flash-attention
Author: Tri Dao
Author-email: [email protected]
License:
Location: E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages
Requires: einops, torch
Required-by:
(moss_ttsd) PS D:\AI\Projects\Git\AIAudio\MOSS-TTSD> python inference.py --jsonl examples/examples.jsonl --output_dir outputs --seed 42 --use_normalize
Traceback (most recent call last):
File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\transformers\utils\import_utils.py", line 2154, in __getattr__
module = self._get_module(self._class_to_module[name])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\transformers\utils\import_utils.py", line 2184, in _get_module
raise e
File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\transformers\utils\import_utils.py", line 2182, in _get_module
return importlib.import_module("." + module_name, self.__name__)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\importlib\__init__.py", line 90, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "
File "
File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\transformers\generation\utils.py", line 49, in
File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\torch\nn\attention\flex_attention.py", line 15, in
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\transformers\utils\import_utils.py", line 2154, in __getattr__
module = self._get_module(self._class_to_module[name])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\transformers\utils\import_utils.py", line 2184, in _get_module
raise e
File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\transformers\utils\import_utils.py", line 2182, in _get_module
return importlib.import_module("." + module_name, self.__name__)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\importlib\__init__.py", line 90, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "
File "
File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\transformers\models\auto\tokenization_auto.py", line 38, in
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "D:\AI\Projects\Git\AIAudio\MOSS-TTSD\inference.py", line 8, in
from generation_utils import load_model, process_batch
File "D:\AI\Projects\Git\AIAudio\MOSS-TTSD\generation_utils.py", line 8, in
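Not a confirmed fix, but since the import chain dies in flash-attention-related modules (`flex_attention`, `flash_attn`), one thing worth trying is asking transformers for PyTorch's built-in SDPA attention backend so the flash_attn import path is never taken. The helper below only builds the kwargs; the actual `from_pretrained` call is left commented out because the model ID is a placeholder, not from the thread.

```python
def build_load_kwargs(use_flash_attn: bool) -> dict:
    """Kwargs for AutoModelForCausalLM.from_pretrained.

    "sdpa" selects PyTorch's built-in scaled-dot-product attention and avoids
    importing flash_attn entirely; "flash_attention_2" requires a working
    flash_attn install (often problematic on Windows).
    """
    impl = "flash_attention_2" if use_flash_attn else "sdpa"
    return {"attn_implementation": impl}

# Usage sketch (model ID is a placeholder):
# from transformers import AutoModelForCausalLM
# model = AutoModelForCausalLM.from_pretrained("your/model-id", **build_load_kwargs(False))
```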