
Environment problem

Open zirenlegend opened this issue 5 months ago • 5 comments

I'm running inference on Windows and can't figure out where the problem is. All the required packages are already installed (so frustrating).

```
(moss_ttsd) PS D:\AI\Projects\Git\AIAudio\MOSS-TTSD> python inference.py --jsonl examples/examples.jsonl --output_dir outputs --seed 42 --use_normalize
Traceback (most recent call last):
  File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\transformers\utils\import_utils.py", line 2154, in __getattr__
    module = self._get_module(self._class_to_module[name])
  File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\transformers\utils\import_utils.py", line 2184, in _get_module
    raise e
  File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\transformers\utils\import_utils.py", line 2182, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
  File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\importlib\__init__.py", line 90, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "", line 1387, in _gcd_import
  File "", line 1360, in _find_and_load
  File "", line 1331, in _find_and_load_unlocked
  File "", line 935, in _load_unlocked
  File "", line 999, in exec_module
  File "", line 488, in _call_with_frames_removed
  File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\transformers\modeling_utils.py", line 59, in <module>
    from .integrations.flash_attention import flash_attention_forward
  File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\transformers\integrations\flash_attention.py", line 5, in <module>
    from ..modeling_flash_attention_utils import _flash_attention_forward, flash_attn_supports_top_left_mask
  File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\transformers\modeling_flash_attention_utils.py", line 104, in <module>
    from flash_attn.layers.rotary import apply_rotary_emb
  File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\flash_attn\layers\rotary.py", line 8, in <module>
    from flash_attn.ops.triton.rotary import apply_rotary
  File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\flash_attn\ops\triton\rotary.py", line 7, in <module>
    import triton
ModuleNotFoundError: No module named 'triton'
```

The above exception was the direct cause of the following exception:

Traceback (most recent call last): File "D:\AI\Projects\Git\AIAudio\MOSS-TTSD\inference.py", line 8, in from generation_utils import load_model, process_batch File "D:\AI\Projects\Git\AIAudio\MOSS-TTSD\generation_utils.py", line 9, in from modeling_asteroid import AsteroidTTSInstruct File "D:\AI\Projects\Git\AIAudio\MOSS-TTSD\modeling_asteroid.py", line 12, in from transformers import PreTrainedModel, GenerationMixin, Qwen3Config, Qwen3Model File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\transformers\utils\import_utils.py", line 2157, in getattr raise ModuleNotFoundError( ModuleNotFoundError: Could not import module 'PreTrainedModel'. Are this object's requirements defined correctly?

zirenlegend · Jul 06 '25 13:07

Could you share the version numbers of `flash_attn` and `transformers`? This problem looks very strange.

Twilight92z · Jul 07 '25 06:07


Here is my environment. It errored with transformers 4.53.1 at first, and after downgrading to 4.30.0 it still errors.

```
(moss_ttsd) PS D:\AI\Projects\Git\AIAudio\MOSS-TTSD> pip show transformers flash-attn
Name: transformers
Version: 4.30.0
Summary: State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow
Home-page: https://github.com/huggingface/transformers
Author: The Hugging Face team (past and future) with the help of all our contributors (https://github.com/huggingface/transformers/graphs/contributors)
Author-email: [email protected]
License: Apache 2.0 License
Location: E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages
Requires: filelock, huggingface-hub, numpy, packaging, pyyaml, regex, requests, safetensors, tokenizers, tqdm
Required-by:

Name: flash_attn
Version: 2.7.4.post1
Summary: Flash Attention: Fast and Memory-Efficient Exact Attention
Home-page: https://github.com/Dao-AILab/flash-attention
Author: Tri Dao
Author-email: [email protected]
License:
Location: E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages
Requires: einops, torch
Required-by:
```
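For what it's worth, the first traceback shows `flash_attn/ops/triton/rotary.py` doing `import triton`, and the `pip show` output above lists no `triton` in this environment, which matches the error. One hedged workaround, assuming the model runs acceptably without flash attention: remove the package (`pip uninstall flash-attn`) and request the standard SDPA backend when loading. The snippet below is only a sketch against the plain `transformers` API; the checkpoint path is a placeholder, and whether the repo's `load_model()` in `generation_utils.py` forwards `attn_implementation` is an assumption.

```python
# Sketch only: load with PyTorch SDPA attention instead of flash-attn.
# "path/to/MOSS-TTSD-checkpoint" is a placeholder, not the real model id.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "path/to/MOSS-TTSD-checkpoint",
    torch_dtype=torch.bfloat16,
    attn_implementation="sdpa",  # bypasses the flash_attn -> triton import path
)
```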

zirenlegend · Jul 08 '25 07:07


Same problem here. I've tried many transformers versions and none of them work; I get all kinds of errors, all related to transformers.

Jandown · Jul 08 '25 08:07


What error do you get with version 4.53.1?

xiami2019 · Jul 08 '25 08:07


Here is the output with transformers 4.53.1:

```
(moss_ttsd) PS D:\AI\Projects\Git\AIAudio\MOSS-TTSD> pip show transformers flash-attn
Name: transformers
Version: 4.53.1
Summary: State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow
Home-page: https://github.com/huggingface/transformers
Author: The Hugging Face team (past and future) with the help of all our contributors (https://github.com/huggingface/transformers/graphs/contributors)
Author-email: [email protected]
License: Apache 2.0 License
Location: E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages
Requires: filelock, huggingface-hub, numpy, packaging, pyyaml, regex, requests, safetensors, tokenizers, tqdm
Required-by:

Name: flash_attn
Version: 2.7.4.post1
Summary: Flash Attention: Fast and Memory-Efficient Exact Attention
Home-page: https://github.com/Dao-AILab/flash-attention
Author: Tri Dao
Author-email: [email protected]
License:
Location: E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages
Requires: einops, torch
Required-by:

(moss_ttsd) PS D:\AI\Projects\Git\AIAudio\MOSS-TTSD> python inference.py --jsonl examples/examples.jsonl --output_dir outputs --seed 42 --use_normalize
Traceback (most recent call last):
  File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\transformers\utils\import_utils.py", line 2154, in __getattr__
    module = self._get_module(self._class_to_module[name])
  File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\transformers\utils\import_utils.py", line 2184, in _get_module
    raise e
  File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\transformers\utils\import_utils.py", line 2182, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
  File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\importlib\__init__.py", line 90, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "", line 1387, in _gcd_import
  File "", line 1360, in _find_and_load
  File "", line 1331, in _find_and_load_unlocked
  File "", line 935, in _load_unlocked
  File "", line 999, in exec_module
  File "", line 488, in _call_with_frames_removed
  File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\transformers\generation\utils.py", line 49, in <module>
    from ..masking_utils import create_masks_for_generate
  File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\transformers\masking_utils.py", line 28, in <module>
    from torch.nn.attention.flex_attention import BlockMask, create_block_mask
  File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\torch\nn\attention\flex_attention.py", line 15, in <module>
    from torch._dynamo._trace_wrapped_higher_order_op import TransformGetItemToIndex
  File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\torch\_dynamo\__init__.py", line 53, in <module>
    from .polyfills import loader as _  # usort: skip # noqa: F401
  File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\torch\_dynamo\polyfills\loader.py", line 25, in <module>
    POLYFILLED_MODULES: tuple["ModuleType", ...] = tuple(
  File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\torch\_dynamo\polyfills\loader.py", line 26, in <genexpr>
    importlib.import_module(f".{submodule}", package=polyfills.__name__)
  File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\importlib\__init__.py", line 90, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\torch\_dynamo\polyfills\builtins.py", line 30, in <module>
    @substitute_in_graph(builtins.all, can_constant_fold_through=True)
  File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\torch\_dynamo\decorators.py", line 427, in wrapper
    rule_map: dict[Any, type[VariableTracker]] = get_torch_obj_rule_map()
  File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\torch\_dynamo\trace_rules.py", line 2870, in get_torch_obj_rule_map
    obj = load_object(k)
  File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\torch\_dynamo\trace_rules.py", line 2901, in load_object
    val = _load_obj_from_str(x[0])
  File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\torch\_dynamo\trace_rules.py", line 2885, in _load_obj_from_str
    return getattr(importlib.import_module(module), obj_name)
  File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\importlib\__init__.py", line 90, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\torch\_higher_order_ops\map.py", line 6, in <module>
    from torch._functorch.aot_autograd import AOTConfig, create_joint
  File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\torch\_functorch\aot_autograd.py", line 26, in <module>
    from torch._inductor.output_code import OutputCode
  File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\torch\_inductor\output_code.py", line 52, in <module>
    from .runtime.autotune_cache import AutotuneCacheBundler
  File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\torch\_inductor\runtime\autotune_cache.py", line 23, in <module>
    from .triton_compat import Config
  File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\torch\_inductor\runtime\triton_compat.py", line 9, in <module>
    import triton
  File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\triton\__init__.py", line 1, in <module>
    raise RuntimeError("Should never be installed")
RuntimeError: Should never be installed
```

The above exception was the direct cause of the following exception:

Traceback (most recent call last): File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\transformers\utils\import_utils.py", line 2154, in getattr module = self._get_module(self._class_to_module[name]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\transformers\utils\import_utils.py", line 2184, in _get_module raise e File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\transformers\utils\import_utils.py", line 2182, in get_module return importlib.import_module("." + module_name, self.name) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\importlib_init.py", line 90, in import_module return _bootstrap._gcd_import(name[level:], package, level) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "", line 1387, in _gcd_import File "", line 1360, in _find_and_load File "", line 1331, in _find_and_load_unlocked
File "", line 935, in _load_unlocked File "", line 999, in exec_module File "", line 488, in _call_with_frames_removed
File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\transformers\models\auto\tokenization_auto.py", line 38, in from .auto_factory import _LazyAutoMapping File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\transformers\models\auto\auto_factory.py", line 43, in from ...generation import GenerationMixin File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\transformers\utils\import_utils.py", line 2157, in getattr raise ModuleNotFoundError( ModuleNotFoundError: Could not import module 'GenerationMixin'. Are this object's requirements defined correctly?

The above exception was the direct cause of the following exception:

Traceback (most recent call last): File "D:\AI\Projects\Git\AIAudio\MOSS-TTSD\inference.py", line 8, in
from generation_utils import load_model, process_batch File "D:\AI\Projects\Git\AIAudio\MOSS-TTSD\generation_utils.py", line 8, in from transformers import AutoTokenizer File "E:\ProgramData\anaconda3\envs\moss_ttsd\Lib\site-packages\transformers\utils\import_utils.py", line 2157, in getattr raise ModuleNotFoundError( ModuleNotFoundError: Could not import module 'AutoTokenizer'. Are this object's requirements defined correctly? (moss_ttsd) PS D:\AI\Projects\Git\AIAudio\MOSS-TTSD>
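This run fails differently from the first one: `triton` now exists on disk, but its `__init__.py` raises `RuntimeError("Should never be installed")` on line 1, which looks like a placeholder or stub package rather than a real Triton build. A small check (sketch only) to see what is actually installed; if it is a stub, removing it together with `flash-attn` (`pip uninstall triton flash-attn`) is one possible way forward, since recent transformers versions can fall back to other attention implementations when flash-attn is absent.

```python
# Sketch: report where the installed "triton" package lives and whether it
# imports. A stub that raises on import would match the traceback above.
import importlib
import importlib.util

spec = importlib.util.find_spec("triton")
if spec is None:
    print("triton is not installed")
else:
    print("triton found at:", spec.origin)
    try:
        importlib.import_module("triton")
        print("triton imports cleanly")
    except RuntimeError as exc:
        print("triton is installed but refuses to import:", exc)
```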

zirenlegend · Jul 09 '25 07:07