
RuntimeError on main basic example

Open kjMaru opened this issue 9 months ago • 3 comments

You will get this error if you run the basic example from the documentation. Traceback:

```
Traceback (most recent call last):
  File "/home/seeker/tmp/./sovetnik.py", line 11, in <module>
    synthesizer = SileroSpeechSynthesizer(model_url='https://models.silero.ai/models/tts/ru/v4_ru.pt')
  File "/home/seeker/.local/lib/python3.10/site-packages/stark/interfaces/silero.py", line 37, in __init__
    self.model = torch.package.PackageImporter(local_file).load_pickle('tts_models', 'model')
  File "/home/seeker/.local/lib/python3.10/site-packages/torch/package/package_importer.py", line 271, in load_pickle
    result = unpickler.load()
  File "/usr/lib/python3.10/pickle.py", line 1213, in load
    dispatch[key[0]](self)
  File "/usr/lib/python3.10/pickle.py", line 1254, in load_binpersid
    self.append(self.persistent_load(pid))
  File "/home/seeker/.local/lib/python3.10/site-packages/torch/package/package_importer.py", line 249, in persistent_load
    loaded_reduces[reduce_id] = func(self, *args)
  File "/home/seeker/.local/lib/python3.10/site-packages/torch/jit/_script.py", line 372, in unpackage_script_module
    cpp_module = torch._C._import_ir_module_from_package(
RuntimeError:
Unknown builtin op: aten::scaled_dot_product_attention.
Here are some suggestions:
	aten::_scaled_dot_product_attention

The original call is:
  File ".data/ts_code/code/torch/torch/nn/functional.py", line 489
    _114 = [bsz, num_heads, src_len0, head_dim]
    v8 = torch.view(v6, _114)
    attn_output5 = torch.scaled_dot_product_attention(q3, k8, v8, attn_mask16, dropout_p0, is_causal)
                   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
    _115 = torch.permute(attn_output5, [2, 0, 1, 3])
    _116 = torch.contiguous(_115)
'multi_head_attention_forward' is being compiled since it was called from 'MultiheadAttention.forward'
Serialized   File ".data/ts_code/code/torch/torch/nn/modules/activation.py", line 44
    _6 = "The fast path was not hit because {}"
    _7 = "MultiheadAttention does not support NestedTensor outside of its fast path. "
    _8 = torch.torch.nn.functional.multi_head_attention_forward
         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
    _9 = uninitialized(Tuple[Tensor, Tensor])
    _10 = uninitialized(Optional[Tensor])
```
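
For context, the failing part of the basic example reduces to roughly the sketch below, reconstructed from the traceback. The import path `stark.interfaces.silero` is inferred from the installed file layout; only the class name and the `model_url` argument appear verbatim in the traceback.

```python
# Minimal reproduction sketch (inferred from the traceback, not copied verbatim from the docs).
from stark.interfaces.silero import SileroSpeechSynthesizer  # path assumed from site-packages layout

# Constructing the synthesizer loads the TorchScript package with torch.package,
# which is where the RuntimeError about aten::scaled_dot_product_attention is raised.
synthesizer = SileroSpeechSynthesizer(
    model_url='https://models.silero.ai/models/tts/ru/v4_ru.pt'
)
```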

kjMaru · Oct 01 '23 18:10