STARK
RuntimeError on main basic example
You will get this error if you run the basic example from the documentation. Traceback:
```
Traceback (most recent call last):
  File "/home/seeker/tmp/./sovetnik.py", line 11, in
The original call is:
  File ".data/ts_code/code/torch/torch/nn/functional.py", line 489
    _114 = [bsz, num_heads, src_len0, head_dim]
    v8 = torch.view(v6, _114)
    attn_output5 = torch.scaled_dot_product_attention(q3, k8, v8, attn_mask16, dropout_p0, is_causal)
                   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
    _115 = torch.permute(attn_output5, [2, 0, 1, 3])
    _116 = torch.contiguous(_115)
'multi_head_attention_forward' is being compiled since it was called from 'MultiheadAttention.forward'
Serialized File ".data/ts_code/code/torch/torch/nn/modules/activation.py", line 44
    _6 = "The fast path was not hit because {}"
    _7 = "MultiheadAttention does not support NestedTensor outside of its fast path. "
    _8 = torch.torch.nn.functional.multi_head_attention_forward
         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
    _9 = uninitialized(Tuple[Tensor, Tensor])
    _10 = uninitialized(Optional[Tensor])
```
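For context, the traceback points at a TorchScript-compiled `MultiheadAttention` whose `forward` ends up in `torch.scaled_dot_product_attention`. Below is a minimal sketch of that same code path, not the STARK example itself; the module sizes and input shapes are assumptions, since the inputs used in `sovetnik.py` are not shown. If this runs cleanly on your PyTorch version, the failure is likely specific to the serialized model (e.g. it was exported with a different PyTorch release), not to `MultiheadAttention` itself.

```python
import torch
import torch.nn as nn

# Hypothetical sizes for illustration only -- the real model's dimensions are unknown.
mha = nn.MultiheadAttention(embed_dim=64, num_heads=4)

# Script the module, as the serialized model in the traceback was scripted.
scripted = torch.jit.script(mha)

# (seq_len, batch, embed_dim) layout, the default when batch_first=False.
x = torch.randn(10, 2, 64)

# This call goes through multi_head_attention_forward and, on recent PyTorch,
# into torch.scaled_dot_product_attention -- the line flagged "<--- HERE" above.
out, attn_weights = scripted(x, x, x)
print(out.shape)  # expected: torch.Size([10, 2, 64])
```

If this minimal path also raises a RuntimeError for you, the mismatch is between your installed `torch` and the one that serialized the model, and upgrading/downgrading PyTorch is the first thing to try.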