Why not use the built-in "torch.nn.functional.conv2d" for SKA?
Many thanks for your excellent work and for sharing it with the community.
I noticed that the SKA module includes custom forward and backward functions. Please correct me if I'm mistaken, but SKA seems to be essentially a convolution with dynamic kernel weights. I'm curious: what's the reason for implementing it manually instead of using PyTorch's built-in Conv2d? Using the built-in function could simplify deployment.
The same question as above. The SKA module will have problems when converted to the TensorRT format, which is not compatible enough and not conducive to deployment.
I think I found the reason: SKA is not a standard convolution. It takes kernel weights of shape (B, C, K**2, H, W), i.e. a different kernel at every spatial position, whereas a normal conv kernel has shape (out, in, K, K) and is shared across all positions.
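To make the distinction concrete, here is a minimal sketch (not the repo's actual implementation, which uses a custom kernel for speed) of how such a per-position dynamic depthwise convolution can be expressed with plain PyTorch ops via `F.unfold`. The function name `ska_unfold` and the exact weight layout `(B, C, K*K, H, W)` are assumptions based on the shapes mentioned above; this is only meant to show why `F.conv2d` alone cannot express it:

```python
import torch
import torch.nn.functional as F

def ska_unfold(x: torch.Tensor, w: torch.Tensor, k: int) -> torch.Tensor:
    """Per-position dynamic depthwise convolution (illustrative sketch).

    x: (B, C, H, W)        input features
    w: (B, C, K*K, H, W)   a separate K x K kernel for every channel
                           and every spatial position

    F.conv2d cannot express this directly: its weight tensor
    (out, in/groups, K, K) is shared across all spatial positions.
    """
    B, C, H, W = x.shape
    # Extract K x K neighborhoods around every position ("same" padding):
    # result is (B, C*K*K, H*W), channel-major, patches in row-major order.
    patches = F.unfold(x, kernel_size=k, padding=k // 2)
    patches = patches.view(B, C, k * k, H, W)
    # Weight each neighborhood element by its dynamic kernel and sum.
    return (patches * w).sum(dim=2)
```

As a sanity check, if you broadcast one fixed kernel to every position, this reduces to an ordinary depthwise `F.conv2d`. The unfold-based form is ONNX/TensorRT-friendly but materializes the `(B, C, K*K, H, W)` patch tensor, which is why a fused custom kernel can be much faster and more memory-efficient.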
Hey guys, have you successfully completed the deployment testing? I'm encountering a compilation failure with the MMVC component, which is really frustrating. How did you handle it?