Can FlashFFTConv be used for Conv2d on PyTorch?
Not yet, but it's something we're very interested in looking into soon.
To help us target better - what models do you want to use it for? Are they depthwise or grouped convolutions, and what are the filter sizes?
Just a normal 3x3 convolution. I noticed the paper doesn't say anything about really small filter sizes. I plan to use it in a U-Net diffusion model.
Hello, will a normal conv3x3 and a grouped conv3x3 (group size = 4) be compatible with FlashFFTConv?
The 3x3 wouldn't be a great use case right now - we specialize for really long filters (size on the order of the input image).
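For context on why filter length matters: FFT-based convolution costs O(n log n) regardless of filter length, while a direct convolution costs O(n·m) for an m-tap filter, so the FFT route only pays off when m is large. A minimal NumPy sketch of the general idea (this is not FlashFFTConv's actual kernel, just the textbook FFT convolution it builds on):

```python
import numpy as np

def fft_conv1d(x, k):
    """Linear convolution via FFT: O(n log n) instead of O(n * m)."""
    n = len(x) + len(k) - 1          # length of the full linear convolution
    X = np.fft.rfft(x, n)            # zero-pad both signals to length n
    K = np.fft.rfft(k, n)
    return np.fft.irfft(X * K, n)    # pointwise product in frequency domain

# With a filter as long as the input, the FFT route wins asymptotically;
# with a 3-tap filter, the direct O(n * m) method is already cheap.
x = np.random.randn(4096)
k = np.random.randn(4096)
assert np.allclose(fft_conv1d(x, k), np.convolve(x, k))
```

This is why a 3x3 kernel sees little benefit: with m = 3, the direct method is effectively linear in n already.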
So there is no Conv2d support at all? Does that mean I have to encode the data into tokens, as with attention, if I want to use it for CV?
That's correct, we don't have a Conv2d implementation right now.
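If you do want to feed images to a 1D sequence model in the meantime, one common workaround is ViT-style patch tokenization. A minimal sketch; `image_to_tokens` is a hypothetical helper, not part of FlashFFTConv:

```python
import torch
import torch.nn.functional as F

def image_to_tokens(imgs, patch=16):
    """Hypothetical helper: raster-scan non-overlapping patches into a
    1D token sequence, the way ViT-style models do before attention."""
    # F.unfold yields (B, C * patch * patch, L), L = number of patches
    cols = F.unfold(imgs, kernel_size=patch, stride=patch)
    return cols.transpose(1, 2)      # (B, L, C * patch * patch)

imgs = torch.randn(2, 3, 32, 32)
tokens = image_to_tokens(imgs)       # 4 patches per image, each of dim 768
assert tokens.shape == (2, 4, 768)
```

The resulting (batch, sequence, dim) tensor is the layout a 1D long-convolution or attention block expects.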