```python
import torch
from spatial_correlation_sampler import SpatialCorrelationSampler

input1 = torch.randn(2, 32, 48, 64).cuda()
input2 = torch.randn(2, 32, 48, 64).cuda()

# define a correlation module
# (kernel_size=1, patch_size=9, stride=1, padding=0, dilation=1)
correlation_sampler = SpatialCorrelationSampler(1, 9, 1, 0, 1)

output = correlation_sampler(input1, input2)  # shape: (2, 9, 9, 48, 64)

# reshape output to be a 3D cost volume per sample: (2, 9*9, 48, 64)
b, ph, pw, h, w = output.shape
output = output.view(b, ph * pw, h, w)
```
Currently, I have not deployed the model on Android devices. In my experience, the correlation operation can only approach real-time speed when it is accelerated by parallel computing devices, such as a GPU or ...
Of course it can be used. An Android device with a mobile GPU is ideal and is the trend; the deployment just takes some engineering work.
You can load the provided pre-trained weights in PyTorch and convert the model to TFLite or ONNX format for deployment.
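For example, a minimal ONNX-export sketch with `torch.onnx.export` (the model class, checkpoint path, and input shape below are placeholders; the export only succeeds if the correlation and warping ops are exportable or replaced by exportable equivalents):

```python
import torch
from models.FastFlowNet import FastFlowNet  # assumed import path, adjust to the repo layout

# load the provided pre-trained weights (checkpoint path is a placeholder)
model = FastFlowNet().cuda().eval()
model.load_state_dict(torch.load('./checkpoints/fastflownet_ft_mix.pth'))

# dummy input: two RGB frames concatenated along the channel dimension
# (assumed input format; pick the resolution you plan to deploy at)
dummy_input = torch.randn(1, 6, 448, 1024).cuda()

torch.onnx.export(
    model, dummy_input, 'fastflownet.onnx',
    opset_version=11,
    input_names=['image_pair'],
    output_names=['flow'],
)
```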
I have not tried TFLite. In my experience, correlation and warping are not standard operations integrated into existing packages, so you have to create custom layers to deploy them, ...
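As an illustration of what such a custom layer has to reproduce, here is a minimal backward-warping sketch built on PyTorch's `F.grid_sample` (a common formulation, not necessarily the repository's exact code):

```python
import torch
import torch.nn.functional as F

def warp(feat, flow):
    """Backward-warp a feature map feat (B, C, H, W) by an optical flow flow (B, 2, H, W)."""
    b, _, h, w = feat.shape
    # base sampling grid in pixel coordinates
    xs = torch.arange(w, device=feat.device).view(1, 1, 1, w).expand(b, 1, h, w)
    ys = torch.arange(h, device=feat.device).view(1, 1, h, 1).expand(b, 1, h, w)
    grid = torch.cat((xs, ys), dim=1).float()          # (B, 2, H, W)
    # displace the grid by the flow and normalize to [-1, 1] for grid_sample
    coords = grid + flow
    coords_x = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    coords = torch.stack((coords_x, coords_y), dim=3)  # (B, H, W, 2)
    return F.grid_sample(feat, coords, align_corners=True)
```

Calling `warp(feat2, flow)` samples the second feature map at positions displaced by the flow, which is the warping step used between pyramid levels in coarse-to-fine flow networks.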
Oh, I have not considered pruning yet; that may be future work.
You can try the [Pytorch Correlation module](https://github.com/ClementPinard/Pytorch-Correlation-extension), which supports CUDA 10 and newer versions of PyTorch, such as 1.2 and 1.6. Refer to the issue [Installing Correlation package](https://github.com/ltkong218/FastFlowNet/issues/2) for more details.
This looks like an environment conflict and compilation error; you will have to search for solutions on the Internet case by case, since I have not met such errors before.
I have added the file ./models/FastFlowNet_.py that supports higher versions of CUDA and PyTorch. Please see the updated [README.md](https://github.com/ltkong218/FastFlowNet#readme).
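For illustration, such a variant can wrap SpatialCorrelationSampler in a small cost-volume module like the sketch below (based on the example earlier in this thread; the normalization and exact layout may differ from the real FastFlowNet_.py):

```python
import torch.nn as nn
from spatial_correlation_sampler import SpatialCorrelationSampler

class CostVolume(nn.Module):
    """9x9 correlation cost volume built on SpatialCorrelationSampler."""
    def __init__(self, patch_size=9):
        super().__init__()
        self.patch_size = patch_size
        self.corr = SpatialCorrelationSampler(kernel_size=1, patch_size=patch_size,
                                              stride=1, padding=0, dilation=1)

    def forward(self, feat1, feat2):
        b, c, h, w = feat1.shape
        cv = self.corr(feat1, feat2)                     # (B, patch, patch, H, W)
        cv = cv.view(b, self.patch_size ** 2, h, w) / c  # flatten and normalize by channel count (assumed)
        return cv
```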