
Additional features for the BilateralFilter layer

Open faebstn96 opened this issue 3 years ago • 10 comments

Is your feature request related to a problem? Please describe. To use the BilateralFilter layer (by @charliebudd) more flexibly, I would really appreciate two extensions to the current filter implementation.

Describe the solution you'd like

  1. It would be nice to implement backpropagation of a loss into the filter parameters (the spatial and range sigmas) so that they can be learned directly.
  2. In addition, independent spatial kernel widths (sigmas) for the different spatial dimensions would be very useful, so that the Gaussian filter kernel can have a different size in each dimension.
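To illustrate point 2, an anisotropic Gaussian kernel with one sigma per spatial dimension could be built roughly like this (a pure-PyTorch sketch; the function name and the default radius are made up for illustration):

```python
import torch

def gaussian_kernel_2d(sigma_y: float, sigma_x: float, radius: int = 3) -> torch.Tensor:
    """Separable 2D Gaussian kernel with an independent sigma per dimension."""
    ys = torch.arange(-radius, radius + 1, dtype=torch.float32)
    xs = torch.arange(-radius, radius + 1, dtype=torch.float32)
    gy = torch.exp(-ys ** 2 / (2 * sigma_y ** 2))  # 1D profile along y
    gx = torch.exp(-xs ** 2 / (2 * sigma_x ** 2))  # 1D profile along x
    kernel = torch.outer(gy, gx)                   # anisotropic 2D kernel
    return kernel / kernel.sum()                   # normalize to sum to 1

# A kernel that is wider along x than along y:
k = gaussian_kernel_2d(sigma_y=1.0, sigma_x=2.5)
```

With different sigmas the kernel is no longer symmetric under swapping the two axes, which is exactly the flexibility requested here.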

faebstn96 avatar Mar 24 '21 11:03 faebstn96

Just to add to this, the PHLFilter may be useful to you, as it can have varying spatial sigmas. To construct a Gaussian filter, just provide the spatial coordinates of each pixel as the feature tensor. For a bilateral filter, provide both the spatial and color coordinates.
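The feature tensors could be built along these lines (a sketch only; `gaussian_features` and `bilateral_features` are illustrative names, and the commented-out call assumes MONAI's `PHLFilter.apply(input, features)` interface):

```python
import torch

def gaussian_features(img: torch.Tensor, spatial_sigma: float) -> torch.Tensor:
    """Pixel coordinates scaled by sigma -> PHL features for a Gaussian filter.

    img: (B, C, H, W)
    """
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    coords = torch.stack([ys, xs]).float() / spatial_sigma  # (2, H, W)
    return coords.unsqueeze(0).expand(b, -1, -1, -1)        # (B, 2, H, W)

def bilateral_features(img: torch.Tensor, spatial_sigma: float,
                       color_sigma: float) -> torch.Tensor:
    """Append scaled intensities to the coordinates -> bilateral behaviour."""
    return torch.cat([gaussian_features(img, spatial_sigma),
                      img / color_sigma], dim=1)

img = torch.rand(1, 1, 8, 8)
feats = bilateral_features(img, spatial_sigma=2.0, color_sigma=0.1)  # (1, 3, 8, 8)
# out = PHLFilter.apply(img, feats)  # from monai.networks.layers.filtering
```

Because each coordinate axis is scaled independently before being passed as a feature, per-dimension sigmas fall out naturally from this construction.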

charliebudd avatar Mar 24 '21 13:03 charliebudd

Hi @charliebudd, I implemented a fully differentiable bilateral filter layer (with respect to its input as well as to its spatial/range sigmas) based on an analytical derivative of the filter algorithm (both C++/CPU and CUDA implementations, so far only tested on Ubuntu): https://github.com/faebstn96/trainable-bilateral-filter-source. You can find a detailed mathematical description together with some experiments in our arXiv preprint: https://arxiv.org/pdf/2201.10345.pdf

As the implementation is based on the forward pass of the current MONAI bilateral filter implementation, integrating it into MONAI should be fairly straightforward. I also included a gradcheck.py script, which confirms that the analytical gradient matches the numerical derivative of the forward pass using PyTorch's torch.autograd.gradcheck function; a unit test could be created from that script. Two simple examples are also included in the repository.

Of course, the gradient updates of the kernel widths can be deactivated by simply removing the filter parameters from the PyTorch graph, in which case the filter acts as a standard differentiable bilateral filter with fixed hyperparameters. I would be happy to contribute my implementation to the MONAI library.

One more comment regarding the current bilateral filter layer in the MONAI repo: please note that the current backward pass of the bilateral filter (the gradient with respect to the filter input) does not compute the correct derivative of the forward pass. In the current implementation, the backward pass reuses the same function as the forward pass (https://github.com/Project-MONAI/MONAI/blob/dev/monai/networks/layers/filtering.py). This would hold for a standard convolutional layer, but it is not correct for the bilateral filter, because the input image is additionally used as input to the range kernel, which complicates the derivative (see our arXiv paper). A correct analytical derivative is implemented in my layer and validated by the gradcheck test.
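The difference is easy to demonstrate with a naive 1D bilateral filter written in plain PyTorch: detaching the weights (i.e. pretending they do not depend on the input, as a convolution-style backward pass implicitly does) yields a different input gradient than the full derivative. This is only an illustrative sketch, not the MONAI implementation:

```python
import torch

def bilateral_1d(x, sigma_s=1.0, sigma_r=0.5, freeze_weights=False):
    """Naive 1D bilateral filter; the range kernel depends on the input x."""
    d = torch.arange(x.numel(), dtype=x.dtype)
    spatial = torch.exp(-(d[:, None] - d[None, :]) ** 2 / (2 * sigma_s ** 2))
    rng = torch.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * sigma_r ** 2))
    if freeze_weights:
        rng = rng.detach()  # pretend the weights are constants w.r.t. x
    w = spatial * rng
    return (w * x[None, :]).sum(dim=1) / w.sum(dim=1)

x = torch.tensor([0.1, 0.9, 0.2, 0.8], requires_grad=True)

bilateral_1d(x)[0].backward()
full_grad = x.grad.clone()   # correct gradient, includes range-kernel terms
x.grad = None

bilateral_1d(x, freeze_weights=True)[0].backward()
frozen_grad = x.grad.clone()  # "convolution-style" gradient, weights frozen
```

The two gradients differ precisely because of the extra terms arising from the range kernel's dependence on the input.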

Please let me know what you think and if the layer is worth integrating into MONAI. I am happy to answer any questions or help if you have problems running the code from my repo.

faebstn96 avatar Mar 16 '22 10:03 faebstn96

This looks great! And good spot with the current MONAI filter; it should indeed be saving the input image to guide the bilateral filter during the backward step. I think it makes sense to put together a PR switching to your analytical derivative.

charliebudd avatar Mar 16 '22 13:03 charliebudd

Great! Let me know if/how I can help you with the PR.

Maybe one other comment here: at the moment the bilateral filter layer does not support channel dimensions > 1 and currently fails an assert for such inputs. While implementing this, one could also allow defining the number of output channels, as in conventional convolutional layers; however, that might be an enhancement for later, and I have not yet found the time to implement it properly. The batch dimension is already handled correctly.
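Until multi-channel support lands, one interim workaround is to apply the single-channel filter to each channel separately and concatenate the results (a sketch only; the `lambda` below is a stand-in for the actual filter call):

```python
import torch

def filter_per_channel(img: torch.Tensor, filt) -> torch.Tensor:
    """Apply a single-channel filter to each channel of a (B, C, H, W) tensor.

    `filt` is any callable accepting and returning (B, 1, H, W) tensors,
    e.g. a closure around the bilateral filter with fixed sigmas.
    """
    return torch.cat([filt(img[:, c:c + 1]) for c in range(img.shape[1])], dim=1)

img = torch.rand(2, 3, 8, 8)
out = filter_per_channel(img, lambda t: t)  # identity stand-in for the filter
```

This treats channels as independent, which matches the current single-channel semantics; learnable cross-channel mixing (as in a convolutional layer) would indeed be a separate enhancement.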

faebstn96 avatar Mar 16 '22 14:03 faebstn96

Yes, I noted that. We would also need to handle inputs with two spatial dimensions, but I don't see any particular problems here. I'm happy to implement this.

charliebudd avatar Mar 16 '22 14:03 charliebudd

Yes, I guess you could handle two spatial dimensions the same way as in the current MONAI bilateral filter implementation. Thank you so much, and let me know when you need help!

faebstn96 avatar Mar 16 '22 14:03 faebstn96

Hi @charliebudd, have you made any progress on the implementation of the bilateral filter layer? Please let me know if there are problems or if you need help!

faebstn96 avatar May 17 '22 10:05 faebstn96

I have not had the capacity to work on this as of yet.

charliebudd avatar May 17 '22 10:05 charliebudd

Sure, no worries - just let me know when you need help!

faebstn96 avatar May 17 '22 10:05 faebstn96

To add here: our arXiv paper has been published open access in Medical Physics (https://aapm.onlinelibrary.wiley.com/doi/10.1002/mp.15718). In principle, the bilateral filter layer should be eligible to be added to Project-MONAI/research-contributions, right? Should we do that, @charliebudd?

faebstn96 avatar Jul 08 '22 12:07 faebstn96

Hi @charliebudd, any updates on this? Best Fabi

faebstn96 avatar Nov 29 '22 09:11 faebstn96

Thanks @faebstn96. Have you compared the backward pass with a naive PyTorch implementation? Would you like to join one of our weekly dev meetings to discuss this? Let me send you an email.

wyli avatar Nov 29 '22 10:11 wyli

Thanks for the invitation, I would be happy to join! For development purposes I implemented a naive forward pass in PyTorch and made the filter parameters trainable via autograd. As expected, the performance was extremely poor in terms of speed and memory consumption (although the implementation was not really optimized). I did not directly compare the PyTorch implementation against the custom layer with the analytical gradient, though. To verify that the gradient is correct, my GitHub repository contains a gradcheck.py script that compares the implemented analytical gradient with a numerical one using the torch.autograd.gradcheck function.
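Such a naive trainable filter, and the gradcheck it enables, might look roughly like this in plain PyTorch (an illustrative sketch, not the code from the repository; double precision is used because torch.autograd.gradcheck requires it for reliable finite differences):

```python
import torch

def bilateral_1d(x, sigma_s, sigma_r):
    """Naive differentiable 1D bilateral filter.

    Autograd handles the derivatives w.r.t. x and both sigmas, at the cost of
    materializing the full pairwise weight matrix (poor speed and memory).
    """
    d = torch.arange(x.numel(), dtype=x.dtype)
    spatial = torch.exp(-(d[:, None] - d[None, :]) ** 2 / (2 * sigma_s ** 2))
    rng = torch.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * sigma_r ** 2))
    w = spatial * rng
    return (w * x[None, :]).sum(dim=1) / w.sum(dim=1)

x = torch.tensor([0.1, 0.9, 0.2, 0.8], dtype=torch.float64, requires_grad=True)
sigma_s = torch.tensor(1.0, dtype=torch.float64, requires_grad=True)
sigma_r = torch.tensor(0.5, dtype=torch.float64, requires_grad=True)

# Compare autograd's analytical gradients against numerical finite differences
# for the input and both sigmas at once:
ok = torch.autograd.gradcheck(bilateral_1d, (x, sigma_s, sigma_r))
```

An analytical backward pass, as in the custom layer, avoids storing the intermediate pairwise tensors that make this autograd version so expensive.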

I will respond via email as well, thanks!

faebstn96 avatar Nov 29 '22 11:11 faebstn96