Specifying the output size in transforms.GaussianBlur
📚 The doc issue
The transforms.GaussianBlur constructor takes kernel_size as a parameter. However, the docs don't specify whether the convolution uses padding to keep the output image the same size as the input. I think this should be mentioned. Preserving the original input size is crucial in many cases, such as object detection or segmentation, where the ground truth (GT) refers to specific pixels.
Suggest a potential alternative/fix
No response
Thanks for the report @talcs. GaussianBlur does indeed use padding:
https://github.com/pytorch/vision/blob/806dba678d5b01f6e8a46f7c48fdf8c09369a267/torchvision/transforms/v2/functional/_misc.py#L152
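You can verify this quickly; here's a minimal sketch (the tensor shape, `kernel_size`, and `sigma` values below are arbitrary example choices):

```python
import torch
from torchvision import transforms

img = torch.rand(3, 224, 224)  # C, H, W
blur = transforms.GaussianBlur(kernel_size=9, sigma=2.0)
out = blur(img)

# Padding keeps the spatial dimensions unchanged.
assert out.shape == img.shape
print(out.shape)  # torch.Size([3, 224, 224])
```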
If you're interested, we'd be happy to review a PR from you to clarify that in the docstring.
Hi @NicolasHug, I would be happy to work on this issue. Is it open for beginners? Thanks!
Thanks for your interest @Bhavay-2001 , but this has been taken care of in https://github.com/pytorch/vision/pull/8246 already.
Hi @NicolasHug, is there any other issue that I can contribute to? Thanks