Nicolas Hug
Thanks for opening the issue, Philip.

> Curiously, this xref seems to work the other way around. For example, the documentation for read_image correctly identifies that it is used in...
As discussed offline, we should just try to be consistent with the existing transforms here (even though the existing transforms themselves are not completely consistent...).

- We already have the...
Thanks for the report @talcs. GaussianBlur will indeed use padding: https://github.com/pytorch/vision/blob/806dba678d5b01f6e8a46f7c48fdf8c09369a267/torchvision/transforms/v2/functional/_misc.py#L152 If you're interested, we'd be happy to review a PR from you to clarify that in the docstring.
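For context, a minimal sketch (not the library's internal code) of how the border behavior can be observed with the v2 functional API; as far as I can tell the internal padding is reflection, so a constant image comes back unchanged rather than darkened at the edges:

```python
import torch
from torchvision.transforms.v2 import functional as F

# Constant image: if the borders were zero-padded, the blurred output
# would be darker near the edges. With the internal (reflection-style)
# padding, the output stays identical to the input.
img = torch.full((3, 32, 32), 0.5)
out = F.gaussian_blur(img, kernel_size=[5, 5], sigma=[2.0, 2.0])
print(torch.allclose(out, img))  # expected: True
```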
Thanks for your interest @Bhavay-2001, but this has been taken care of in https://github.com/pytorch/vision/pull/8246 already.
No problem at all @GsnMithra, thank you for the PR.
Hi @mantasu, thanks for the feature request. Traditionally, the decoding utilities are kept separate from the transforms, as those tend to have fairly different parametrization. Is [`read_image`](https://pytorch.org/vision/main/generated/torchvision.io.read_image.html#torchvision.io.read_image) what...
> But feel free to close it if it is better to keep those separate!

Thanks for understanding @mantasu - yes, let's keep those separate. Our conversion transforms (e.g. ToImage,...
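In case it helps future readers, here is a minimal sketch of that pattern, with decoding as its own step and the transforms operating on the decoded tensor (the file path is just a placeholder):

```python
import torch
from torchvision.io import read_image
from torchvision.transforms import v2

# Decoding stays separate from the transforms pipeline.
img = read_image("photo.jpg")  # uint8 tensor of shape (C, H, W)

transforms = v2.Compose([
    v2.ToImage(),                           # wrap as a tv_tensors.Image
    v2.ToDtype(torch.float32, scale=True),  # uint8 -> float in [0, 1]
    v2.RandomResizedCrop(size=(224, 224), antialias=True),
])
out = transforms(img)
```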
Thank you for the report @hassonofer. Just checking with @TeodorPoncu before moving forward: Teodor, do you remember discussing this during reviews?
Thank you for your quick reply @TeodorPoncu! Since the `momentum` parameter only affects the training of the model, not inference (right?), we can probably fix the default...
Thanks @TeodorPoncu - yeah, as far as I can tell from the Note in https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm2d.html, the formula is as you described.
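For reference, a quick self-contained check of that running-stats update as described in the BatchNorm2d Note (x_new = (1 - momentum) * x_running + momentum * x_batch); only the running mean is checked here to keep the sketch short:

```python
import torch
import torch.nn as nn

momentum = 0.01
bn = nn.BatchNorm2d(3, momentum=momentum)
bn.train()

x = torch.randn(8, 3, 16, 16)
# Expected running mean after one forward pass, per the formula in the Note.
expected_mean = (1 - momentum) * bn.running_mean + momentum * x.mean(dim=(0, 2, 3))
bn(x)
print(torch.allclose(bn.running_mean, expected_mean, atol=1e-6))  # expected: True
```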