U-Net
I have implemented a generic, customizable U-Net that comes with options for:
- Number of input and output channels
  - `in_channels` is the number of channels in the input image.
  - `out_channels` is the number of channels in the output image.
- Upsampling
  - `bilinear = False`: transposed convolution with a 2x2 kernel applied with stride 2, followed by a ReLU.
  - `bilinear = True`: factor-2 bilinear upsampling followed by a convolution with a 1x1 kernel applied with stride 1.
- Padding
  - `pad = True`: the input size is retained in the output by zero-padding the convolutions and, if necessary, the results of the upsampling operations.
  - `pad = False`: the output is smaller than the input, as in the original implementation. In this case, every 3x3 convolution layer reduces the height and width by 2 pixels each, so the right side of the U-Net has a smaller spatial size than the left side. Before concatenating, the central slice of the left tensor is therefore cropped along the spatial dimensions to match those of the right tensor.
- Normalization following the ReLU that follows each convolution and transposed convolution
  - `normalization = None`: applies no normalization.
  - `normalization = "bn"`: applies batch normalization.
  - `normalization = "ln"`: applies layer normalization. The dimensions are permuted before the layer to ensure normalization is applied over the channel dimension, and permuted back to their original order afterward.
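The upsampling, center-cropping, and layer-normalization behaviour described above can be sketched as follows. This is a minimal illustration under the stated options; the class and function names here are my own and not necessarily the PR's actual API:

```python
import torch
import torch.nn as nn


class UpBlock(nn.Module):
    """One upsampling step: transposed conv (bilinear=False) or
    bilinear upsampling + 1x1 conv (bilinear=True)."""

    def __init__(self, in_ch, out_ch, bilinear=False):
        super().__init__()
        if bilinear:
            # Factor-2 bilinear upsampling, then a 1x1 conv with stride 1.
            self.up = nn.Sequential(
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                nn.Conv2d(in_ch, out_ch, kernel_size=1, stride=1),
            )
        else:
            # 2x2 transposed conv with stride 2, followed by a ReLU.
            self.up = nn.Sequential(
                nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2),
                nn.ReLU(inplace=True),
            )

    def forward(self, x):
        return self.up(x)


def center_crop(left, right):
    """Crop the skip tensor `left` to the spatial size of `right`
    before concatenation (the pad=False case)."""
    dh = left.shape[-2] - right.shape[-2]
    dw = left.shape[-1] - right.shape[-1]
    return left[..., dh // 2 : left.shape[-2] - (dh - dh // 2),
                dw // 2 : left.shape[-1] - (dw - dw // 2)]


class ChannelLayerNorm(nn.Module):
    """LayerNorm over the channel dimension: permute NCHW -> NHWC,
    normalize the last (channel) dimension, permute back."""

    def __init__(self, num_channels):
        super().__init__()
        self.ln = nn.LayerNorm(num_channels)

    def forward(self, x):
        return self.ln(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
```

Both upsampling variants double the spatial size, so they are interchangeable inside the decoder; the bilinear variant trades learnable upsampling weights for a fixed interpolation plus a cheap channel projection.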
I have now also added an option for controlling the depth of the U-Net:
- Depth
  - `depth` is the depth of the U-Net: the number of resolution levels in the encoder and decoder paths.
  - There are `depth - 1` downsampling and upsampling blocks.
  - The number of intermediate channels at level `i` is `64 * 2**i`, i.e. `[64, 128, 256, 512, 1024]` for `depth = 5`.
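Assuming the channel count doubles per level starting from 64, as the example list suggests, the per-level channels for a given depth can be computed like this (the helper name is illustrative, not part of the PR):

```python
def channel_counts(depth, base=64):
    """Per-level channel counts for a U-Net of the given depth,
    doubling from `base` at each level."""
    return [base * 2 ** i for i in range(depth)]


# channel_counts(5) -> [64, 128, 256, 512, 1024]
```

The decoder mirrors this list in reverse, halving the channels at each upsampling step.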