
MaxPool2d not outputting proper shape on burn-ndarray and burn-wgpu backends

Open bayedieng opened this issue 2 years ago • 9 comments

Describe the bug

Something seems to be wrong with the maxpool2d implementation in the burn backends. When testing against equivalent torch tensors, I get different output shapes: the burn backends aren't producing the right shapes.

To Reproduce

Steps to reproduce the behavior:

  1. Create a tensor, apply maxpool2d to it in a forward pass, and compare the output shape against PyTorch's

Desktop (please complete the following information):

  • MacBook (M2)
  • macOS Ventura

bayedieng avatar Aug 19 '23 11:08 bayedieng

@bayedieng, if you have a PyTorch example handy, could you please share it? It would speed up our troubleshooting.

antimora avatar Aug 19 '23 14:08 antimora

Sure, I created a torch tensor of ones with the shape [1, 32, 32, 3] and applied maxpool2d with a kernel size of (2,2) and got the correct output shape of [1, 32, 16, 1]. When applying maxpool to same tensor using both burn-ndarray and burn-wgpu I got the incorrect output shape of [1, 32, 31, 2]. Note that I've tried on version 0.8 and the latest git commit with the same result.
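The mismatch follows directly from the output-size formula for pooling without padding, out = floor((in - kernel) / stride) + 1, applied to the last two dimensions. A quick sketch in plain Python (no burn or torch required; the helper name `pooled_dim` is my own) shows how the two stride conventions produce the two shapes reported above:

```python
def pooled_dim(size: int, kernel: int, stride: int) -> int:
    """Output length of a 1-D max pool with no padding (floor mode)."""
    return (size - kernel) // stride + 1

# burn defaults the stride to 1, pooling the last two dims of [1, 32, 32, 3]:
burn_shape = [1, 32, pooled_dim(32, 2, 1), pooled_dim(3, 2, 1)]
print(burn_shape)   # [1, 32, 31, 2]

# PyTorch defaults the stride to the kernel size (2 here):
torch_shape = [1, 32, pooled_dim(32, 2, 2), pooled_dim(3, 2, 2)]
print(torch_shape)  # [1, 32, 16, 1]
```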

bayedieng avatar Aug 19 '23 14:08 bayedieng


The difference is that Burn defaults the strides to 1, whereas PyTorch defaults them to the kernel size. If you correct for this difference, the shapes should align. Here is a test code:

use burn::nn::pool::{MaxPool2d, MaxPool2dConfig};
use burn::tensor::{backend::Backend, Tensor};

// Set the strides explicitly to the kernel size to match PyTorch's default.
let config: MaxPool2dConfig = MaxPool2dConfig::new([2, 2]).with_strides([2, 2]);
let pool: MaxPool2d = config.init();

// B is any burn backend type parameter, e.g. the ndarray or wgpu backend.
let x1: Tensor<B, 4> = Tensor::ones([1, 32, 32, 3]);
let x2 = pool.forward(x1);

println!("x2 shape = {:?}", x2.shape());
println!("Config {:?}", config);

Output:

x2 shape = Shape { dims: [1, 32, 16, 1] }
Config MaxPool2dConfig { kernel_size: [2, 2], strides: [2, 2], padding: Valid, ceil_mode: false }

Also note that Burn's ceil_mode implicitly defaults to false (the same as PyTorch's default). Setting it to true will alter the output shape, but it should not affect your example. Below is the corresponding Python code for others to learn from:

import torch
import torch.nn as nn

m_false = nn.MaxPool2d((2, 2), ceil_mode=False)
m_true = nn.MaxPool2d((2, 2), ceil_mode=True)

ones = torch.ones(1, 32, 32, 3)
print("Input shape", ones.shape)

print("Output shape when ceil_mode is false", m_false(ones).shape)
print("Output shape when ceil_mode is true", m_true(ones).shape)

Output:

Input shape torch.Size([1, 32, 32, 3])
Output shape when ceil_mode is false torch.Size([1, 32, 16, 1])
Output shape when ceil_mode is true torch.Size([1, 32, 16, 2])
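The ceil_mode difference above can also be checked with plain shape arithmetic: ceil_mode swaps the floor for a ceiling in the pooling output-size formula (this sketch ignores PyTorch's additional rule that the last window must start inside the input, which doesn't come into play for these sizes; the helper name `pooled_dim` is my own):

```python
import math

def pooled_dim(size: int, kernel: int, stride: int, ceil_mode: bool = False) -> int:
    """Output length of a 1-D max pool with no padding."""
    frac = (size - kernel) / stride
    n = math.ceil(frac) if ceil_mode else math.floor(frac)
    return n + 1

# Last two dims of [1, 32, 32, 3] with kernel (2, 2), stride (2, 2):
print([1, 32, pooled_dim(32, 2, 2), pooled_dim(3, 2, 2)])             # [1, 32, 16, 1]
print([1, 32, pooled_dim(32, 2, 2, True), pooled_dim(3, 2, 2, True)]) # [1, 32, 16, 2]
```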

Let us know if this resolves your issue.

antimora avatar Aug 20 '23 00:08 antimora

It does, thank you.

bayedieng avatar Aug 20 '23 08:08 bayedieng

@bayedieng @antimora Not sure if we should set the default pooling strides to the kernel size. This error is likely to happen a lot, and it may be a good idea to have the same defaults as PyTorch, at least for common modules.

nathanielsimard avatar Aug 20 '23 23:08 nathanielsimard

@nathanielsimard Yeah, probably a good idea. We should make this change before a release, since max pool is new.

antimora avatar Aug 20 '23 23:08 antimora

Re-opening this to make the strides default to the kernel size.

antimora avatar Aug 21 '23 15:08 antimora

Hey, @antimora should I take this issue?

0x-chaitu avatar Jul 22 '24 15:07 0x-chaitu


Yes. Please go ahead.

antimora avatar Jul 22 '24 15:07 antimora