ATen
Network inference speed
Hi,
I have a neural network that expects inputs of shape (batch_size, 1, 112, 112). With a batch size of 1, a forward pass takes about 2 ms. When I increase the batch size to 20, a forward pass takes about 35 ms, which is much slower than the PyTorch counterpart at ~4 ms. Is this expected in ATen?
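For what it's worth, per-pass timings like these are sensitive to how they are measured: the first few passes include one-time allocation and kernel-selection costs, so a comparison without warmup iterations can be misleading. A minimal timing harness (a generic Python sketch, not specific to ATen or PyTorch; the `run` callable standing in for one forward pass is hypothetical) might look like:

```python
import time

def time_forward(run, n_iters=100, warmup=10):
    """Return the average wall-clock time per call of `run`, in milliseconds.

    `run` is a hypothetical zero-argument callable that performs one
    forward pass (e.g. a closure over a model and a fixed input batch).
    """
    # Warmup: exclude one-time setup costs (allocations, kernel selection)
    # from the measurement.
    for _ in range(warmup):
        run()
    start = time.perf_counter()
    for _ in range(n_iters):
        run()
    elapsed = time.perf_counter() - start
    return elapsed / n_iters * 1000.0
```

Measuring both the ATen and PyTorch builds with the same warmup and iteration counts makes the 35 ms vs ~4 ms comparison more trustworthy.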