pytorch-summary
How to pass input_size for 1d input?
Hi - thanks for the library. I'm finding it very useful so far.
The one issue I'm having is that I'm unsure how to pass `input_size` for a 1d input. So if, for example, I want to run `summary()` on a simple feed-forward network with 512 input features, how would this be done? So far I've tried `input_size=(512)`, `input_size=(1, 512)`, `input_size=(1, 1, 512)`, `input_size=(512, 1)`, and `input_size=(512, 1, 1)`, all of which result in errors.
Am I missing something simple here? Or is the 1d use-case just not supported at this point?
I am having the same issue with a 1d input. It seems like I should use `input_size=(1, 1625)`, but I am getting the following error.
```
Traceback (most recent call last):
  File "odenet_mosquito.py", line 448, in <module>
    summary(model, input_size=(1, 1625))
  File "/home/josh/.local/lib/python3.6/site-packages/torchsummary/torchsummary.py", line 72, in summary
    model(*x)
  File "/home/josh/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/josh/.local/lib/python3.6/site-packages/torch/nn/modules/container.py", line 92, in forward
    input = module(input)
  File "/home/josh/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "odenet_mosquito.py", line 153, in forward
    out = odeint(self.odefunc, x, self.integration_time, rtol=args.tol, atol=args.tol)
  File "/home/josh/Documents/Graph_Networks/Deep_Learning/Neural_Ordinary_Differential_Equations/torchdiffeq/torchdiffeq/_impl/adjoint.py", line 129, in odeint_adjoint
    ys = OdeintAdjointMethod.apply(*y0, func, t, flat_params, rtol, atol, method, options)
  File "/home/josh/Documents/Graph_Networks/Deep_Learning/Neural_Ordinary_Differential_Equations/torchdiffeq/torchdiffeq/_impl/adjoint.py", line 18, in forward
    ans = odeint(func, y0, t, rtol=rtol, atol=atol, method=method, options=options)
  File "/home/josh/Documents/Graph_Networks/Deep_Learning/Neural_Ordinary_Differential_Equations/torchdiffeq/torchdiffeq/_impl/odeint.py", line 72, in odeint
    solution = solver.integrate(t)
  File "/home/josh/Documents/Graph_Networks/Deep_Learning/Neural_Ordinary_Differential_Equations/torchdiffeq/torchdiffeq/_impl/solvers.py", line 29, in integrate
    self.before_integrate(t)
  File "/home/josh/Documents/Graph_Networks/Deep_Learning/Neural_Ordinary_Differential_Equations/torchdiffeq/torchdiffeq/_impl/dopri5.py", line 78, in before_integrate
    f0 = self.func(t[0].type_as(self.y0[0]), self.y0)
  File "/home/josh/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/josh/Documents/Graph_Networks/Deep_Learning/Neural_Ordinary_Differential_Equations/torchdiffeq/torchdiffeq/_impl/adjoint.py", line 122, in forward
    return (self.base_func(t, y[0]),)
  File "/home/josh/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "odenet_mosquito.py", line 136, in forward
    out = self.conv1(t, out)
  File "/home/josh/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
    hook_result = hook(self, input, result)
  File "/home/josh/.local/lib/python3.6/site-packages/torchsummary/torchsummary.py", line 20, in hook
    summary[m_key]["input_shape"][0] = batch_size
IndexError: list assignment index out of range
```
I used `summary(model, tuple([512]))` and it worked.
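For what it's worth, the reason `input_size=(512)` fails while `tuple([512])` works is plain Python: parentheses alone don't create a tuple, so `(512)` is just the integer 512, and torchsummary then can't treat it as a shape. A one-element tuple needs a trailing comma or an explicit `tuple()` call:

```python
# `(512)` is just a parenthesized int; a trailing comma (or tuple())
# is what actually makes a one-element tuple.
a = (512)          # the int 512, NOT a tuple
b = tuple([512])   # the one-element tuple (512,)
c = (512,)         # same one-element tuple, written directly

print(type(a).__name__)  # int
print(b == c)            # True
```

So `summary(model, (512,))` should behave the same as `summary(model, tuple([512]))`.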
Thanks, that worked. For me the input was: `summary(model, input_size=tuple([1, 1625]))`.
@jayleverett, does this command work for you? `summary(model, (1, 512))`, as the following code snippet works with the current version of torchsummary on GitHub. The 1 at the front indicates that there is only one channel in the input.
```python
import torch.nn as nn
from torchsummary import summary

model = nn.Sequential(
    nn.Linear(512, 10),
)
summary(model, (1, 512), device="cpu")
```
I'm new to pytorch-summary. What would the input values (x, y) be for a model with `Conv1d(in_channels=19, out_channels=5, kernel_size=3, stride=1)`, i.e. in `summary(model, input_size=tuple([x, y]))`?
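Since torchsummary prepends the batch dimension itself, for a `Conv1d` layer the `input_size` would presumably be `(channels, length)`, so x = 19 (the `in_channels`) and y = your sequence length. A small sketch of the output-length arithmetic, using the shape formula from the `Conv1d` documentation (the helper name and the example length of 100 are illustrative, not from this thread):

```python
# Output length of a 1d convolution, per the Conv1d shape formula:
# L_out = floor((L_in + 2*padding - dilation*(kernel_size - 1) - 1) / stride) + 1
def conv1d_out_len(l_in, kernel_size, stride=1, padding=0, dilation=1):
    return (l_in + 2 * padding - dilation * (kernel_size - 1) - 1) // stride + 1

# With kernel_size=3, stride=1, no padding, each Conv1d trims 2 off the length:
length = 100  # hypothetical sequence length (the "y" above)
print(conv1d_out_len(length, kernel_size=3, stride=1))  # 98
```

So `summary(model, input_size=(19, 100))` would describe a batch of 19-channel sequences of length 100, and the layer's output shape would be `(5, 98)`.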