gchanan
have you tried a recent cudnn.torch? This commit likely fixed this issue: https://github.com/soumith/cudnn.torch/commit/7f3e2b22c50d12c8583f33ff792c88d692bcef49
CC @wenleix @VitalyFedyunin
Is this fixed now? Also... should we move these issues to the PyTorch GitHub?
`foo_` treats the output as an input, whereas `foo_out` doesn't.
I mean it's not an input to the actual function being computed (e.g. add): `Type.add_(x, y)` is roughly (ignoring things like resizing and in-place details) `x.set_(x + y)`, while `Type.add_out(z, x, y)` is...
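To make the distinction concrete, here is a rough sketch in ATen-style C++ (the factory calls and the `at::add_out` overload used here are assumptions for illustration, not the exact code above):

```cpp
#include <ATen/ATen.h>

void example() {
  at::Tensor x = at::ones({2, 2});
  at::Tensor y = at::ones({2, 2});
  at::Tensor z = at::empty({2, 2});

  // In-place: x is both an input and the output (x <- x + y).
  x.add_(y);

  // _out: z is only written to (z <- x + y); its previous contents are
  // never read by the computation.
  at::add_out(z, x, y);
}
```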
Could all methods on `Type` be const (I haven't checked)? One thing that is more painful than necessary is passing around `Type`s (that you know exist, so don't want the...
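As a self-contained illustration of the const point (not ATen code, names hypothetical): if every method on `Type` were const, a `Type` that is known to exist could simply be passed as a `const Type&`, with no pointer and no null check at call sites.

```cpp
// Hypothetical Type with only const methods.
struct Type {
  int elementSizeInBytes() const { return 4; }
};

// The const reference documents "this Type exists and is only read";
// no null check is needed, and const contexts can use it too.
int bytesFor(const Type& t, int numel) {
  return numel * t.elementSizeInBytes();
}
```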
What about generating an "undefined" `TensorImpl` type that just throws exceptions in every function call, and having the pImpl of current undefined Tensors assigned to a (static) instance of the...
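A minimal self-contained sketch of that null-object idea (names hypothetical, not the actual ATen classes):

```cpp
#include <stdexcept>

struct TensorImpl {
  virtual ~TensorImpl() = default;
  virtual long long numel() const = 0;
};

// Every method throws, so calls on an undefined Tensor fail loudly
// instead of dereferencing a null pImpl.
struct UndefinedTensorImpl final : TensorImpl {
  long long numel() const override {
    throw std::runtime_error("numel() called on an undefined Tensor");
  }
  // One shared static instance for all undefined Tensors.
  static UndefinedTensorImpl* singleton() {
    static UndefinedTensorImpl instance;
    return &instance;
  }
};

struct Tensor {
  TensorImpl* pImpl;
  // Default-constructed (undefined) Tensors point at the singleton.
  Tensor() : pImpl(UndefinedTensorImpl::singleton()) {}
  long long numel() const { return pImpl->numel(); }
};
```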
Here's a sketch of how this could work for a new native function `foo` with a Tensor arg `self`: 1) User writes the `foo_out` variant as a native function, e.g.:...
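One possible shape for that user-written `_out` variant (a hypothetical `foo` that squares its input; the signature and body here are assumptions for illustration, not the original example):

```cpp
#include <ATen/ATen.h>

namespace at { namespace native {

// The output is resized and overwritten; it is never read as an input.
Tensor& foo_out(Tensor& result, const Tensor& self) {
  result.resize_(self.sizes());
  result.copy_(self);
  return result.mul_(self);  // result = self * self
}

}} // namespace at::native
```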
BTW, I'm not sure we want to implement this fully yet; Variables don't currently work with the `_out` variants (not for ATen-backed, pure-C++, or pure-Python autograd functions), so we...
Also, 0-strided tensors don't work in in-place operations.