
ATen: A TENsor library for C++11

68 ATen issues

We need a way to efficiently represent a tensor with a concrete type and shape, but which is filled with all zeros, without having to actually materialize the tensor in...
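For concreteness, here is a minimal C++ sketch of one way such a non-materialized zero tensor could be represented; the struct and field names are hypothetical, not existing ATen API:

```
#include <cstdint>
#include <vector>

// Hypothetical: record only the dtype and shape of an all-zeros tensor,
// deferring any allocation until a kernel actually needs real storage.
struct ZeroTensor {
  int scalar_type;              // stand-in for ATen's ScalarType, e.g. kFloat
  std::vector<int64_t> sizes;   // concrete shape, e.g. {4, 3}

  // Many operations can be answered symbolically without materializing:
  // zeros + x == x, zeros * x == zeros, sum(zeros) == 0, and so on.
  int64_t numel() const {
    int64_t n = 1;
    for (auto s : sizes) n *= s;
    return n;
  }
};
```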

I spent some time scratching my head on why there were both `prelu` and `prelu_forward` when they looked exactly identical. The answer is, based on reading nn_parse.py, that prelu doesn't...

`max_pool2d` returns an `output` tensor and an `indices` tensor; only the `output` tensor is differentiable. `prelu_backward`, on the other hand, returns a `grad_input` and `grad_weight`; both are differentiable. We need...
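To make the asymmetry concrete, here are illustrative declarations of the two return signatures; the types below are stand-ins, not the actual ATen declarations:

```
#include <cstdint>
#include <tuple>
#include <vector>

// Stand-in types; the real ATen Tensor/IntList differ.
struct Tensor {};
using IntList = std::vector<int64_t>;

// Two outputs, but only `output` is differentiable; `indices` is integer
// bookkeeping consumed by the backward pass.
std::tuple<Tensor /*output*/, Tensor /*indices*/>
max_pool2d(const Tensor& input, IntList kernel_size);

// Two outputs, and *both* are differentiable, so per-output
// differentiability has to be recorded somewhere in the metadata.
std::tuple<Tensor /*grad_input*/, Tensor /*grad_weight*/>
prelu_backward(const Tensor& grad_output, const Tensor& input, const Tensor& weight);
```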

Both PyTorch and ATen (standalone) produce libATen.so files. This is hazardous because if they are not ABI compatible (and they probably are not), you will get extremely hard to diagnose...

PyTorch CUDA tensor constructors have an undocumented keyword argument `device` which allows you to specify what GPU device the tensor should be allocated on. Looking at `Type` in ATen (the...
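As a rough illustration of the gap (the `device` parameter below is hypothetical, not an existing ATen overload):

```
// Type identifies backend + scalar type, but not a CUDA device index:
//   Type& T = CUDA(kFloat);
//   auto t = T.ones({2, 3});                // lands on the current device
// One conceivable shape for parity with the Python `device` kwarg
// (purely illustrative):
//   auto t = T.ones({2, 3}, /*device=*/1);  // pin the allocation to GPU 1
```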

Numpy allows dimensional operations on scalars, but requires that the dimension passed in is None, 0 (or -1, which is equivalent with wrap_dim):

```
>>> np.array(5).sum(None)
5
>>> np.array(5).sum(0)
5
...
```

This currently fails, but should pass:

```
Type & T = CPU(kFloat);
auto t = T.ones({0});
// FIXME: you should be able to reduce over size {0}
try {
  t.sum(0);
...
```

Currently we have two overloads for things where 0-dim tensors can occur:

- Tensor + Tensor
- Tensor + Scalar

Instead we should only have Tensor + Tensor. However, we still need...
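Illustrative declarations of the two overload families in question (stand-in types, not the actual ATen signatures):

```
struct Tensor {};
struct Scalar {};

Tensor add(const Tensor& self, const Tensor& other);  // Tensor + Tensor
Tensor add(const Tensor& self, Scalar other);         // Tensor + Scalar

// The proposal, as the issue reads: keep only the Tensor + Tensor overload
// and pass the scalar operand as a 0-dim tensor of the appropriate type.
```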