
ATen: A TENsor library for C++11

68 ATen issues

```
struct ErrorReport : public std::exception {
  ErrorReport(const ErrorReport& e)
      : ss(e.ss.str()), context(e.context), the_message(e.the_message) {}
  ErrorReport() : context(nullptr) {}
  ErrorReport(TreeRef context) : context(context) {}
  virtual const char* what() const noexcept...
```

The ATen way(TM) is to *always* compile, and give an error at runtime if a feature isn't supported. This means we must not feature test for CUDA in headers.

In the past, we got away with not very detailed error messages, because you could always figure it out by looking at the call stack, seeing where we called from...

Consider the following C++ transcript (the `<iostream>` include was eaten by HTML rendering in the original):

```
MacBook-Pro-97:cpp-inline-method ezyang$ cat A.h
void f();
MacBook-Pro-97:cpp-inline-method ezyang$ cat B.h
#include "A.h"
#include <iostream>
inline void f() { std::cerr
```

I was recently porting some Python code to ATen and I noticed some small API discrepancies. I'm going to fix some of these. - [ ] No `type_as` method -...

My understanding of the situation:

* A `foo_out` function will perform a `resize` on the output tensor, and then write data into it.
* A `foo_` function will NOT perform...

Right now, ATen inconsistently uses `int64_t` versus `int`. We should decide what we actually want to do, and stick to it.

In PyTorch, we have an AutoGPU guard for calling cudaGetDevice appropriately based on input tensors, so that we put the result in the correct tensor. At the moment, we manage...

Right now, there are ad hoc forward declarations of `at::Tensor` in ScalarType and a few other headers. This means that if you transitively include these headers, you get a forward...

At the moment, we seem to have multiple definitions of tanh. From Declarations:

```
[[
  name: tanh
  types:
    - floating_point
  backends:
    - CPU
    - CUDA
  variants:
    - method
    - function
...
```