Sergio Sánchez Ramírez
So the solution goes through setting the flag `--cpu=aarch64` so that the correct configurable build attribute is set. But if I add it to `BAZEL_BUILD_FLAGS`, then I get the following...
I think we are missing the toolchain configuration for aarch64-linux from Yggdrasil in Reactant. I'm trying to fix it in the branch `reactant-aarch64-linux`.
UPDATE: I think I managed to add Yggdrasil's aarch64-linux-gnu cross-compilation toolchain (first time I do something in Bazel and it kinda works)! Now I'm getting some of the usual cross-compilation errors, like missing headers...
@wsmoses this error is weird because we add the paths where "features.h" is to the toolchain:

```
sandbox:${WORKSPACE}/srcdir/Reactant.jl/deps/ReactantExtra # find /opt/aarch64-linux-gnu/ -name features.h
/opt/aarch64-linux-gnu/aarch64-linux-gnu/include/c++/10.2.0/parallel/features.h
/opt/aarch64-linux-gnu/aarch64-linux-gnu/sys-root/usr/include/features.h
```

I added those paths...
The only thing not working right now is `aarch64-linux-gnu` with CUDA, most probably because it's using GCC when we should be using Clang. Let's merge it like it is now, ...
The edge cases that are currently failing are the following:

1. `A[j, k] = C[i, j, k]`: sum over a dimension (`i`)
2. `A[i, j] = C[i, j, i]`: diagonal selection...
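For concreteness, a plain-Julia sketch of what those two patterns should compute (sizes and variable names here are just illustrative):

```julia
# Illustrative sizes only.
C = rand(2, 3, 4)

# Case 1: A[j, k] = Σ_i C[i, j, k] — the index `i` appears only on the right-hand side,
# so it has to be summed over instead of being pairwise-contracted.
A1 = dropdims(sum(C; dims=1); dims=1)   # size (3, 4)

# Case 2: A[i, j] = C[i, j, i] — a partial diagonal: the repeated index `i` is kept
# in the output rather than traced over.
D = rand(2, 3, 2)
A2 = [D[i, j, i] for i in axes(D, 1), j in axes(D, 2)]   # size (2, 3)
```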
I'm thinking that we should have something like an "EinsumInterface.jl", similar to what DifferentiationInterface.jl does for AD engines.
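Purely hypothetical, but the shape of such an interface could be something like this (every name below is made up, nothing of it exists yet):

```julia
# Hypothetical sketch of an "EinsumInterface.jl"-style API (nothing here exists yet).
abstract type AbstractEinsumBackend end

struct TensorOperationsBackend <: AbstractEinsumBackend end
struct OMEinsumBackend <: AbstractEinsumBackend end

"""
    einsum(backend, dst_inds, (A, a_inds), (B, b_inds))

Contract `A` and `B` according to runtime index labels, dispatching on `backend`.
Each einsum engine would own its method of this function, the way
DifferentiationInterface.jl lets AD packages own their `gradient`/`jacobian` methods.
"""
function einsum end
```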
@lkdvos I've been thinking about how to circumvent these issues and what we can do from Tenet's side. So for each case (rough sketch after the list):

- case 1: I can just detect and call...
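Something like this is what I have in mind for case 1 (just a sketch, not Tenet's actual code; the helper name and argument layout are made up):

```julia
# Sketch: strip "sum-over" indices (case 1) before handing the rest to the
# regular tensorcontract!/tensortrace! path. Helper name is hypothetical.
function sum_free_indices(src::AbstractArray, src_inds::Vector{Symbol}, dst_inds::Vector{Symbol})
    # case 1: an index that appears exactly once in `src_inds` and not in `dst_inds`
    summed = [d for (d, ind) in enumerate(src_inds)
              if count(==(ind), src_inds) == 1 && ind ∉ dst_inds]
    isempty(summed) && return src, src_inds
    reduced = dropdims(sum(src; dims=Tuple(summed)); dims=Tuple(summed))
    keep = setdiff(eachindex(src_inds), summed)
    return reduced, src_inds[keep]
end

# case 2 (the kept diagonal) could be handled similarly, with a gather of the
# repeated index before the contraction.
```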
Ahh, I can't use the `@tensor` macro because the indices are chosen at runtime, so my `contract` function directly calls `tensortrace!` and `tensorcontract!`. I can detect these edge cases in...
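For context, a minimal illustration of the parse-time vs. runtime distinction (illustrative only, not Tenet's code):

```julia
using TensorOperations

A = rand(2, 3)
B = rand(3, 4)

# `@tensor` needs the index pattern written literally in the source code,
# so it gets resolved at parse/compile time:
@tensor C[i, k] := A[i, j] * B[j, k]

# With labels only known at runtime (stored alongside the tensor), the macro
# can't be used, which is why `contract` goes through the function interface
# (`tensorcontract!` / `tensortrace!`) instead.
```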
Rebased on top of #329