Kai-Hsin Wu
I would love to chat with @IvanaGyro and the people who are involved in this. I feel like it would be nice to have a conversation on why it was design...
I suggest we have a meeting, and maybe someone can take notes?
I believe this is the goal. The problem is that the build process takes too long, so updating just the user guides takes forever. The user guides were designed more as a "blog"...
Let's chat about this a bit more. Specifically, I also want to explore C++20 modules a bit.
Would you mind focusing on the storage refactor first before bumping this up to C++20? Or do you need any features from C++20?
I agree with @ianmccul. For now, yes, let's go with an `enum`.
This can be done either with streams or directly with threads. CUDA now gives each host thread its own default stream instead of sharing a single default stream.
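A minimal sketch of the "directly with threads" option, assuming the work items are independent. `run_parallel` and the squaring task are hypothetical stand-ins for real kernel launches; with CUDA's per-thread default stream (compiling with `nvcc --default-stream per-thread`), launches made from different host threads like these would land on different streams automatically:

```cpp
#include <thread>
#include <vector>

// Each host thread handles one task; out[i] stands in for the result of a
// kernel launched from thread i. Under per-thread default streams, these
// launches would not serialize against each other.
std::vector<int> run_parallel(int n) {
  std::vector<int> out(n, 0);
  std::vector<std::thread> workers;
  for (int i = 0; i < n; ++i)
    workers.emplace_back([&out, i] { out[i] = i * i; });  // placeholder work
  for (auto& t : workers) t.join();
  return out;
}
```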
The reason they stay member functions is that you might need a different implementation depending on the impl of the UniTensor (e.g. Fermion/Dense/Sparse).
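To illustrate the point, a tiny hypothetical sketch (the class and method names here are made up, not the actual Cytnx interface): keeping the operation a virtual member function lets each impl override it, so the caller never has to branch on the tensor kind:

```cpp
#include <string>

// Hypothetical base class standing in for the UniTensor impl hierarchy.
struct UniTensor_base {
  virtual ~UniTensor_base() = default;
  virtual std::string contract_kind() const = 0;  // impl-specific behavior
};

// Each impl (Dense/Sparse/Fermion, etc.) supplies its own version.
struct DenseUniTensor : UniTensor_base {
  std::string contract_kind() const override { return "dense"; }
};
struct SparseUniTensor : UniTensor_base {
  std::string contract_kind() const override { return "sparse"; }
};

// Callers dispatch through the base class, no switch on the tensor kind.
std::string dispatch(const UniTensor_base& ut) { return ut.contract_kind(); }
```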
I don't see how we can avoid doing that if we want to support different backends. To add a backend, one eventually will need to specify which target API to...
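One way this could look, as a hedged sketch (the `Device` enum and the `*_impl` functions are hypothetical, not the real Cytnx symbols): each op carries a small dispatch table keyed on the backend, so adding a backend means adding one case that names its target API:

```cpp
#include <stdexcept>
#include <string>

// Hypothetical backend tag; a new backend adds an enumerator here.
enum class Device { Cpu, Cuda };

// Per-backend implementations of one op; in practice these would call
// the backend's target API (BLAS, cuBLAS, ...).
std::string add_impl_cpu()  { return "cpu_add"; }
std::string add_impl_cuda() { return "cuda_add"; }

// Frontend entry point: routes to the chosen backend's implementation.
std::string add(Device dev) {
  switch (dev) {
    case Device::Cpu:  return add_impl_cpu();
    case Device::Cuda: return add_impl_cuda();
  }
  throw std::runtime_error("unknown device");
}
```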
Element-wise sign should already be implemented via the unary `operator-()` overload? https://github.com/Cytnx-dev/Cytnx/blob/f2b2ed76c3ad855c4cef5100fc01cccf0724bc83/include/Tensor.hpp#L156 For the UniTensor I am not sure. You can use `-1.*UniTensor` in the interim if it does not exist.