masahi
I think it is a good idea to invite the people working on RISC-V support for TVM to review/discuss, since the RISC-V vector extension is similar to ARM SVE. I...
Thanks @MeeraN7 @giuseros, I like the approach of making the vectorized loop explicit with a `VL` parameter at the TIR level, in contrast to how fixed-width vectorization is done today....
Thanks @sjoerdmeijer @giuseros, I didn't imagine that there would be a case where mixing fixed and scalable vectorization is beneficial. I prefer `s[C].vectorize(..., scalable=True)` to `s[C].vectorize_scalable(...)` but both seem fine....
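For readers skimming the thread, here is a minimal sketch contrasting the two spellings discussed above against the existing TE schedule API. Note that neither the `scalable=True` flag nor `vectorize_scalable` exists in upstream TVM; both are hypothetical API shapes from this RFC discussion, shown commented out.

```python
import tvm
from tvm import te

# A simple elementwise add over a dynamic-length axis.
n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.placeholder((n,), name="B")
C = te.compute((n,), lambda i: A[i] + B[i], name="C")

s = te.create_schedule(C.op)
outer, inner = s[C].split(C.op.axis[0], factor=4)

# Today: fixed-width vectorization over the inner axis.
s[C].vectorize(inner)

# Proposed option 1 (the spelling preferred above): a flag on the
# existing primitive.
#   s[C].vectorize(inner, scalable=True)
# Proposed option 2: a dedicated primitive.
#   s[C].vectorize_scalable(inner)

print(tvm.lower(s, [A, B, C], simple_mode=True))
```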
@YuchenJin:
> Relax can be viewed as complementary to Relay. Relay focuses on high-level op transformations, while the current Relax passes focus on TIR-graph co-transformations that can enable flexible fusion...
We just need to add constant folding of the fill value; see this commit: https://github.com/masahi/tvm/commit/3e88280ec3a0b943d8aac76a7a99f75ffd0ac863. Can you open a PR with your test case?
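To illustrate what folding the fill value means, here is a minimal, hypothetical Relay sketch (not the actual change; see the commit linked above): a `full` op whose fill value is a foldable expression, rather than a literal, collapses after `FoldConstant`.

```python
import tvm
from tvm import relay

# Hypothetical illustration only; the real fix is in the commit above.
# A fill value that is a foldable expression, not a literal constant:
fill = relay.const(1.0) + relay.const(2.0)
out = relay.full(fill, shape=(2, 2), dtype="float32")

mod = tvm.IRModule.from_expr(relay.Function([], out))
# FoldConstant evaluates constant subexpressions at compile time; here
# the whole full op folds away into a constant tensor of 3.0s.
mod = relay.transform.FoldConstant()(mod)
print(mod)
```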
You need to include my change in `frontend/pytorch.py` from https://github.com/apache/tvm/issues/12844 as well.
@tqchen Thanks, I like the idea of a canonicalized key. Last week I discussed with @zxybazh the possibility of identifying two subgraphs with an identical anchor-op workload (conv2d...
Superseded by https://github.com/apache/tvm/pull/13091
I just ran the tests after the refactoring change in https://github.com/apache/tvm/pull/12895; it seems they have been fixed now.
@tvm-bot rerun