Tianqi Chen

Results: 637 comments by Tianqi Chen

@diyessi Again, I understand the advantages of compilation tricks and primitive ops, and they can be achieved with the approach in my last post. But my point is there are...

This enables shape inference to automatically fill in the shape, as in http://mxnet.io/api/python/symbol.html#mxnet.symbol.Reshape
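The special target-shape values that make this inference possible (0 to copy an input dimension, -1 to infer one so the element count is preserved) can be sketched in plain Python. This is an illustrative re-implementation of the rule, not the actual MXNet code:

```python
def infer_reshape(src_shape, target_shape):
    """Sketch of reshape shape inference: a 0 in target_shape copies the
    corresponding input dimension, and a single -1 is inferred so that the
    total element count is preserved."""
    out = []
    for i, d in enumerate(target_shape):
        if d == 0:                      # copy the input dimension at position i
            out.append(src_shape[i])
        else:
            out.append(d)
    total = 1
    for d in src_shape:
        total *= d
    if -1 in out:                       # infer the remaining dimension
        known = 1
        for d in out:
            if d != -1:
                known *= d
        out[out.index(-1)] = total // known
    return tuple(out)

# infer_reshape((2, 3, 4), (0, -1)) -> (2, 12)
```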

opened a voting thread https://github.com/apache/tvm/issues/12651

Thank you @MeeraN7 for the RFC. SVE is certainly an interesting topic. Because we do not yet have SVE support in TIR, it would be useful to think carefully about...

cc @junrushao1994 @vinx13, as this may be related to future tensorization optimizations

Thanks @MeeraN7. I think the overall loop legalization makes sense. I wonder, then, whether it is necessary to update node constructs such as Ramp, or can we directly...

Thanks @MeeraN7 @giuseros. To make the discussion more concrete, right now the IR after legalization looks like ```c++ for (i: int32, 0, 17; i += VL) { C_2[ramp(i, 1, VL)] = ((int8xVL*)A_2[ramp(i, 1,...
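The legalized loop above steps by a runtime vector length VL over a trip count (17) that need not be a multiple of VL, so the final iteration must mask off out-of-range lanes. A plain-Python simulation of that shape (an illustrative sketch, not TVM code; `ramp` and `vector_add` are hypothetical names):

```python
def ramp(base, stride, lanes):
    # ramp(base, stride, lanes) -> [base, base + stride, ..., base + stride*(lanes-1)]
    return [base + stride * k for k in range(lanes)]

def vector_add(a, b, n, vl):
    """Simulate `for (i = 0; i < n; i += VL) c[ramp(i,1,VL)] = a[...] + b[...]`
    where vl plays the role of the runtime vector length VL and the tail
    iteration is predicated."""
    c = [0] * n
    for i in range(0, n, vl):
        for j in ramp(i, 1, vl):
            if j < n:                   # predicate: mask lanes past the end
                c[j] = a[j] + b[j]
    return c
```

For n = 17 and vl = 4, the last iteration starts at i = 16 and only lane 16 is active; the other three lanes are masked out by the predicate.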

Thanks @MeeraN7. Yes, I get what you mean. Right now we are adding an "is_scalable" field to indicate that the broadcast and ramp are "context dependent" on VL. Additionally,...
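The idea of a context-dependent Ramp can be sketched as follows. This is an assumption-laden illustration, not TVM's actual TIR node definition: when `is_scalable` is set, the lane count is a multiplier of the runtime VL rather than a fixed width.

```python
from dataclasses import dataclass

@dataclass
class Ramp:
    """Sketch of a Ramp node carrying an is_scalable flag (hypothetical,
    not the real TIR class). With is_scalable=True, `lanes` multiplies the
    runtime vector length VL instead of being a fixed lane count."""
    base: int
    stride: int
    lanes: int
    is_scalable: bool = False

    def indices(self, vl=1):
        n = self.lanes * vl if self.is_scalable else self.lanes
        return [self.base + self.stride * k for k in range(n)]
```

A fixed `Ramp(0, 1, 4)` always yields four indices, while a scalable `Ramp(0, 1, 1, True)` yields as many indices as the runtime VL provides.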

Thanks @sjoerdmeijer, sorry for getting back to this late. If LLVM also encodes the SVE vector as a special type (and not tying the n), it could be a precedent...

The ABI issue is important since it affects DLPack, so I would suggest we agree on the encoding convention before we proceed. I am fine with having a scalable...
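One possible encoding convention for the ABI discussion above (an assumption for illustration only, not the convention DLPack actually adopted) is to reuse the existing `lanes` field and mark scalable vectors with a negative lane count, keeping fixed-width vectors unchanged:

```python
def encode_lanes(lanes, scalable=False):
    """Hypothetical convention: fixed-width vectors keep a positive lane
    count; scalable vectors store the per-vscale lane count negated."""
    return -lanes if scalable else lanes

def decode_lanes(encoded):
    """Return (lanes, is_scalable) for an encoded lane count."""
    return (abs(encoded), encoded < 0)
```

The appeal of such a scheme is ABI compatibility: existing consumers that only ever see positive lane counts are unaffected, while scalable-aware consumers can recover both the flag and the multiplier from the single field.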