tvm
Open deep learning compiler stack for CPU, GPU, and specialized accelerators
### Actual behavior

```
Traceback (most recent call last):
  File "/data/qshenaf/remote_pc/TirFuzz/bugs/bug1.py", line 10, in <module>
    mod = tir.transform.FP8StorageLegalize()(mod)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/qshenaf/envs/tvm/python/tvm/ir/transform.py", line 238, in __call__
    return _ffi_transform_api.RunPass(self, mod)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "tvm/_ffi/_cython/./packed_func.pxi", ...
```
The [quick_start](https://tvm.apache.org/docs/get_started/tutorials/quick_start.html) example fails:

```
Traceback (most recent call last):
  File "/usr/ports/misc/py-tvm/test-x.py", line 30, in <module>
    ex = tvm.compile(mod, target)
         ^^^^^^^^^^^
AttributeError: module 'tvm' has no attribute 'compile'
```

Version: 0.19.0...
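A version-tolerant wrapper can paper over this kind of API rename. The sketch below assumes `tvm.compile` is a newer spelling absent from 0.19.x and that older releases expose a build entry point under `tvm.relax` instead (both names are assumptions here, not verified against the 0.19 docs); it uses a stand-in namespace object so it runs without TVM installed.

```python
from types import SimpleNamespace

def pick_build_fn(tvm_mod):
    """Return tvm_mod.compile when present, else fall back to
    tvm_mod.relax.build (assumed pre-`tvm.compile` spelling)."""
    if hasattr(tvm_mod, "compile"):
        return tvm_mod.compile
    return tvm_mod.relax.build

# Stand-in for an older TVM module that lacks `compile`.
old_tvm = SimpleNamespace(
    relax=SimpleNamespace(build=lambda mod, target: ("built", mod, target))
)
ex = pick_build_fn(old_tvm)("mod", "llvm")
print(ex)  # → ('built', 'mod', 'llvm')
```

The same `hasattr` probe keeps one script working across releases on either side of the rename.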
In this PR I have added a new spec for PagedKVCache; this spec will replace the object spec for PagedKVCache in downstream mlc-llm.
### Expected behavior

For the following onnx model, the output of "v6_0" should be 0.

```python
[array(518, dtype=int64),
 array([ True,  True,  True,  True,  True,  True,  True,  True,  True,
         True,  True, ...
```
### Expected behavior

I expect `model.so` (a TVM-compiled model for MIPS32) to be successfully loaded using `dlopen` or `TVMModLoadFromFile` on my MIPS32 development board, given that `libtvm_runtime.so` loads and runs...
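When a cross-compiled shared object refuses to load, the loader's own error string (the `dlerror()` text) usually names the missing dependency or the ABI mismatch, so surfacing it is the first diagnostic step. A minimal sketch using only `ctypes`, with a deliberately nonexistent path so it runs anywhere without TVM; `model.so` on the actual board would be substituted in:

```python
import ctypes

def try_load(path):
    """Attempt to dlopen a shared object and report the loader's error."""
    try:
        return ctypes.CDLL(path)
    except OSError as err:
        # On Linux this carries the dlerror() string, e.g. "wrong ELF class"
        # for an architecture mismatch such as a MIPS32 .so on a host of a
        # different ABI, or "cannot open shared object file" for a missing
        # dependency.
        print(f"failed to load {path}: {err}")
        return None

handle = try_load("/nonexistent/model.so")
assert handle is None
```

`readelf -h model.so` and `readelf -d model.so | grep NEEDED` on the board are the natural follow-ups to confirm the ELF machine type and the runtime dependencies.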
As the title says. In particular, the following statement in `verify_well_formed.cc`:

```
Verify(it == currently_defined_.end() || redefine_is_allowed)
```
```
[ RUN      ] AProfileParser.DefaultSVESupportSVESupport
/usr/ports/misc/tvm/work/tvm-0.19.0/tests/cpp/target/parsers/aprofile_test.cc:320: Failure
Value of: Downcast<Bool>(features.at("has_sve"))
  Actual: false
Expected: true
[  FAILED  ] AProfileParser.DefaultSVESupportSVESupport (8 ms)
```

```
[ RUN      ] AProfileParser.DefaultFP16Support
/usr/ports/misc/tvm/work/tvm-0.19.0/tests/cpp/target/parsers/aprofile_test.cc:362: Failure
Value of: ...
```
TVM is not compilable with CUDA 11.4 due to the missing symbols `CUDA_R_8F_E4M3` and `CUBLASLT_MATMUL_DESC_A_SCALE_POINTER` (which were introduced in CUDA 12.x, AFAIK).

### Expected behavior

Successful compile.

### Actual behavior
...
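A hedged sketch of the version gate this implies: per the report, the two symbols first ship with CUDA 12.x, so any FP8/cuBLASLt codepath would need to be conditional on the detected toolkit version. The 12.0 threshold below is an assumption taken from the report, not verified against the CUDA release notes, and the function name is hypothetical.

```python
def fp8_symbols_available(cuda_version):
    """True when the toolkit is assumed new enough for CUDA_R_8F_E4M3 and
    CUBLASLT_MATMUL_DESC_A_SCALE_POINTER (assumption: CUDA >= 12.0)."""
    return tuple(cuda_version) >= (12, 0)

print(fp8_symbols_available((11, 4)))  # → False
print(fp8_symbols_available((12, 1)))  # → True
```

In the C++ sources the equivalent gate would be a preprocessor check on the `CUDA_VERSION` macro from `cuda.h`, so older toolkits simply compile out the FP8 path instead of failing on undefined symbols.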
### Expected behavior

The onnx frontend should import the model correctly.

### Actual behavior

```
Error converting operator Slice, with inputs: [R.shape_of(v12_0), metadata["relax.expr.Constant"][0] # Metadata omitted. Use show_meta=True in script() ...
```
### Expected behavior

The onnx frontend should import the model correctly.

### Actual behavior

```
Error converting operator Expand, with inputs: [R.mean(lv4, axis=[2], keepdims=False), metadata["relax.expr.Constant"][0] # Metadata omitted. Use show_meta=True ...
```