tvm
Open deep learning compiler stack for CPU, GPU, and specialized accelerators
I'm compiling an ONNX model and realized that its input name has the format `input:0`. When I import the model and run it, it works fine; however, if I try...
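For context, a minimal sketch of the import step, assuming a hypothetical `model.onnx` whose single input is named `input:0` with shape `(1, 3, 224, 224)`:

```python
import onnx
from tvm import relay

# Hypothetical model file and input shape; the tensor name keeps the ":0"
# suffix that TensorFlow-exported ONNX graphs often carry.
onnx_model = onnx.load("model.onnx")
shape_dict = {"input:0": (1, 3, 224, 224)}

# Import into Relay, keyed on the exact graph input name.
mod, params = relay.frontend.from_onnx(onnx_model, shape=shape_dict)
```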
This fixes the issue where merging from GitHub Actions (i.e. with the default `GITHUB_TOKEN`) doesn't trigger post-merge GitHub Actions on the commit it creates in `main`. Instead, these jobs are...
This PR adds an initial version of a hand-coded HVX intrinsics implementation of conv2d for the Hexagon target. @kparzysz-quic is a co-author of the patch, as he has written many of...
A pass that fuses `nn.pad` and `qnn.conv2d` for the CMSIS-NN target; a rough sketch of the kind of pattern involved follows. cc @Mousius @areusch @grant-arm @lhutton1 @manupa-arm
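As a rough illustration (not the actual pass), this is the shape of pattern such a fusion would typically match with Relay's dataflow pattern API; the argument counts assumed for `nn.pad` and `qnn.conv2d` are my own and may differ from the pass:

```python
from tvm.relay.dataflow_pattern import is_op, wildcard

# nn.pad(data, pad_value) feeding into qnn.conv2d(data, weight,
# input_zero_point, kernel_zero_point, input_scale, kernel_scale).
pad = is_op("nn.pad")(wildcard(), wildcard())
fused_pattern = is_op("qnn.conv2d")(
    pad, wildcard(), wildcard(), wildcard(), wildcard(), wildcard()
)
```

A rewrite pass would then fold the padding into the convolution wherever this pattern matches, for example via a `DFPatternCallback`.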
This PR builds and tests TVM (running the C++ tests and unit tests) under a minimal configuration with some debug flags enabled:
- `USE_RELAY_DEBUG=ON` in TVM
- `-Wp,-D_GLIBCXX_ASSERTIONS` in TVM
- `-DLLVM_ENABLE_ASSERTIONS=ON` in...
LLVM has switched to C++17 in its development branch. Follow suit to be able to compile LLVM headers.
**Background** This PR adds two tests for unused HVX implementations of mmul and conv2d. These were created to experiment with the performance of HVX intrinsics in TVM, and each test...
The main goals of this PR are the following: 1. E2E compilation should work without QNN canonicalization on the Hexagon target. 2. Enable int8 -> int8 computation for dense/conv2d operations. What...
Hi all, I was trying to convert an ONNX model to TVM, but the conversion fails. The error states: > tvm.error.OpNotImplemented: The following operators are not supported for frontend ONNX:...
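When debugging this kind of error, it can help to list the operator types the model actually uses and compare them against those named in the `OpNotImplemented` message; a small sketch, assuming a hypothetical `model.onnx`:

```python
import onnx

# Collect the distinct ONNX operator types appearing in the graph, so they can
# be checked against the ops reported as unsupported by the TVM ONNX frontend.
model = onnx.load("model.onnx")
op_types = sorted({node.op_type for node in model.graph.node})
print(op_types)
```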