[Bug] Building errors for hexagon_launcher
When building hexagon_launcher following the guide at https://github.com/apache/tvm/tree/main/apps/hexagon_launcher, there are some errors:
tvm/apps/hexagon_launcher/cmake/hexagon/../../../../include/tvm/runtime/container/array.h:661:76: error: no member named 'invoke_result_t' in namespace 'std'
template <typename F, typename = std::enable_if_t<std::is_same_v<T, std::invoke_result_t<F, T>>>>
~~~~~^
tvm/apps/hexagon_launcher/cmake/hexagon/../../../../include/tvm/runtime/container/array.h:661:92: error: 'F' does not refer to a value
template <typename F, typename = std::enable_if_t<std::is_same_v<T, std::invoke_result_t<F, T>>>>
^
tvm/apps/hexagon_launcher/cmake/hexagon/../../../../include/tvm/runtime/container/array.h:661:22: note: declared here
template <typename F, typename = std::enable_if_t<std::is_same_v<T, std::invoke_result_t<F, T>>>>
^
tvm/apps/hexagon_launcher/cmake/hexagon/../../../../include/tvm/runtime/container/array.h:661:99: error: expected member name or ';' after declaration specifiers
template <typename F, typename = std::enable_if_t<std::is_same_v<T, std::invoke_result_t<F, T>>>>
^
tvm/apps/hexagon_launcher/cmake/hexagon/../../../../include/tvm/runtime/container/array.h:784:43: error: no template named 'invoke_result_t' in namespace 'std'
template <typename F, typename U = std::invoke_result_t<F, T>>
~~~~~^
tvm/apps/hexagon_launcher/cmake/hexagon/../../../../include/tvm/runtime/container/array.h:792:47: error: no template named 'is_same_v' in namespace 'std'; did you mean 'is_same'?
The Hexagon SDK version is 4.5.
@quic-sanirudh @abhikran-quic @kparzysz-quic @sdalvi-quic
Solved by adding the CMake option: -DCMAKE_CXX_STANDARD=17.
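For reference, a sketch of the configure step with that option added. The bracketed part is a placeholder for the other options the hexagon_launcher guide already specifies; only the extra flag is new here. std::invoke_result_t and std::is_same_v were introduced in C++17, which is why forcing the standard fixes the errors above.

```shell
# Add -DCMAKE_CXX_STANDARD=17 to the cmake invocation from the
# hexagon_launcher guide; the other options from the guide stay unchanged.
cmake -DCMAKE_CXX_STANDARD=17 [other options from the guide] ..
```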
Thanks for the issue. If you're interested, please feel free to send a PR to update the docs so that it's helpful to others.
OK, I will add this to the README.
However, there are some other questions:
- Can we import models from other frameworks, such as ONNX?
- Can we import a float32 model and use AIMET quantization encoding information?
- Can we use QNN as the runtime via BYOC?
- After importing the InceptionV4 TFLite model and then trying to build it as:
mod, params = relay.frontend.from_tflite(tflite_model)
target = tvm.target.hexagon('v66', hvx=0)
with tvm.transform.PassContext(opt_level=3):
lib = relay.build(mod, tvm.target.Target(target, host=target), params=params, mod_name="default")
there is an error: LLVM ERROR: Do not know how to split the result of this operator!
- Importing ONNX models through the ONNX importer in Relay is supported. There are some examples in the Hexagon contrib tests you can refer to.
- No, AIMET quantization is not supported in TVM.
- No, we don't support QNN through BYOC.
- That sounds like an error in LLVM lowering, which needs to be fixed in LLVM. Please post a separate issue with steps to reproduce and we can try to fix it.
Let me know if it's okay to close this issue as you figured out the fix.
OK, I will add this to the README file.