YOLO-NAS-onnxruntime
This repo provides a C++ implementation of YOLO-NAS based on ONNXRuntime for performing real-time object detection. It supports float32/float16/int8 inference.
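For orientation, here is a minimal sketch of opening an ONNX model with the ONNX Runtime C++ API and inspecting its input, which is the kind of setup this repo builds on. The model path `yolo_nas_s.onnx` is a placeholder, this is not the repo's own detector code, and `GetInputNameAllocated` assumes a recent ONNX Runtime release (1.13+).

```cpp
// Hypothetical sketch: load an ONNX model and print the first input's name and shape.
#include <onnxruntime_cxx_api.h>
#include <iostream>

int main() {
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "yolo-nas");
    Ort::SessionOptions options;
    options.SetGraphOptimizationLevel(GraphOptimizationLevel::ORT_ENABLE_ALL);

    // Placeholder model path; on Windows the path argument is a wide string.
    Ort::Session session(env, "yolo_nas_s.onnx", options);

    Ort::AllocatorWithDefaultOptions allocator;
    auto inputName = session.GetInputNameAllocated(0, allocator);
    auto shape = session.GetInputTypeInfo(0).GetTensorTypeAndShapeInfo().GetShape();

    std::cout << "input: " << inputName.get() << " [";
    for (size_t i = 0; i < shape.size(); ++i)
        std::cout << shape[i] << (i + 1 < shape.size() ? ", " : "]\n");
    return 0;
}
```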
It seems like a really good project, but since I am a beginner in C++, I am not sure how to implement this. When I try the CLI, it shows...
I am encountering an error in detector.cpp at this line: `inputTensorValuesFp16.push_back(float32_to_float16(fp32));`. I can see that the declaration of inputTensorValuesFp16 is `std::vector inputTensorValuesFp16;`. Also, the function float32_to_float16 returns uint16_t: `static uint16_t...
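That compiler error usually means the vector's element type does not match what `float32_to_float16` returns. Below is a minimal, self-contained sketch of the pattern, assuming the destination vector should hold the raw `uint16_t` bit patterns; the conversion function here is a simplified stand-in (it truncates instead of rounding and flushes subnormals to zero), not the repo's actual implementation.

```cpp
// Hypothetical sketch of the fp16 preprocessing step, not the repo's code.
#include <cstdint>
#include <cstring>
#include <vector>

// Simplified float32 -> float16 bit conversion (truncating, no subnormals).
static uint16_t float32_to_float16(float fp32) {
    uint32_t bits;
    std::memcpy(&bits, &fp32, sizeof(bits));                       // reinterpret float bits
    uint16_t sign = static_cast<uint16_t>((bits >> 16) & 0x8000u);
    int32_t  exp  = static_cast<int32_t>((bits >> 23) & 0xFFu) - 127 + 15;
    uint32_t mant = bits & 0x7FFFFFu;
    if (exp <= 0)  return sign;                                    // underflow -> signed zero
    if (exp >= 31) return static_cast<uint16_t>(sign | 0x7C00u);   // overflow/inf
    return static_cast<uint16_t>(sign | (exp << 10) | (mant >> 13));
}

int main() {
    std::vector<float>    inputTensorValues{0.5f, 1.0f, -2.25f};
    std::vector<uint16_t> inputTensorValuesFp16;   // element type matches the uint16_t return value
    inputTensorValuesFp16.reserve(inputTensorValues.size());
    for (float fp32 : inputTensorValues)
        inputTensorValuesFp16.push_back(float32_to_float16(fp32)); // compiles: uint16_t into vector<uint16_t>
    return 0;
}
```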
There are only three choices in the "Convert PyTorch to ONNX" model parameter. Are other models available, such as the base YOLOv7/YOLOv8 models?
Hello, this works perfectly on the default COCO model but doesn't work on a custom one. Did you try this yourself? It does not work on a custom model with a checkpoint....
Can this run on a Jetson Nano? If yes, please point me to a tutorial for setting up the environment.
- Remove curl requirement
Hi! The README says to run ./demo, but I don't see that script anywhere, nor how to draw the bounding boxes on the images. If anyone can send some resources...
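For the bounding-box part of that question, here is a minimal OpenCV sketch of drawing detections on an image. The `Detection` struct and its field names are placeholders for illustration and are not necessarily the types this repo uses.

```cpp
// Hypothetical sketch: draw labeled detection boxes on an image with OpenCV.
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <string>
#include <vector>

struct Detection {        // placeholder result type
    cv::Rect box;         // x, y, width, height in pixels
    int classId;
    float confidence;
};

void drawDetections(cv::Mat& image,
                    const std::vector<Detection>& detections,
                    const std::vector<std::string>& classNames) {
    for (const auto& det : detections) {
        cv::rectangle(image, det.box, cv::Scalar(0, 255, 0), 2);
        std::string label = classNames[det.classId] + " " +
                            cv::format("%.2f", det.confidence);
        // Place the label just above the box, clamped to the top edge.
        int baseline = 0;
        cv::Size textSize = cv::getTextSize(label, cv::FONT_HERSHEY_SIMPLEX,
                                            0.5, 1, &baseline);
        int top = std::max(det.box.y, textSize.height);
        cv::putText(image, label, cv::Point(det.box.x, top - 2),
                    cv::FONT_HERSHEY_SIMPLEX, 0.5, cv::Scalar(0, 255, 0), 1);
    }
}
```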
`input_image.push_back(float32_to_float16(a));` error: No matching member function for call to 'push_back'