edgeai-benchmark
About param.yaml (how to produce it)
Excuse me,
can you explain in detail how I can produce the .yaml file for my model? Or which scripts do I need to learn?
Thanks.
For compiling your own models and producing compiled output, you can use the following script: https://github.com/TexasInstruments/edgeai-benchmark/blob/master/run_custom_pc.sh which invokes this: https://github.com/TexasInstruments/edgeai-benchmark/blob/master/scripts/benchmark_custom.py
See the above script and modify it to compile your own model.
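As a rough illustration, a custom entry in pipeline_configs has approximately the shape below. This is a simplified, hypothetical sketch using plain placeholders; the real entries in scripts/benchmark_custom.py build these fields with the edgeai_benchmark helper objects (preprocess transforms, ONNX/TFLite session wrappers, postprocess transforms), so follow the existing entries in that script for the exact calls. The model path, dataset placeholders, and metric values here are not from the repo.

```python
# Simplified, hypothetical sketch of one pipeline_configs entry.
# The real entries in scripts/benchmark_custom.py use edgeai_benchmark
# helper objects instead of the plain strings/dicts shown here.
pipeline_configs = {
    'my-custom-det': dict(
        task_type='detection',                       # classification / detection / segmentation
        calibration_dataset='<calibration dataset>', # dataset object used during import/calibration
        input_dataset='<validation dataset>',        # dataset object used to measure accuracy
        preprocess='<preprocess transform>',         # e.g. resize/normalize to the model input size
        session=dict(                                # wraps the runtime (ONNXRT / TFLite / TVM)
            model_path='./work_dirs/models/my_model.onnx',  # placeholder path to your model
            runtime_options={},                      # options passed to TIDL offload
        ),
        postprocess='<postprocess transform>',       # e.g. detection box decode + NMS
        model_info=dict(metric_reference={'accuracy_ap[.5:.95]%': None}),  # placeholder
    ),
}
```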
Hello, sorry to bother you again. When I compile my model trained on my own dataset, an error occurs like this:
And my pipeline_configs is defined as follows:
Can you help me resolve this issue? I look forward to your reply!
Thanks very much.
Sorry, the ‘不是目录’ in the first picture means ‘Not a directory’!
The assertion error seems to be saying that: dataset_selection is set, but dataset_category is not defined in pipeline_config:
Is this the error that you are referring to?
Wondering why dataset_selection is set. Have you set dataset_selection in settings_base.yaml by any chance?
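For reference, the consistency that the assertion enforces is roughly the following. This is a minimal, hypothetical illustration based only on the error message, not the actual edgeai-benchmark code: either leave dataset_selection at its default (null) in settings_base.yaml, or give each pipeline_configs entry a dataset_category.

```python
# Minimal mock of the failing check, based only on the error message.
settings = dict(dataset_selection='coco')   # e.g. what was set in settings_base.yaml

pipeline_config = dict(
    task_type='detection',
    dataset_category='coco',                # <-- adding this key satisfies the assertion
)

if settings['dataset_selection'] is not None:
    assert 'dataset_category' in pipeline_config, \
        'dataset_category must be defined in pipeline_config when dataset_selection is set'
print('pipeline_config is consistent with dataset_selection')
```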
Hmm, I changed dataset_selection to coco; the following is my configuration.
Thanks for your reply!
If you set dataset_selection to default value (null), are you able to run the script?
Yeah, I tried what you suggested and the error no longer occurs, but I don't know why the compile time is so long. Is there something I can set to reduce the compile time? Looking forward to your reply!
Thanks very much!
Thanks for your help. I have completed the compile process, but I have some questions that need your help:
- What do 'run_import' and 'run_inference' mean, and how should I set them (True or False)?
- What is the function of the 'num_frames' and 'calibration_frames' parameters, and how should I set them? I guess they influence the accuracy of the deployed model?
I look forward to your reply! Thanks very much!
The calibration_... parameters influence the import time and accuracy. Reducing these may reduce accuracy for some models. 25, 25 may give good results, but for quick experimentation you can use 10, 10 as you have shown above. num_frames determines the number of frames used for inference.
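To make that concrete, these are the kinds of values involved, shown here as a Python dict. The key names follow my reading of settings_base.yaml, the run_import/run_inference meanings are my understanding (assumptions, not quoted documentation), and the values are examples rather than recommendations.

```python
# Example values only; key names follow my reading of settings_base.yaml.
settings_overrides = dict(
    calibration_frames=25,      # frames used for quantization calibration during import
    calibration_iterations=25,  # calibration iterations (advanced calibration), if enabled
    num_frames=100,             # frames run during inference to measure accuracy
    run_import=True,            # assumed meaning: run the import/compile (calibration) step
    run_inference=True,         # assumed meaning: run inference with the compiled artifacts
)
```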
Thanks very much for your guidance. I have tried this model on the TDA4 board and found the accuracy is quite low, so I will change calibration_frames to 25 or 50 and verify it again.
But I have another question about the TDA4VM demo shown in the TDA4VM-SK SDK. If you know the answer, please explain it to me; if not, I still appreciate your help very much.
When I change the input resolution (width and height) from 640x640 to 1920x1080 [pic_1], to my surprise the FPS stays almost the same, going from 16.65 to 18.65 [pic_2]. My model is yolov5l.
@shyam-j Can you take a look at this?
Hi @Onehundred0906
Here the frame rate is capped by the DL inference time (44 ms, so FPS = 1000/44 ≈ 22).
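The arithmetic, spelled out (a trivial sketch; the variable names are just for illustration):

```python
# Frame-rate cap implied by the reported DL inference time alone.
dl_inference_ms = 44.0              # DL inference time per frame from the demo stats
max_fps = 1000.0 / dl_inference_ms  # ~22.7 FPS upper bound
print(f'FPS cap ~ {max_fps:.1f}')   # the observed 16.65-18.65 FPS sits under this cap
```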
Thanks for your reply