edgeai-modelzoo
Help with postprocessing a depth estimation model
Hi,
Is there any example code for postprocessing a depth estimation model (FastDepth) on the TDA4VM? I have compiled the model artifacts, and then I realized there is only support for classification, segmentation, and object detection. I looked into this repo for adding custom postprocessing, but it focused on drawing boundaries over the image. I want to get the depth map of the image.
Are you asking about adding this postprocessing for depth estimation in edgeai-gst-apps in the SDK? What is the exact postprocessing needed? You can modify the Python postprocessing code to add anything that you need. CC: @shyam-j
Yes, I'm asking about edgeai-gst-apps in the SDK. And I want the depth map of the image as output.
Hi,
You can go through this fork of edgeai-gst-apps: https://github.com/TexasInstruments/edgeai-gst-apps-human-pose
Here we have explained in detail how to bring in your own custom model and add post-processing for it.
Please go through the README and this commit:
https://github.com/TexasInstruments/edgeai-gst-apps/commit/284cbedfc3c949a51d71ad6a937cfbfe389ad145#diff-33663c40365a3fa2e5c918a89e31ec89b83dc4e2136ca4b02102ca3583fbc95a
If you're asking about support for depth estimation post-processing, it is not in the SDK by default. You will have to look at the output tensors of the model and write custom post-processing code based on those tensors.
I assume that the depth estimation results will be similar to semantic segmentation, i.e., each pixel is assigned a value. @mathmanu can comment better on it. If this is the case, then you can take some references from the sem-seg post-process.
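Since the output is expected to be per-pixel (like semantic segmentation, but with a continuous depth value instead of a class index), a minimal sketch of such a custom post-process could normalize the raw depth tensor into a viewable 8-bit depth map. This is not SDK code: the function name, the assumed `(1, 1, H, W)` output layout, and the dummy tensor below are all illustrative assumptions, so verify them against your model's actual output tensors before reusing this.

```python
import numpy as np

def depth_tensor_to_map(depth):
    """Convert a raw depth tensor into an 8-bit depth map for display.

    `depth` is assumed to have shape (1, 1, H, W) or (H, W); the exact
    layout depends on the model, so inspect your compiled artifacts'
    output tensors first.
    """
    depth = np.squeeze(np.asarray(depth, dtype=np.float32))  # -> (H, W)
    d_min, d_max = depth.min(), depth.max()
    # Normalize to [0, 1]; guard against a constant-depth frame.
    if d_max - d_min > 1e-6:
        norm = (depth - d_min) / (d_max - d_min)
    else:
        norm = np.zeros_like(depth)
    # Scale to [0, 255] so the map can be shown as a grayscale image
    # (or passed through a colormap for visualization).
    return (norm * 255.0).astype(np.uint8)

# Dummy (1, 1, 4, 4) tensor standing in for the model output.
dummy = np.linspace(0.5, 5.0, 16, dtype=np.float32).reshape(1, 1, 4, 4)
depth_map = depth_tensor_to_map(dummy)
print(depth_map.shape, depth_map.min(), depth_map.max())  # (4, 4) 0 255
```

In the gst-apps pipeline you would call something like this from the postprocessing step and either display the grayscale map directly or blend a colorized version over the input frame, similar to how the sem-seg overlay is drawn.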
Hi @abhaychirania2411 , Thanks for the input. I will look into it.