model_server
How to postprocess data from the product-detection-100 OpenVINO model while using OVMS
Describe the bug
I followed the vehicle analysis pipeline demo below to set up a pipeline through OVMS using the OpenVINO product-detection-100 model: https://github.com/openvinotoolkit/model_server/blob/main/demos/vehicle_analysis_pipeline/python/vehicles_analysis_pipeline.py
I was able to preprocess the data and pass it to inference, but I am unable to postprocess the output. Can I get some help with postprocessing?
This is the output that I am receiving:
[outputs { key: "868" value { dtype: DT_FLOAT tensor_shape { dim { size: 1 } dim { size: 1 } dim { size: 200 } dim { size: 7 } } tensor_content: "\000\000\000\000\000\000\000@=\370\223=\314l\371=\334\334\322=^\177\215>~{\n?\000\000\000\000\000\000@@\346\247u?X\2429?\320Q\312=\366\333_?h\006\370>\000\000\000\000\000\0000A\332\277\177?\0366\267>\036\253\314=\346\216\035?\016\365\025?\000\000\000\000\000\0000A\323\215c=\346\3169?t\263\334=l/
?\373S\372>\000\000\000\000\000\000PAE\306u?\274\264\372=N\241\304=\031\014\216>\352A\n?\000\000\200\277\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\0`
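The tensor shape [1, 1, 200, 7] matches the standard OpenVINO SSD-style DetectionOutput layout, where each of the 200 rows is [image_id, label, confidence, x_min, y_min, x_max, y_max] with box coordinates normalized to [0, 1] and a row with image_id == -1 marking the end of valid detections. Below is a minimal decoding sketch under that assumption; the function name and the confidence threshold are illustrative, not from the demo:

```python
import numpy as np

def postprocess_detections(tensor_content, image_width, image_height,
                           conf_threshold=0.5):
    """Decode a [1, 1, N, 7] detection tensor into a list of boxes.

    tensor_content is the raw float32 byte string from the gRPC
    response (response.outputs["868"].tensor_content in this case).
    """
    # Flatten to N rows of 7 floats each.
    data = np.frombuffer(tensor_content, dtype=np.float32).reshape(-1, 7)
    boxes = []
    for image_id, label, conf, x_min, y_min, x_max, y_max in data:
        if image_id == -1:          # sentinel: no more detections
            break
        if conf < conf_threshold:   # drop low-confidence detections
            continue
        boxes.append({
            "label": int(label),
            "confidence": float(conf),
            # Scale normalized coordinates back to pixel space.
            "box": (int(x_min * image_width), int(y_min * image_height),
                    int(x_max * image_width), int(y_max * image_height)),
        })
    return boxes
```

If you are using the TensorFlow Serving client API, `tensorflow.make_ndarray(response.outputs["868"])` is an alternative way to obtain the NumPy array before applying the same row-wise decoding.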
To Reproduce
Steps to reproduce the behavior:
- Steps to prepare models repository '...'
- OVMS launch command '....'
- Client command (additionally client code if not using official client or demo) '....'
- See error
Expected behavior
A clear and concise description of what you expected to happen.
Logs
Logs from OVMS, ideally with --log_level DEBUG. Logs from the client.
Configuration
- OVMS version
- OVMS config.json file
- CPU, accelerator's versions if applicable
- Model repository directory structure
- Model or publicly available similar model that reproduces the issue
Additional context
Add any other context about the problem here.