Not able to integrate kwarg into gvapython callback
Hi!
I am building this pipeline:
gst-launch-1.0 \
  rtspsrc location="$INPUT_PATH" protocols=tcp ! \
  rtph264depay ! h264parse ! avdec_h264 ! \
  videoconvert ! capsfilter caps="video/x-raw,format=BGRx" ! \
  gvadetect model=${MODEL} model_proc=${MODEL_PROC} device=CPU ! \
  gvafpscounter interval=1 ! \
  gvapython module=/home/dlstreamer/config/scripts/zone_detection.py class=FrameProcessor kwarg="{\"zone_config\":\"$ZONE_JSON\"}" ! \
  gvametaconvert ! \
  gvametapublish file-format=json-lines file-path=$METADATA_FILE ! \
  videoconvert ! x264enc tune=zerolatency ! \
  rtspclientsink location=$OUTPUT_PATH
Here, I need to access $ZONE_JSON, which is processed as follows in the backend:
'ZONE_JSON': json.dumps(zone_json)
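To illustrate why this is fragile: json.dumps produces a string full of literal double quotes, and when the shell expands $ZONE_JSON inside the kwarg value, those quotes collide with the outer quoting. A rough sketch of the failure mode (the zone config here is a made-up minimal stand-in, and the quote-stripping is only an approximation of what the shell does, not the exact transformation):

```python
import json

# Hypothetical minimal zone config, standing in for the real $ZONE_JSON.
zone_json = {"annotations": [{"attributes": {"occluded": False}}]}

# What the backend exports: a valid JSON string with literal double quotes.
env_value = json.dumps(zone_json)
print(env_value)   # {"annotations": [{"attributes": {"occluded": false}}]}

# Rough illustration: if the shell consumes the inner quotes during
# expansion, gvapython receives bare tokens instead of JSON.
mangled = env_value.replace('"', '')
print(mangled)     # {annotations: [{attributes: {occluded: false}}]}

try:
    json.loads(mangled)
except json.JSONDecodeError as err:
    # The bare tokens ("attributes", "occluded", "false") surface in the
    # parse error, which matches the keys reported below.
    print("kwarg would fail to parse:", err)
```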
In my gvapython function:
import json

from gstgva import VideoFrame  # DL Streamer Python bindings

class FrameProcessor:
    def __init__(self, **kwargs):
        self.zone_config = kwargs.get("zone_config", "{}")
        if isinstance(self.zone_config, str):
            self.zone_config = json.loads(self.zone_config)

    def process_frame(self, frame: VideoFrame):
        """Main function to process and visualize each frame."""
        try:
            zone_json = self.zone_config
            # ... and further processing ...
I have been trying to troubleshoot it, coming up with various errors, but the current one is this:
The elements inside this (attributes, occluded, false) are actually keys in the zone_json file. Can you help me troubleshoot this? I can't find documentation on passing a JSON string into the pipeline via gvapython's kwarg, so I haven't been able to resolve it.
Thank you so much!
A slightly different escaping is required.
I found an example here:
https://github.com/dlstreamer/dlstreamer/blob/86ad90cfc0adfc3c8dbee63e3005a828c2a0e772/samples/gstreamer/gst_launch/gvapython/face_detection_and_classification/face_detection_and_classification.sh#L69
... RIPT3 class=AgeLogger function=log_age kwarg={\\"log_file_path\\":\\"/tmp/age_log.txt\\"} ! $SI...
Hi! Thank you so much for replying.
Tried this:
gvapython module=/home/dlstreamer/config/scripts/zone_detection.py class=FrameProcessor kwarg={\\"zone_config\\":\\"$ZONE_JSON\\"} ! \
The same issue is showing up (see attached screenshot in my previous comment).
Initially you wrote
The elements inside this (attributes, occluded, false) are actually keys in the zone_json file
Looks like the content of that "$ZONE_JSON" would require escaping, too. For a basic test, could you manually modify the content of "$ZONE_JSON"? Could you do it automatically in a script, or use something like base64?
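The base64 route can be sketched like this (the zone config below is a shortened made-up example; in practice the encoding would happen in the backend before the value is exported, and the decoding inside FrameProcessor.__init__):

```python
import base64
import json

# Hypothetical shortened zone config for the test.
zone_json = {"categories": [{"id": 1, "name": "no_entry_zone"}],
             "annotations": [{"attributes": {"occluded": False}}]}

# Producer side (backend / launch script): base64 yields a token containing
# only [A-Za-z0-9+/=], so no shell escaping is needed inside kwarg.
encoded = base64.b64encode(json.dumps(zone_json).encode("utf-8")).decode("ascii")

# Consumer side (e.g. in FrameProcessor.__init__): decode and parse.
decoded = json.loads(base64.b64decode(encoded).decode("utf-8"))
assert decoded == zone_json
```

The encoded value survives any number of quoting layers unchanged, which sidesteps the escaping question entirely.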
Zone JSON: "{\"licenses\": [{\"name\": \"\", \"id\": 0, \"url\": \"\"}], \"info\": {\"contributor\": \"\", \"date_created\": \"2025-04-24T11:03:29.957Z\", \"description\": \"Zone Graph Points\", \"url\": \"\", \"version\": \"1.0\", \"year\": \"2025\"}, \"categories\": [{\"id\": 1, \"name\": \"no_entry_zone\", \"supercategory\": \"\"}], \"images\": [{\"id\": 1, \"width\": 1280, \"height\": 720, \"file_name\": \"video_frame.png\", \"license\": 0, \"flickr_url\": \"\", \"coco_url\": \"\", \"date_captured\": 1745492609958}], \"annotations\": [{\"id\": 1, \"image_id\": 1, \"category_id\": 1, \"segmentation\": [[92, 70, 190, 66, 210, 97, 283, 205, 426, 352, 101, 354, 106, 222, 97, 129]], \"area\": 56219.5, \"bbox\": [92, 66, 334, 288], \"iscrowd\": 0, \"attributes\": {\"occluded\": false}}]}"
Escaping was already done when I retrieved it from the DB. I'll create a temp file and use the file path to fetch it inside the gvapython callback. Will keep you updated!
Wouldn't double-escaping be required?
(Would you need the whole JSON-string inside FrameProcessor or only some of the attributes?)
You're right, double-escaping should resolve this, since I need the whole JSON string. For now, I've found a workaround: I store the zone JSON files temporarily and pass the file path in the kwarg instead of the JSON string. I think this works better than keeping the whole stringified JSON in the Docker environment. What do you think?
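For reference, the temp-file workaround might look roughly like this; the kwarg name zone_config_path and the single-file setup are assumptions for illustration, not the actual implementation:

```python
import json
import tempfile

# Hypothetical zone config retrieved from the DB.
zone_json = {"categories": [{"id": 1, "name": "no_entry_zone"}]}

# Producer: write the config to a temp file before launching the pipeline;
# only the path (no quotes, no escaping) goes into the kwarg.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(zone_json, f)
    zone_path = f.name  # pass as kwarg={"zone_config_path": zone_path}

# Consumer: a FrameProcessor variant that loads the file instead of
# parsing a JSON string from the kwarg.
class FrameProcessor:
    def __init__(self, **kwargs):
        with open(kwargs["zone_config_path"]) as fh:
            self.zone_config = json.load(fh)

fp = FrameProcessor(zone_config_path=zone_path)
assert fp.zone_config == zone_json
```

A file path contains no characters that need shell escaping, so the kwarg stays trivial regardless of how complex the zone config grows.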
Try using a shortened version of the JSON-string with double-escaping - just to see if that is the root cause.
Providing a path to a (temp) file and letting FrameProcessor process it sounds OK... you mentioned
retrieved it from the DB
Could FrameProcessor retrieve it from the DB, too, or would that be an unwanted dependency (or introduce initial latency)?
Could the temporary file change often, or would there be concurrent gstreamer pipelines (or multiple gvapython) using different (temp)files?