Sourab Maity
Thanks for your response. Today I removed those flags and everything; now I'm waiting 48 hours to check whether this worked or not. `pattern=2` was also used one time, but that also...
After removing the flag it's working. Thank you for your support.
I used

```python
import os
os.environ['CUDA_LAUNCH_BLOCKING'] = '1'
```

and in the thread function I added `model = YOLO('models/best_mcb_29_6_24_640.onnx', task="detect")`, so now it loads the YOLO model every time to ensure thread...
I can't share my code, but here is the sharing logic:

```python
def frameInf(frame):
    model = YOLO('models/best_mcb_29_6_24_640.onnx', task="detect")
    model2 = YOLO('models/best_mcb_21_6_24_640.onnx', task="detect")
    results = model.predict(frame, conf=0.5, iou=0.6, imgsz=640, verbose=False, device=[0])
    results_ =...
```
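Re-creating both models on every call is expensive; a common alternative is to load them once per thread with `threading.local()`. A minimal sketch of that pattern, where `DummyModel` is a hypothetical stand-in for the real `YOLO(...)` constructor:

```python
import threading

class DummyModel:
    """Hypothetical stand-in for YOLO('models/....onnx', task='detect')."""
    def __init__(self, path):
        self.path = path

    def predict(self, frame):
        # Placeholder for the real inference call.
        return f"result for {frame} from {self.path}"

_tls = threading.local()

def get_model():
    # Load the model the first time this thread asks for it, then reuse it.
    if not hasattr(_tls, "model"):
        _tls.model = DummyModel("models/model.onnx")  # illustrative path
    return _tls.model

def frame_inf(frame):
    model = get_model()  # per-thread instance, not per-frame
    return model.predict(frame)
```

Each thread gets its own model instance (thread safety), but the load cost is paid once per thread instead of once per frame.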
@Y-T-G, to ensure thread safety, is that not required? And how do I solve that ONNX GPU usage issue in the thread?
I don't want to drop frames. My cam sends 30 fps and the total inference time, including some other calculations, is 0.06 s, so it allows 15 to 16 frames, but it...
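The arithmetic above can be made explicit (a rough sketch using the 30 fps and 0.06 s figures from this post): a single sequential worker tops out below the camera rate, so unprocessed frames accumulate if none are dropped.

```python
fps_in = 30.0    # camera frame rate (from the post)
t_proc = 0.06    # per-frame time: inference + other calculations, seconds

throughput = 1.0 / t_proc               # frames/s one worker can handle (~16.7)
backlog_per_sec = fps_in - throughput   # frames/s piling up if nothing is dropped

print(round(throughput, 1), round(backlog_per_sec, 1))  # 16.7 13.3
```

So at 30 fps in and ~16.7 fps out, roughly 13 frames per second back up, which is why the queue (or latency) grows without bound unless frames are skipped or processed in parallel.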
Say the 1st frame takes 10 s; then if the 2nd frame enters the thread within 1 s, that one also consumes 10 s, but I will receive the 2 frames in...
1 frame may consume 10% of the GPU, but the remaining 90% is left for other use; with threading we can use that through parallel processing.
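A minimal sketch of that idea using `concurrent.futures.ThreadPoolExecutor`, with a dummy `infer()` standing in for the GPU call (the function name and numbers are illustrative, not from the original code):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def infer(frame):
    # Stand-in for GPU inference. A real ONNX Runtime / YOLO call spends
    # most of its time outside the GIL while the GPU works, which is what
    # lets several threads overlap their inference calls.
    time.sleep(0.01)
    return frame * 2

frames = list(range(8))
with ThreadPoolExecutor(max_workers=4) as ex:
    # ex.map preserves input order, so results line up with frames.
    results = list(ex.map(infer, frames))

print(results)  # [0, 2, 4, 6, 8, 10, 12, 14]
```

Note this only helps if the per-frame GPU work truly leaves headroom; if one inference already saturates the GPU, extra threads just queue behind each other.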
But for batching, if I wait for 5 frames, then I also need to wait in the display area. It looks like it is sticking on one frame and then, instantly after...
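That display stutter can be put in numbers (a rough sketch reusing the 30 fps figure from earlier in the thread): before a batch can be inferred and shown, the full batch must first be collected from the camera.

```python
fps = 30.0        # camera frame rate (from the thread)
batch_size = 5    # frames collected before one batched inference

# Extra display latency just from waiting for the batch to fill:
collect_delay = batch_size / fps
print(round(collect_delay, 3))  # 0.167
```

So batching by 5 adds roughly 0.17 s of latency before anything appears, and then all 5 results arrive at once, which matches the "stuck, then jumps" behavior described above.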