Supernovae
> Yes, it has been available since I installed DeepStream. Must it be tested on that sample, or can I test on another .mp4 file? The one that is already present works with...
> So that means when I am testing the C++ implementation, I should run `./deepstream-infer-tensor-meta-app -t infer /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264` instead of `./deepstream-infer-tensor-meta-app -t infer /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.mp4`, right?
You need to create a .pkl file with all the facial embeddings (from one or more face images) of the same person, and then compare those embeddings with the...
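A minimal sketch of that workflow, assuming the embeddings have already been extracted by the face recognition model (the function names, file path, and similarity threshold here are hypothetical, not from the repo):

```python
import pickle
import numpy as np

def save_embeddings(embeddings, path="person_a.pkl"):
    # Persist one or more embeddings for a single identity as a .pkl file.
    with open(path, "wb") as f:
        pickle.dump(np.asarray(embeddings, dtype=np.float32), f)

def load_embeddings(path="person_a.pkl"):
    # Load the stored embeddings back as a 2-D array (n_faces, dim).
    with open(path, "rb") as f:
        return pickle.load(f)

def cosine_similarity(a, b):
    # Standard cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def matches(query, stored, threshold=0.5):
    # Compare a query embedding against every stored embedding for this
    # person; declare a match if the best similarity clears the threshold.
    # The threshold value is an assumption and should be tuned per model.
    best = max(cosine_similarity(query, e) for e in stored)
    return best >= threshold
```

With multiple images per ID, each new face is compared against every stored embedding, so pose and lighting variation in the enrollment set directly improves recall.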
> hi, any update on adding this feature to the repo? > > btw I think the accuracy is quite good when testing with a single image per ID in...
> Hi, I'm using this repo from a laptop with TensorRT (the TensorRT 20.03 docker image) and also from a Jetson. The code works without problems on the Jetson, but the detections...
> Hi again, I've seen in the NOTES section that the .uff and .engine files are GPU-specific. I've tried to regenerate everything from step 3 (as you say in the NOTES section) but...
> @shubham-shahh, > > Txs again for this. The code is looking fine now although I've requested a few formatting changes. > > Could you also squash some of the...
Hi, @rmackay9, are there any more changes to complete this PR? thanks
As suggested by @hendjoshsr71 [here](https://github.com/ArduPilot/ardupilot/pull/21343), I added the yaw and yaw-rate fields in GUIP.
> LGTM. > > Would be nice if you showed some test evidence of the output from the log and the mavlink message. Sure, I can share logs and the...