
any interest in supporting oak-d/oak-d-lite camera?

Open silverhikari opened this issue 2 years ago • 8 comments

As stated above, do you have any interest in supporting the OAK-D line of spatial 3D cameras? With the Kickstarter that is currently running, they are now at the same price as the Leap controller, though that price will rise. The products use an open Python SDK called DepthAI.

silverhikari avatar Oct 16 '21 05:10 silverhikari

Those cameras look interesting, but OpenSeeFace is mainly concerned with inferring landmarks from RGB images, and putting together the final training dataset for the models was a lot of work. I'm unlikely to find the time or resources to put together anything remotely similar for depth cameras.

emilianavt avatar Oct 16 '21 09:10 emilianavt

@silverhikari theoretically, you can take the ONNX model and convert it to OpenVINO so it runs on the OAK-D.

I got an OAK-D Lite and will try to make it work with VSeeFace. If I don't forget, I'll let you know whether my experiment works out.

@emilianavt the OAK-Ds have RGB cameras too, so technically you only need to convert the model (there is a Python script for that) and interface with the camera instead of calling the CNN yourself.

Using the position data from the depth camera is more of a bonus (or, in my case, for hand tracking instead of a Leap Motion).

TheMasterofBlubb avatar Jan 16 '22 00:01 TheMasterofBlubb

I see, if they have RGB too, it should work!

emilianavt avatar Jan 16 '22 01:01 emilianavt

Yep, there is a 4K RGB camera (the middle one, usually), and the stereo cameras are also accessible individually as black-and-white cams (480p, IIRC). The more interesting option, though, is to run the NN on the camera itself, as it has an AI chip onboard, and then just grab the output data; hence the conversion to OpenVINO.

TheMasterofBlubb avatar Jan 16 '22 01:01 TheMasterofBlubb

By the way, do you by any chance have a layout of the OSC / VMC protocol that VSeeFace uses (the message names, so to speak)? I'm not very good with Japanese, and not having to create landmarks (if they aren't specifically needed) would help a lot.

TheMasterofBlubb avatar Jan 16 '22 02:01 TheMasterofBlubb

The VMC protocol only transmits blendshapes and bones. OpenSeeFace's face tracking data is transmitted using custom UDP packets. It's probably easiest to understand from the parser: https://github.com/emilianavt/OpenSeeFace/blob/master/Unity/OpenSee.cs#L137
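
For anyone poking at those packets from Python, a minimal sketch of reading the leading fields might look like this. The field names and header layout are my reading of the linked OpenSee.cs parser and should be double-checked against it; everything past the header (rotation, translation, landmark and confidence arrays, features) is left out here, and the default port of 11573 is an assumption from the tracker's usual configuration.

```python
import struct

# Hypothetical header interpretation; the authoritative layout is the
# OpenSee.cs parser linked above. Little-endian fields:
#   double  timestamp
#   int32   face id
#   float   camera width
#   float   camera height
#   float   right eye open
#   float   left eye open
HEADER = struct.Struct("<di4f")

def parse_header(packet: bytes) -> dict:
    """Parse just the leading fields of an OpenSeeFace tracking packet."""
    t, face_id, width, height, right_eye, left_eye = HEADER.unpack_from(packet, 0)
    return {
        "time": t,
        "id": face_id,
        "camera_resolution": (width, height),
        "eye_open": (right_eye, left_eye),
        # rotation, translation, 2D/3D landmarks, confidences and
        # features follow in the packet; see OpenSee.cs for the rest
    }
```

In practice you would receive the packet with `socket.recvfrom` on the tracker's UDP port and feed the raw bytes to `parse_header`.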

There is also some English language documentation on the VMC protocol here: https://protocol.vmc.info/english.html
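
As a rough illustration of what VMC messages look like on the wire (they are plain OSC), here is a minimal encoder sketch. The `/VMC/Ext/Blend/Val` and `/VMC/Ext/Blend/Apply` addresses are taken from the VMC documentation linked above; the encoder itself is a generic, stripped-down OSC serializer for strings and floats, not VSeeFace's actual implementation.

```python
import struct

def _osc_str(s: str) -> bytes:
    # OSC strings are null-terminated and padded to a multiple of 4 bytes
    b = s.encode("utf-8") + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *args) -> bytes:
    """Encode a minimal OSC message (string and float32 arguments only)."""
    tags = ","
    payload = b""
    for a in args:
        if isinstance(a, str):
            tags += "s"
            payload += _osc_str(a)
        else:
            tags += "f"
            payload += struct.pack(">f", float(a))  # OSC is big-endian
    return _osc_str(address) + _osc_str(tags) + payload

# e.g. set one blendshape value, then apply all pending values:
msg = osc_message("/VMC/Ext/Blend/Val", "Blink_L", 0.8)
apply_msg = osc_message("/VMC/Ext/Blend/Apply")
```

Sending these over UDP to the receiving application's VMC port is all the "protocol" amounts to at the transport level.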

emilianavt avatar Jan 16 '22 17:01 emilianavt

Oh, thank you for that link. I couldn't find it on the site, probably because Google takes you to the Japanese version.

Yep, I have found that parser; I'm currently trying to reverse engineer where the values come from and what they mean.

I'm absolutely not familiar with Python, so my small tool will be C# with some C++ (sadly, DepthAI only has Python and C++ APIs).

But I found some simple examples that include gaze and head tracking, so if a converted model of yours doesn't work out of the box, I will try to go with one of those and match the OpenSee protocol.


TheMasterofBlubb avatar Jan 16 '22 18:01 TheMasterofBlubb

If you are not familiar with python, the trickier part might be figuring out the decoding for the model's output. The current code for that is a bit dense and optimized:

https://github.com/emilianavt/OpenSeeFace/blob/baff2c0256bbed0927fe7b0eb8b183586e0714ec/model.py#L168-L178

In some very early versions, though, there should be a more readable function for decoding landmarks in tracker.py.

Edit: I found it:

https://github.com/emilianavt/OpenSeeFace/blob/0690bdd15e50de293085d6408b3ce5d30cfb60de/tracker.py#L105-L111

https://github.com/emilianavt/OpenSeeFace/blob/0690bdd15e50de293085d6408b3ce5d30cfb60de/tracker.py#L641-L660
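
For readers who don't want to untangle the optimized NumPy, the linked logic boils down to roughly the following sketch: take the argmax of each landmark's heatmap, then refine it with the matching offset channels. The channel ordering, the `logit` factor, and the exact pixel scaling here are assumptions based on the linked code and may differ slightly from the real decoder.

```python
import numpy as np

def logit(p, factor=16.0):
    # Inverse sigmoid with clamping, used to decode the offset channels
    p = np.clip(p, 1e-7, 1.0 - 1e-7)
    return np.log(p / (1.0 - p)) / factor

def decode_landmarks(output, res=224, grid=28, n_lms=66):
    """Decode a (3*n_lms, grid, grid) model output into (x, y, confidence).

    Channels 0..n_lms-1 are heatmaps; the next two groups of n_lms
    channels hold sub-cell x/y offsets squashed through a sigmoid.
    """
    lms = np.zeros((n_lms, 3), dtype=np.float32)
    cell = res / grid  # pixels per heatmap cell
    for i in range(n_lms):
        heatmap = output[i]
        y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)
        off_x = logit(output[n_lms + i, y, x])
        off_y = logit(output[2 * n_lms + i, y, x])
        lms[i] = ((x + 0.5 + off_x) * cell,
                  (y + 0.5 + off_y) * cell,
                  heatmap[y, x])  # peak value doubles as confidence
    return lms
```

The sigmoid-encoded offsets let the network place landmarks between heatmap cells, which is why a plain argmax alone is not enough.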

emilianavt avatar Jan 16 '22 19:01 emilianavt

@TheMasterofBlubb How did you get on with converting the model to OpenVINO and generating OpenSeeFace-compatible packets?

PheebeUK avatar Dec 27 '22 17:12 PheebeUK