LiveScan3D
Skeleton handling
It would be very useful to include skeleton handling. This would allow for:
- segmenting people inside the merged point cloud,
- merging multiple skeletons to avoid occlusions,
- calibrating the rig using skeleton instead of marker data (no need for marker printing, a user simply walks through the scene and the server infers relative sensor positions based on skeleton data).
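For the calibration idea above, one standard way to infer the relative pose of two sensors from matched joint observations is the Kabsch algorithm (rigid alignment of two point sets). Below is a minimal sketch, not LiveScan3D's actual implementation; the function name and interface are illustrative:

```python
# Sketch: estimate the rigid transform between two sensors from the same
# skeleton joints observed by both. Illustrative, not LiveScan3D code.
import numpy as np

def estimate_relative_pose(joints_a, joints_b):
    """Find rotation R and translation t such that R @ a_i + t ~= b_i.

    joints_a, joints_b: (N, 3) arrays of the same joints, expressed in
    the coordinate frames of sensor A and sensor B respectively.
    """
    a = np.asarray(joints_a, dtype=float)
    b = np.asarray(joints_b, dtype=float)
    ca, cb = a.mean(axis=0), b.mean(axis=0)   # centroids
    H = (a - ca).T @ (b - cb)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t
```

In practice you would collect many joint correspondences as the user walks through the scene and reject poorly tracked joints before solving, since single-frame Kinect joint positions are noisy.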
Skeletons are now partially implemented, it is possible to:
- segment people inside the merged point cloud,
- display unmerged skeletons in the live view window.
I am currently a bit busy working on other things so I am not pushing this forward. If anyone is interested in having more skeleton functionality, let me know what you need and I'll see what can be done.
Hi,
Well, what I would be really interested in is tracking multiple people in a room. Just the skeletons would be fine, but especially also what they are looking at. I don't know if it is already possible to extract that data?
Hi,
The Kinect v2 SDK does provide orientations for all of the joints, including the head. I don't know how accurate those measurements are, however. The other thing is that the orientations are currently not transmitted to the server.
The only skeleton data that is transmitted is the location of each joint and its state (whether the joint is tracked or not). To start working with joint orientations, you would have to add them to the data that is transmitted to the server, which should be fairly easy.
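To illustrate what "adding orientations to the transmitted data" could look like, here is a hedged sketch of a per-joint payload that carries a tracking state, a position, and an orientation quaternion. The field layout, format string, and function names are my own illustration, not LiveScan3D's actual wire format:

```python
# Sketch of a per-joint binary payload extended with an orientation
# quaternion. Layout is illustrative, not the LiveScan3D protocol.
import struct

# little-endian: int32 tracking state, 3 floats position (x, y, z),
# 4 floats orientation quaternion (w, x, y, z)
JOINT_FMT = "<i3f4f"
JOINT_SIZE = struct.calcsize(JOINT_FMT)  # bytes per joint

def pack_joint(state, position, quaternion):
    """Serialize one joint's state, position, and orientation."""
    return struct.pack(JOINT_FMT, state, *position, *quaternion)

def unpack_joint(buf):
    """Deserialize one joint back into (state, position, quaternion)."""
    vals = struct.unpack(JOINT_FMT, buf)
    return vals[0], vals[1:4], vals[4:8]
```

On the client side the equivalent change would be appending the four quaternion components per joint to whatever the existing serializer writes, and mirroring that on the server's reader.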
Marek