Question: Convert Azure Kinect Joint Point to 3D pixel point
Hello, until now we have been using the `ToColorSpace` method from `IDepthDeviceCalibrationInfo` to get the different 2D pixel points of a body.
To investigate further what a person is doing, we would also like to get the depth information. Is there a way to convert the body points (e.g. `body.Joints[joint].Pose.Origin`) to 3D pixel points?
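For reference, a minimal sketch of what we currently do (the exact `ToColorSpace` overload and the `JointId` namespace are assumptions and may differ between \psi versions):

```csharp
using MathNet.Spatial.Euclidean;
using Microsoft.Azure.Kinect.BodyTracking;   // JointId (assumed namespace)
using Microsoft.Psi.AzureKinect;
using Microsoft.Psi.Calibration;

public static class JointProjection
{
    // Current approach: project a tracked joint into 2D color-image pixels.
    public static Point2D JointToColorPixel(
        IDepthDeviceCalibrationInfo calibration,
        AzureKinectBody body,
        JointId jointId)
    {
        // 3D joint position (a Point3D taken from the joint's pose).
        Point3D joint3D = body.Joints[jointId].Pose.Origin;

        // Assumed overload: ToColorSpace(Point3D) returning a Point2D pixel.
        return calibration.ToColorSpace(joint3D);
    }
}
```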
Many thanks in advance
Here are a couple different approaches I could think of:
- Get the color space pixel point as before, but then pass that to the `ProjectToCameraSpace(IDepthDeviceCalibrationInfo, Point2D, Shared<DepthImage>)` method in `CalibrationExtensions.cs` (see the first sketch below).
- Compute a `Line3D` from the camera position through the 3D joint position, and intersect it with the depth mesh using the `IntersectLineWithDepthMesh(ICameraIntrinsics depthIntrinsics, Line3D line, DepthImage depthImage)` method in `CalibrationExtensions.cs` (see the second sketch below).
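A minimal sketch of the first approach, assuming `ProjectToCameraSpace` returns a (possibly null) `Point3D` in camera space and that `ToColorSpace` takes a `Point3D` and returns a `Point2D`; check the exact signatures in your \psi version:

```csharp
using MathNet.Spatial.Euclidean;
using Microsoft.Psi;
using Microsoft.Psi.Calibration;
using Microsoft.Psi.Imaging;

public static class DepthLookup
{
    // Approach 1: reuse the 2D color pixel you already compute, then project
    // it back into 3D camera space using the measured depth at that pixel.
    public static Point3D? JointToCameraSpace(
        IDepthDeviceCalibrationInfo calibration,
        Point3D jointOrigin,            // e.g. body.Joints[jointId].Pose.Origin
        Shared<DepthImage> depthImage)
    {
        // 2D pixel in the color image (same call as before; signature assumed).
        Point2D colorPixel = calibration.ToColorSpace(jointOrigin);

        // 3D point in camera space backed by the depth image.
        return CalibrationExtensions.ProjectToCameraSpace(
            calibration, colorPixel, depthImage);
    }
}
```

And a minimal sketch of the second approach; the `DepthPose` and `DepthIntrinsics` property names on the calibration object are assumptions, and `IntersectLineWithDepthMesh` is assumed to return the intersection point (check how a miss is signalled, e.g. NaN coordinates, in your version):

```csharp
using MathNet.Spatial.Euclidean;
using Microsoft.Psi.Calibration;
using Microsoft.Psi.Imaging;

public static class DepthMeshLookup
{
    // Approach 2: cast a ray from the depth camera through the joint and
    // intersect it with the mesh implied by the depth image.
    public static Point3D? JointToDepthMeshPoint(
        IDepthDeviceCalibrationInfo calibration,
        Point3D jointOrigin,            // e.g. body.Joints[jointId].Pose.Origin
        DepthImage depthImage)
    {
        // Depth camera position and intrinsics (assumed property names).
        Point3D cameraPosition = calibration.DepthPose.Origin;
        ICameraIntrinsics depthIntrinsics = calibration.DepthIntrinsics;

        // Ray from the camera position through the 3D joint position.
        var line = new Line3D(cameraPosition, jointOrigin);

        return CalibrationExtensions.IntersectLineWithDepthMesh(
            depthIntrinsics, line, depthImage);
    }
}
```

In both cases the result is a 3D point backed by the measured depth rather than the skeleton estimate, which should give you the depth information you are after.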
Hopefully one of those can work for you!