
Pixel to 3D Space

Open danielleamya opened this issue 2 years ago • 3 comments

Hi,

I am using the RealSense camera and this library for an application using blob tracking; currently I am able to get the XY position of the center of the blob in pixels. Is there a way, using this library, to convert 2D pixel points into 3D space coordinates in meters, with the camera as the origin? I would like to click on an object in the frame with my mouse and have a 3D coordinate returned that represents the distance from the camera in the X, Y and Z dimensions.

Thanks for your help! Danielle

danielleamya · Aug 07 '21 16:08

Yes, it is possible to get the distance at a specific point in the depth image. Have you looked at the documentation already? There is a section about measuring distance.
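
For reference, measuring the distance at a pixel looks roughly like this (a minimal sketch; the stream resolution and the sample pixel are just placeholders):

import ch.bildspur.realsense.*;

RealSenseCamera camera = new RealSenseCamera(this);

void setup()
{
  size(640, 480);

  // enable the depth stream and start the camera
  camera.enableDepthStream(640, 480);
  camera.start();
}

void draw()
{
  // read the current frames from the camera
  camera.readFrames();

  // distance (in meters) from the camera at pixel (320, 240)
  float distance = camera.getDistance(320, 240);
  println("Distance: " + distance + " m");
}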

cansik · Aug 15 '21 17:08

Yes, I have gone through the documentation and am aware of the ability to measure distance. However, if I understand correctly, this returns just a single float value reflecting the distance away from the camera in meters. What I need is a 3D array or vector with three floats (X, Y, Z) that represents the position of that point relative to the camera in meters. Have I missed this capability in the documentation? My apologies if that is so.

danielleamya · Aug 16 '21 18:08

Ok, maybe the PointCloud example is what you are looking for. There the depth frame is converted into a point cloud / list of vertices (Vertex[]).

But to be honest, I don't know which units the vertices are in. Maybe you will find more information in the Intel documentation, which mentions the rs2_deproject_pixel_to_point method to project pixels into 3D space using the camera intrinsics. This is not directly implemented in the API at the moment, but it could be accessed like this:

// import
import static org.bytedeco.librealsense2.global.realsense2.rs2_deproject_pixel_to_point;

// use the method
rs2_deproject_pixel_to_point(...)
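
Regarding the units: rs2_deproject_pixel_to_point is essentially just the pinhole camera model, so the output is in the same units as the depth value you pass in (meters if the depth comes from getDistance()). With the focal lengths fx, fy and the principal point ppx, ppy from the stream intrinsics: X = depth * (x - ppx) / fx, Y = depth * (y - ppy) / fy, Z = depth (lens distortion is ignored here; it is typically zero for the depth stream). A manual version could look roughly like this (untested; I am assuming the usual bytedeco-style accessors on rs2_intrinsics):

// manual pinhole deprojection (ignores lens distortion)
float[] deprojectManually(rs2_intrinsics intrinsics, float x, float y, float depth) {
  float px = (x - intrinsics.ppx()) / intrinsics.fx();
  float py = (y - intrinsics.ppy()) / intrinsics.fy();
  return new float[] { depth * px, depth * py, depth };
}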

cansik · Aug 17 '21 07:08

As there are more questions about this topic, I guess it makes sense to add it to the Processing API in the future. For now I have implemented an example sketch which shows how to use this method. I could not test it yet due to the M1's limitations.

import ch.bildspur.realsense.*;
import ch.bildspur.realsense.type.*;

import org.intel.rs.frame.DepthFrame;
import org.intel.rs.stream.VideoStreamProfile;

import org.bytedeco.librealsense2.rs2_intrinsics;

import org.bytedeco.javacpp.FloatPointer;
import org.intel.rs.types.Intrinsics;
import org.intel.rs.types.Pixel;
import org.intel.rs.types.Vertex;

import static org.bytedeco.librealsense2.global.realsense2.rs2_deproject_pixel_to_point;

RealSenseCamera camera = new RealSenseCamera(this);

void setup()
{
  size(640, 480);

  // enable depth stream
  camera.enableDepthStream(640, 480);

  // enable colorizer to display depth
  camera.enableColorizer(ColorScheme.Cold);

  camera.start();
}

void draw()
{
  background(0);

  // read frames
  camera.readFrames();

  // read raw depth frame
  DepthFrame depthFrame = camera.getFrames().getDepthFrame();

  // extract video stream profile and read intrinsics
  VideoStreamProfile profile = new VideoStreamProfile(depthFrame.getProfile().getInstance());
  rs2_intrinsics intrinsics = profile.getIntrinsics();

  // get depth at specific point
  int x = 200;
  int y = 150;
  float depth = camera.getDistance(x, y);

  // project point
  Vertex vertex = deprojectPixelToPoint(intrinsics, x, y, depth);

  PVector v = new PVector(vertex.getX(), vertex.getY(), vertex.getZ());
  print("Coordinates are: " + v);

  // show colorized depth image
  image(camera.getDepthImage(), 0, 0);
}

public static Vertex deprojectPixelToPoint(final rs2_intrinsics intrinsics, final int x, final int y, final float depth) {
  FloatPointer point = new FloatPointer(3);
  FloatPointer pixelPtr = new FloatPointer(2);
  pixelPtr.put(0, x);
  pixelPtr.put(1, y);
  rs2_deproject_pixel_to_point(point, intrinsics, pixelPtr, depth);

  return new Vertex(point.get(0), point.get(1), point.get(2));
}
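
Since the original question was about clicking on an object, the helper above could be hooked up to mouse input roughly like this (a sketch only; it assumes you keep the most recent intrinsics in a field, e.g. lastIntrinsics, updated at the end of draw()):

// assumed field, set at the end of draw(): lastIntrinsics = intrinsics;
rs2_intrinsics lastIntrinsics;

void mousePressed()
{
  if (lastIntrinsics == null)
    return;

  // depth (in meters) at the clicked pixel
  float depth = camera.getDistance(mouseX, mouseY);

  // deproject the clicked pixel into camera space (meters, camera as origin)
  Vertex vertex = deprojectPixelToPoint(lastIntrinsics, mouseX, mouseY, depth);
  println("Clicked point: " + vertex.getX() + " " + vertex.getY() + " " + vertex.getZ());
}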

cansik · Sep 21 '22 08:09

The method is now implemented in version 2.4.3.

cansik · Sep 21 '22 12:09