
How can I get RealWorldPoints?

iamlegolas opened this issue 8 years ago · 7 comments

"So given real world PVector realWorldPoint, the projected coordinate is accessible via:

PVector projectedPoint = kpt.convertKinectToProjector(realWorldPoint);"

Referring to this, what do you mean exactly when you say "realWorldPoint"? Is it simply a 3D point out of the Kinect depth stream, or something else? How do I get such points? Also, if possible, could you tell me how to get such a 3D point in Python?

@genekogan

@2075 @Kulbhushan-Chand @dattasaurabh82

iamlegolas commented Dec 05 '16 13:12

realWorldPoint is a sampled point from the depth map given by the Kinect, for example a point sampled from the mesh of a person found in the Kinect's view. The example programs show different ways of obtaining such points, and there are multiple approaches depending on what you are trying to do.

This library is written for SimpleOpenNI in Java; doing it in Python is beyond the scope of this repository.
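For illustration (not from the toolkit itself), a minimal Processing/SimpleOpenNI sketch that prints the real-world point under the mouse cursor in the depth image; it assumes only a connected Kinect and uses plain SimpleOpenNI calls, independent of any calibration:

// minimal sketch, assuming a connected Kinect and SimpleOpenNI only;
// prints the real-world (mm) point under the mouse cursor in the depth image
import SimpleOpenNI.*;

SimpleOpenNI kinect;

void setup() {
  size(640, 480);
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
}

void draw() {
  kinect.update();
  image(kinect.depthImage(), 0, 0);

  // depthMapRealWorld() returns one PVector (x, y, z in mm) per depth pixel
  PVector[] depthMap = kinect.depthMapRealWorld();
  int x = constrain(mouseX, 0, kinect.depthWidth() - 1);
  int y = constrain(mouseY, 0, kinect.depthHeight() - 1);
  PVector realWorldPoint = depthMap[y * kinect.depthWidth() + x];

  // z == 0 means the Kinect has no depth reading at that pixel
  if (realWorldPoint.z > 0) {
    println("real-world point: " + realWorldPoint);
  }
}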

genekogan commented Dec 08 '16 19:12

@genekogan

PVector realWorldPoint = kpt.getDepthMapAt(startX, startY);
PVector projectedPointUno = kpt.convertKinectToProjector(realWorldPoint);

startX and startY are X and Y coordinates of the Kinect image. projectedPointUno should be the same point from the Kinect shown on the projector, but it doesn't seem to be working like that. Can you help me figure out what's wrong? The calibration is pretty solid.

Please get back soon!

iamlegolas commented Feb 09 '17 13:02

See the calibration example; there is a method there called getDepthMapAt:

PVector getDepthMapAt(int x, int y) {
  // index into the flattened depth map: one PVector per depth pixel, row by row
  PVector dm = depthMap[kinect.depthWidth() * y + x];
  return new PVector(dm.x, dm.y, dm.z);
}

depthMap is an array of PVectors with the entire depth map:

SimpleOpenNI kinect;  // initialized with new SimpleOpenNI(this) and updated each frame
PVector[] depthMap = kinect.depthMapRealWorld();  // real-world (mm) coordinates for every depth pixel

Then convertKinectToProjector should work.
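As a rough usage sketch of how those pieces fit together in draw() (assuming kinect, depthMap, kpt and the calibration are already set up as in the examples; the z check is an extra precaution for pixels with no depth reading):

void draw() {
  kinect.update();
  depthMap = kinect.depthMapRealWorld();             // refresh the depth map every frame

  PVector realWorldPoint = getDepthMapAt(320, 240);  // sample the centre of the depth image
  if (realWorldPoint.z > 0) {                        // z == 0: no depth reading at that pixel
    PVector projectedPoint = kpt.convertKinectToProjector(realWorldPoint);
    println(projectedPoint);
  }
}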

genekogan commented Feb 12 '17 11:02

kpt.setDepthMapRealWorld(kinect.depthMapRealWorld());

PVector realWorldPoint = kpt.getDepthMapAt(startX, startY);
PVector projectedPointUno = kpt.convertKinectToProjector(realWorldPoint);

startX and startY are X and Y coordinates of the Kinect image. projectedPointUno should be the same point from the Kinect image shown on the projector, but it doesn't seem to be working like that. What's wrong? :S

I've done everything exactly as you've mentioned it and as it is in all the tutorials.

The x and y that we're passing to the getDepthMapAt() function are coordinates from the Kinect image, right?

@genekogan

iamlegolas commented Feb 12 '17 11:02

import controlP5.*;
import gab.opencv.*;
import SimpleOpenNI.*;
import KinectProjectorToolkit.*;

// For Kinect's RGB stream + the KPT:
OpenCV opencv;
SimpleOpenNI context;
KinectProjectorToolkit kpt;

PImage currKinectFrameRGB;
int startX, startY, endX, endY;

void setup() {
  size(100, 100, P2D);

  // Setting up the Kinect:
  context = new SimpleOpenNI(this);
  if (!context.isInit()) {
    println("Can't initialize SimpleOpenNI, camera not connected properly.");
    exit();
    return;
  }
  context.setMirror(false);
  context.enableDepth();
  context.enableRGB();
  context.alternativeViewPointDepthToImage();

  opencv = new OpenCV(this, context.depthWidth(), context.depthHeight()); // What's this for?

  // Setting up the KPT:
  kpt = new KinectProjectorToolkit(this, context.depthWidth(), context.depthHeight());
  kpt.loadCalibration("calibration.txt");
  kpt.setContourSmoothness(4);
}

void draw() {
  context.update();
  kpt.setDepthMapRealWorld(context.depthMapRealWorld());

  PVector realWorldPoint = kpt.getDepthMapAt(207, 222);
  PVector projectedPointUno = kpt.convertKinectToProjector(realWorldPoint);
  realWorldPoint = kpt.getDepthMapAt(293, 312);
  PVector projectedPointDos = kpt.convertKinectToProjector(realWorldPoint);

  print("ProjPoint1: ");
  println(projectedPointUno);
  print("ProjPoint2: ");
  println(projectedPointDos);
}

@genekogan This is the very simple program that I'm trying to get running. I know the calibration is working because I've tested it with the CALIBRATION.pde file. Please take some time to have a look.

Regards

iamlegolas commented Feb 15 '17 13:02

@genekogan Please spare a few minutes of your time and read my previous comment. Thanks

iamlegolas commented Feb 20 '17 09:02

startX and startY in your example should be coordinates in the Kinect depth image (usually 0-640 in x and 0-480 in y, corresponding to the size of the depth image). Those two lines translate this to a projector coordinate, which is normalized between 0 and 1 (agnostic to screen size). You still need to multiply it by your projector width and projector height if you haven't done that already. Look at projectedPointUno: is it between 0 and 1 in both x and y? If so, try multiplying it by the projector width and height.
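For illustration, a minimal sketch of that last step (the projector resolution below is a placeholder; substitute your own):

int projectorWidth = 1280;   // placeholder: use your projector's actual resolution
int projectorHeight = 800;

PVector realWorldPoint = kpt.getDepthMapAt(startX, startY);
PVector projectedPointUno = kpt.convertKinectToProjector(realWorldPoint);

// convertKinectToProjector returns normalized (0..1) projector coordinates,
// so scale them up to pixels before drawing:
float px = projectedPointUno.x * projectorWidth;
float py = projectedPointUno.y * projectorHeight;
ellipse(px, py, 10, 10);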

genekogan commented Feb 22 '17 15:02