panoptic-toolbox
How can I calibrate the hdPose3d data with the Kinect sensors and synchronize them?
Thanks for your dataset!
I want to get the 3D pose information calibrated with the Kinect data for training. Is that possible? I'm not good at MATLAB scripting and quickly got lost in the KinopticStudio Toolbox. Could anyone give me some ideas on how to start?
Hi,
They are all in the same calibration space! Have you tried visualizing them together (just plot both the skeletons and the point cloud, as in the sketch below)?
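For reference, here is a minimal sketch of that visualization (my own, not part of the toolbox). It assumes you have already exported a PLY file with demo_kinoptic_gen_ptcloud.m, have jsonlab on the path, run it from the sequence folder, and have the Computer Vision Toolbox available for pcread/pcshow; the joint access pattern follows the code shared later in this thread.

```matlab
% Minimal sketch: overlay the HD skeletons on a kinoptic point cloud.
addpath('jsonlab');
hd_index = 400; % pick a frame that has both a PLY and a skeleton json
ptc = pcread(sprintf('kinoptic_ptclouds/ptcloud_hd%08d.ply', hd_index));
bframe = loadjson(sprintf('hdPose3d_stage1_coco19/body3DScene_%08d.json', hd_index));
figure; pcshow(ptc); hold on;
for ib = 1:numel(bframe.bodies)
    joints = reshape(bframe.bodies{ib}.joints19, [4, 19])'; % rows: [x y z confidence]
    plot3(joints(:,1), joints(:,2), joints(:,3), 'r*', 'MarkerSize', 8);
end
hold off; axis equal;
```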
Thanks a lot for the last reply~ I found the relevant code in demo_kinoptic_gen_ptcloud.m and demo_kinoptic_projection.m.
Dear @jhugestar,
Thanks for your great contribution with this dataset! I figured out how to calibrate the 3D pose to the Kinect images. However, I observe some delay when projecting the 3D keypoints onto the Kinect color and depth images.
The way I select the 3D point data is
`skel_json_frame= sprintf('%s/body3DScene_%08d.json', hd_skel_json_path, hd_index);`
where `hd_index` iterates over the list built by `hd_index_list = hd_index_list+2`, as in demo_kinoptic_gen_ptcloud.m.
Here are some images of the results I get when projecting the 3D points on the color image

and on the depth image

Could you give me a hint on how to align such data more accurately?
Thanks in advance
Hi @legan78,
The Kinects are not perfectly synchronized among themselves, nor with the other sensors, since their frame rates vary. So with fast motion you may see this misalignment. In our code you can actually obtain the time difference between the skeleton (HD time) and the depth map (each Kinect's capture time), so you may use this time interval to find a better alignment (e.g., interpolating neighboring skeletons to generate a skeleton at the Kinect time, as sketched below).
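For concreteness, here is a rough sketch of that interpolation idea (an assumption on my part, not toolbox code). It presumes ksync and psync are loaded as in demo_kinoptic_gen_ptcloud.m, dindex is the selected depth frame, and loadSkeletonJoints is a hypothetical helper that loads the body3DScene json and returns the 19x4 joint array:

```matlab
% Sketch: linearly blend the two HD skeletons that bracket the kinect capture time.
t_kinect = ksync.kinect.depth.KINECTNODE1.univ_time(dindex); % this kinect's capture time
hd_times = psync.hd.univ_time;
i0 = find(hd_times <= t_kinect, 1, 'last'); % HD frame just before t_kinect
i1 = i0 + 1;                                % HD frame just after t_kinect
alpha = (t_kinect - hd_times(i0)) / (hd_times(i1) - hd_times(i0));
joints0 = loadSkeletonJoints(i0); % hypothetical helper: loadjson + reshape to 19x4
joints1 = loadSkeletonJoints(i1);
joints_interp = (1-alpha)*joints0 + alpha*joints1; % skeleton at the kinect time
```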
Hi @jhugestar and @Aki57 ,
Many thanks for your reply. I managed to make it more accurate by playing with `hd_index` when loading the skeleton frame. Specifically, I subtracted 2 when loading the skeleton JSON frame in the line
`skel_json_frame= sprintf('%s/body3DScene_%08d.json', hd_skel_json_path, hd_index-2);`
By doing that I get the following result

This looks more accurate than the previous result. Here is the code I use to extract the skeleton for each image.
```matlab
warning off;
root_path = 'Panoptic/panoptic-toolbox';
seqName = '171204_pose5';
hd_index_list = 400:420; % Target frames for which to export ply files
%Relative Paths
kinectImgDir = sprintf('%s/%s/kinectImgs',root_path,seqName);
kinectDepthDir = sprintf('%s/%s/kinect_shared_depth',root_path,seqName);
calibFileName = sprintf('%s/%s/kcalibration_%s.json',root_path,seqName,seqName);
syncTableFileName = sprintf('%s/%s/ksynctables_%s.json',root_path,seqName,seqName);
panopcalibFileName = sprintf('%s/%s/calibration_%s.json',root_path,seqName,seqName);
panopSyncTableFileName = sprintf('%s/%s/synctables_%s.json',root_path,seqName,seqName);
% Output folder path
% Change the following if you want to save outputs to another folder
plyOutputDir = sprintf('%s/%s/kinoptic_ptclouds',root_path,seqName);
mkdir(plyOutputDir);
fprintf('PLY files will be saved in: %s\n', plyOutputDir);
% Other parameters
bVisOutput = 1; % Turn on if you want to visualize what's going on
bRemoveFloor = 1; % Turn on if you want to remove points from the floor
floorHeightThreshold = 0.5; % Adjust this (0.5cm ~ 7cm) if floor points are not successfully removed; increasing it may remove people's feet
bRemoveWalls = 1; % Turn on if you want to remove points from the dome surface
addpath('jsonlab'); addpath('kinoptic-tools');
%% Load syncTables
ksync = loadjson(syncTableFileName);
knames = {};
for id=1:10
knames{id} = sprintf('KINECTNODE%d', id);
end
psync = loadjson(panopSyncTableFileName); % Panoptic sync tables
%% Load Kinect calibration file
kinect_calibration = loadjson(calibFileName);
panoptic_calibration = loadjson(panopcalibFileName);
panoptic_camNames = cellfun( @(X) X.name, panoptic_calibration.cameras, 'uni', false ); % To search for the target camera
hd_index_list = hd_index_list+2; % This is the output frame (-2 is some weird offset in the synctables)
% dict={};
%for idk = 1:10 % Iterate over all 10 kinects
for idk = 2:2 % Iterating over the kinects. Change this if you want a different subset
jsonDict=[];
f=1;
for hd_index = hd_index_list
hd_index_afterOffset = hd_index-2; % This is the output frame (-2 is some weird offset in the synctables)
%out_fileName = sprintf('%s/ptcloud_hd%08d.ply', plyOutputDir, hd_index_afterOffset);
%% Compute Universal time
selUnivTime = psync.hd.univ_time(hd_index);
fprintf('hd_index: %d, UnivTime: %.3f\n', hd_index, selUnivTime)
%% Main Iteration
all_point3d_panopticWorld = []; %point cloud output from all kinects
all_colorsv = []; %colors for point cloud
%% Select the corresponding rgb and depth frame indices by selUnivTime
% Note that the kinects are not perfectly synchronized (it's not possible),
% and we need to consider the offset from selUnivTime
[time_distc, cindex] = min( abs( selUnivTime - (ksync.kinect.color.(knames{idk}).univ_time-6.25) ) ); %cindex: 1 based
ksync.kinect.color.(knames{idk}).univ_time(cindex);
% assert(time_dist<30);
[time_distd, dindex] = min( abs( selUnivTime - ksync.kinect.depth.(knames{idk}).univ_time ) ); %dindex: 1 based
% Filtering if current kinect data is far from the selected time
fprintf('idk: %d, %.4f\n', idk, selUnivTime - ksync.kinect.depth.(knames{idk}).univ_time(dindex));
if abs(ksync.kinect.depth.(knames{idk}).univ_time(dindex) - ksync.kinect.color.(knames{idk}).univ_time(cindex))>6.5
fprintf('Skipping %d, depth-color diff %.3f\n', idk, abs(ksync.kinect.depth.(knames{idk}).univ_time(dindex) - ksync.kinect.color.(knames{idk}).univ_time(cindex)));
continue;
end
% assert(time_dist<30);
% time_distd
if time_distc>30 || time_distd>17
fprintf('Skipping %d\n', idk);
[time_distc, time_distd];
continue;
end
% Extract image and depth
%rgbim_raw = kdata.vobj{idk}.readIndex(cindex); % cindex: 1 based
rgbFileName = sprintf('%s/50_%02d/50_%02d_%08d.jpg',kinectImgDir,idk,idk,cindex);
depthFileName = sprintf('%s/KINECTNODE%d/depthdata.dat',kinectDepthDir,idk);
rgbim = imread(rgbFileName); % cindex: 1 based
%depthim_raw = kdata.vobj{idk}.readDepthIndex(dindex); % cindex: 1 based
depthim = readDepthIndex_1basedIdx(depthFileName,dindex); % dindex: 1 based
%Check valid pixels
validMask = depthim~=0; %Check non-valid depth pixels (which have 0)
nonValidPixIdx = find(validMask(:)==0);
validPixIdx = find(validMask(:)==1);
%% Back project depth to 3D points (in camera coordinate)
camCalibData = kinect_calibration.sensors{idk};
% point3d (N x 3): 3D point cloud from depth map in the depth camera coordinate
% point2d_color (N x 2): 2D points projected on the rgb image space
% Where N is the number of pixels of depth image (512*424)
[point3d, point2d_incolor] = unprojectDepth_release(depthim, camCalibData, true);
nonValidPixIdx = find(validMask(:)==0);
validPixIdx = find(validMask(:)==1);
point3d(nonValidPixIdx,:) = nan;
point2d_incolor(nonValidPixIdx,:) = nan;
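% Render a registered depth image: project the 3D points back into the
% depth camera and rasterize their z values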
depthcam.K=camCalibData.K_depth;
depthcam.R=camCalibData.M_depth(1:3,1:3);
depthcam.t=camCalibData.M_depth(1:3,4);
depthcam.distCoef=camCalibData.distCoeffs_depth;
depthpoints=PoseProject2D(point3d, depthcam, true);
depth=zeros(424,512);
for dd=1:(424*512)
if ~isnan(depthpoints(dd,1)) % note: 'x ~= nan' is always true; isnan is needed here
x= round(depthpoints(dd,1));
y= round(depthpoints(dd,2));
d= point3d(dd,3);
if x>0 && x<=512 && y>0 && y<=424
depth(y, x) = d;
end
end
end
newDepth= depth*255/8.0; % map depth (assumed up to ~8 m) to [0,255] for display
newDepth= uint8(newDepth);
figure; imshow(newDepth); title('MY NEW DEPTH IMAGE');
outDir=sprintf('%s/%s/extracted_depth_imgs/50_%02d/', root_path, seqName, idk); % renamed from 'dir', which shadows a builtin
mkdir(outDir);
depthImgName=sprintf('%s/%s/extracted_depth_imgs/50_%02d/depth_%02d_%012d.mat', root_path, seqName, idk, idk, dindex);
save(depthImgName, 'depth');
frameInfo.frame_id =dindex;
frameInfo.depth_index =dindex;
frameInfo.color_frame_name=sprintf('50_%02d_%08d.jpg',idk,cindex);
frameInfo.img_paths =depthImgName;
%% Filtering based on the distance from the dome center
domeCenter_kinectlocal = camCalibData.domeCenter;
%% Project 3D points (from depth) to color image
colors_inDepth = multiChannelInterp( double(rgbim)/255, ...
point2d_incolor(:,1)+1, point2d_incolor(:,2)+1, 'linear');
colors_inDepth = reshape(colors_inDepth, [size(depthim,1), size(depthim,2), 3]);
colorsv = reshape(colors_inDepth, [], 3);
% valid_mask = depthim~=0;
validMask = validMask(:) & ~isnan(point3d(:,1));
validMask = validMask(:) & ~isnan(colorsv(:,1));
%nonValidPixIdx = find(validMask(:)==0);
validPixIdx = find(validMask(:)==1);
%% Transform Kinect Local to Panoptic World
% Kinect local coordinate is defined by depth camera coordinate
panoptic_calibData = panoptic_calibration.cameras{find(strcmp(panoptic_camNames, sprintf('50_%02d', idk)))};
M = [panoptic_calibData.R, panoptic_calibData.t];
T_panopticWorld2KinectColor = [M; [0 0 0 1]]; %Panoptic_world to Kinect_color
T_kinectColor2PanopticWorld = inv(T_panopticWorld2KinectColor);
scale_kinoptic2panoptic = eye(4);
scaleFactor = 100;%0.01; % meters (kinect local) to centimeters (panoptic world)
scale_kinoptic2panoptic(1:3,1:3) = scaleFactor*scale_kinoptic2panoptic(1:3,1:3);
%T_kinectColor2KinectLocal = [calib_rgbCam.Mdist;[0 0 0 1]]; %Color2Depth camera coordinate
T_kinectColor2KinectLocal = camCalibData.M_color;%[camCalibData.M_color;[0 0 0 1]]; %Color2Depth camera coordinate
T_kinectLocal2KinectColor = inv(T_kinectColor2KinectLocal);
T_kinectLocal2PanopticWorld = T_kinectColor2PanopticWorld* scale_kinoptic2panoptic* T_kinectLocal2KinectColor;
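% At this point T_kinectLocal2PanopticWorld maps depth-camera coordinates
% (meters) into the panoptic world (centimeters)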
%% ANMG: now display the 3D keypoints on the generated image
% (cf. "Reproject 3D Body Keypoint onto the first HD camera" in the python demo)
hd_skel_json_path= sprintf('%s/%s/hdPose3d_stage1_coco19',root_path,seqName);
% # Edges between joints in the body skeleton
body_edges = [[1,2];[1,4];[4,5];[5,6];[1,3];[3,7];[7,8];[8,9];[3,13];[13,14];[14,15];[1,10];[10,11];[11,12]];
skel_json_frame= sprintf('%s/body3DScene_%08d.json', hd_skel_json_path, hd_index-2);
bframe= loadjson(skel_json_frame);
nbodies= size(bframe.bodies);
keypoints3d_in_world={};
keypoints2d_in_color={};
keypoints2d_in_depth={};
for ib=1:nbodies(1)
for jb=1:nbodies(2)
body=bframe.bodies{ib,jb};
joints= body.joints19;
joints= reshape(joints, [4, 19])';
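% each row of joints is now [x y z confidence] in panoptic world coordinates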
keypoints3d_in_world{jb}=joints;
% transform joins into color camera coordinate system
joints_kinect_color= T_panopticWorld2KinectColor* [joints(:,1:3), ones(19,1)]';
% transform joint into depth camera coordinate system
joints_kinect_local= T_kinectColor2KinectLocal*joints_kinect_color;
%rgbcam.R = camCalibData.M_color(1:3,1:3);
%rgbcam.t = camCalibData.M_color(1:3,4);
rgbcam.K = camCalibData.K_color;
%rgbcam.distCoef = camCalibData.distCoeffs_color;
depthcam.R = camCalibData.M_depth(1:3,1:3);
depthcam.t = camCalibData.M_depth(1:3,4);
depthcam.K = camCalibData.K_depth;
depthcam.distCoef = camCalibData.distCoeffs_depth;
%[joints2d] = PoseProject2D(joints_kinect_color(1:3,:)', rgbcam, true);
joints2d = rgbcam.K*joints_kinect_color(1:3,:);
joints2d = (joints2d./joints2d(3,:))';
%joints2d_depth= depthcam.K*joints_kinect_local(1:3,:);
%joints2d_depth= (joints2d_depth./joints2d_depth(3,:))';
joints2d_depth= PoseProject2D(joints_kinect_local(1:3,:)', depthcam, true);
keypoints2d_in_color{jb}= joints2d;
keypoints2d_in_depth{jb}= joints2d_depth;
idx = find(joints2d(:,1)<0 | joints2d(:,2)<0 | joints2d(:,1)>1920 | joints2d(:,2)>1080 );
joints2d(idx,:) = [];
hold on; plot(joints2d_depth(:,1), joints2d_depth(:,2),'*', 'color', 'red', 'linewidth', 5);
axis equal;
end
end
hold off
frameInfo.keypoints3d_in_world= keypoints3d_in_world;
frameInfo.keypoints2d_in_color= keypoints2d_in_color;
frameInfo.keypoints2d_in_depth= keypoints2d_in_depth;
jsonDict{f}= frameInfo;
f=f+1;
end
outJsonName=sprintf('%s/%s/extracted_depth_imgs/50_%02d/annotations_50_%02d.json', root_path, seqName, idk, idk);
savejson('', jsonDict, outJsonName);
end
```
@legan78 Why is there a horizontal offset when the 3D keypoints are projected into the depth coordinate system?
@xf0515 Hello, does your problem look like my image below? I also wonder why "K_color"/"distCoeffs_color" in kcalibration_171204_pose1.json differ from "K"/"distCoef" in calibration_171204_pose1.json. What do these mean?

May I ask how your depth image was obtained? Is it included in the downloaded files (and if so, what is the file name), or does it need to be extracted?