PyKinect2-Mapper-Functions

mapper.py is slow in converting between color space and depth space

akash02ita opened this issue · 18 comments

I am using Python 3.7 64-bit. I did not do pip install, since it does not seem to work that way; instead I am using the given files directly via import.

However, the issue is that running mapper.py is very slow: the frame rate is about 0.5 fps. I thought waitKey(3000) was the reason, but changing that still does not fix the issue:

When I set show=False it improves, but it is still not a smooth 25-30 fps; rather it is somewhere around 8-10 fps.

Is there any solution to this issue? When I do not call depth_2_color_space at all, the speed goes right back to 25-30 fps.
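For reference, a minimal timing sketch of the loop described above; this is an assumption-laden example that presumes mapper.py sits next to the script, a Kinect v2 is connected, and the call signature quoted later in this thread:

import time

from pykinect2 import PyKinectRuntime, PyKinectV2
from pykinect2.PyKinectV2 import _DepthSpacePoint

from mapper import depth_2_color_space

kinect = PyKinectRuntime.PyKinectRuntime(
    PyKinectV2.FrameSourceTypes_Depth | PyKinectV2.FrameSourceTypes_Color)

# Time ~300 aligned frames and report the average rate.
frames, t0 = 0, time.time()
while frames < 300:
    if kinect.has_new_depth_frame():
        # show=False skips the extra drawing work inside the helper
        depth_2_color_space(kinect, _DepthSpacePoint, kinect._depth_frame_data,
                            show=False, return_aligned_image=True)
        frames += 1
print('~{:.1f} fps'.format(frames / (time.time() - t0)))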

akash02ita · Nov 23 '22

If I use img = color_2_depth_space(kinect, _ColorSpacePoint, kinect._depth_frame_data, show=False, return_aligned_image=True) I still see 30 fps. Could the issue be due to resolution?

akash02ita · Nov 23 '22

Unfortunately, I do not have a Kinect device anymore to test it, but I believe the reason the function is slow is that I perform some operations that are only needed when you pass the show=True flag. The only reason I put that flag in is to test how the aligned image looks, but it slows down the performance by a lot. I suggest not using show=True, and only calling this function to get the points and then deciding how you want to display them.

I will push some changes in a moment that make some computations happen only when show is True.
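A minimal sketch of that suggestion, reusing the setup and imports from the timing snippet earlier in this thread and assuming the returned aligned frame is an image array that OpenCV can display directly:

# Inside the frame loop: fetch the aligned frame without the built-in drawing,
# then decide yourself how to show it.
img = depth_2_color_space(kinect, _DepthSpacePoint, kinect._depth_frame_data,
                          show=False, return_aligned_image=True)
cv2.imshow('Aligned depth in color space', img)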

KonstantinosAng · Nov 26 '22

Can you pull the changes and try again, without passing the show flag, to see whether there is still a drop in fps?

KonstantinosAng · Nov 26 '22

Hello, I use the code below but the terminal reports that "depth_2_color_space" is not defined, although I did import the mapper library.

import mapper
from pykinect2 import PyKinectV2
from pykinect2.PyKinectV2 import *
from pykinect2 import PyKinectRuntime
import cv2
import numpy as np

if __name__ == '__main__':
    kinect = PyKinectRuntime.PyKinectRuntime(PyKinectV2.FrameSourceTypes_Depth | PyKinectV2.FrameSourceTypes_Color)

    while True:
        if kinect.has_new_depth_frame():
            color_frame = kinect.get_last_color_frame()
            colorImage = color_frame.reshape((kinect.color_frame_desc.Height, kinect.color_frame_desc.Width, 4)).astype(np.uint8)
            colorImage = cv2.flip(colorImage, 1)
            cv2.imshow('Test Color View', cv2.resize(colorImage, (int(1920 / 2.5), int(1080 / 2.5))))
            depth_frame = kinect.get_last_depth_frame()
            depth_img = depth_frame.reshape((kinect.depth_frame_desc.Height, kinect.depth_frame_desc.Width)).astype(np.uint8)
            depth_img = cv2.flip(depth_img, 1)
            cv2.imshow('Test Depth View', depth_img)
            # print(color_point_2_depth_point(kinect, _DepthSpacePoint, kinect._depth_frame_data, [100, 100]))
            # print(depth_points_2_world_points(kinect, _DepthSpacePoint, [[100, 150], [200, 250]]))
            # print(intrinsics(kinect).FocalLengthX, intrinsics(kinect).FocalLengthY, intrinsics(kinect).PrincipalPointX, intrinsics(kinect).PrincipalPointY)
            # print(intrinsics(kinect).RadialDistortionFourthOrder, intrinsics(kinect).RadialDistortionSecondOrder, intrinsics(kinect).RadialDistortionSixthOrder)
            # print(world_point_2_depth(kinect, _CameraSpacePoint, [0.250, 0.325, 1]))
            # img = depth_2_color_space(kinect, _DepthSpacePoint, kinect._depth_frame_data, show=False, return_aligned_image=True)
            depth_2_color_space(kinect, _DepthSpacePoint, kinect._depth_frame_data, show=True)
            # img = color_2_depth_space(kinect, _ColorSpacePoint, kinect._depth_frame_data, show=True, return_aligned_image=True)

        # Quit using q
        if cv2.waitKey(1) & 0xff == ord('q'):
            break

    cv2.destroyAllWindows()

dongtamlx18 · Jul 11 '23

try importing like this:

from mapper import *
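For completeness, the two import styles side by side; with plain import mapper every helper call needs the module prefix. This is a sketch only, and kinect and _DepthSpacePoint come from the setup in the script above:

# Option 1: wildcard import, as suggested above
from mapper import *
depth_2_color_space(kinect, _DepthSpacePoint, kinect._depth_frame_data, show=True)

# Option 2: keep the module namespace and qualify each call
import mapper
mapper.depth_2_color_space(kinect, _DepthSpacePoint, kinect._depth_frame_data, show=True)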

KonstantinosAng · Jul 11 '23

Oh, I had a problem with my code and I fixed it. Thank you so much, your source code is very useful. I wonder if there is any source code for processing point clouds with the Kinect v2, or anything related to point clouds using the Kinect v2. If you have something, please share it with me. By the way, thank you so much!

dongtamlx18 · Jul 12 '23

@dongtamlx18 I have another repo that uses mapper to draw real-time (30 fps) point clouds with color and depth simultaneously:

https://github.com/KonstantinosAng/PyKinect2-PyQtGraph-PointClouds

KonstantinosAng · Jul 12 '23

You are amazing, but I have a problem when I test your code. I don't know why this happens:

from PointCloud import Cloud

pcl = Cloud(file='models/model.pcd')  (I downloaded model.pcd and placed it in the right folder)

And here is the information about the error:

    File "e:/2022 - Nam4 - HKII/ĐATN/PythonDA-newVS/RunTest.py", line 3, in <module>
        pcl = Cloud(file='models/model.pcd')
    File "e:\2022 - Nam4 - HKII\ĐATN\PythonDA-newVS\PointCloud.py", line 116, in __init__
        self.visualize_file()
    File "e:\2022 - Nam4 - HKII\ĐATN\PythonDA-newVS\PointCloud.py", line 647, in visualize_file
        vis = o3d.Visualizer()  # start visualizer
    AttributeError: module 'open3d' has no attribute 'Visualizer'

dongtamlx18 · Jul 12 '23

I have used this open3d version:

0.10.0.1

try:

pip uninstall open3d

and then:

pip install open3d==0.10.0.1
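A quick way to check which version is actually installed, plus the namespaced class that replaced the top-level one in more recent releases; this is an alternative to downgrading, if pinning 0.10.0.1 is not an option:

import open3d as o3d

print(o3d.__version__)  # the snippets in this thread assume 0.10.0.1

# Older releases exposed the visualizer at the top level (o3d.Visualizer);
# newer ones keep it under the visualization submodule:
vis = o3d.visualization.Visualizer()
vis.create_window(window_name='test', width=640, height=480)
vis.destroy_window()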

KonstantinosAng · Jul 12 '23

I can use the function pcl.visualize() for pcl = Cloud(file='models/test_cloud_4.txt'), though. Maybe I will try with your open3d version.

dongtamlx18 · Jul 12 '23

Hello sir, I was scrolling on YouTube and people said that the Open3D library does not support creating point cloud data. I do not know whether that is right or not, so I am here to ask you about it. Could you also introduce me to some technique for creating point clouds with the Kinect v2? Thank you so much! By the way, I did read your PointCloud repository code, but I did not understand much of it, so I am asking here. Thank you!

dongtamlx18 · Jul 24 '23

Excuse me, could I ask you a question? I noticed in the source code that you swapped the x, y, z coordinates when copying from the 'world_points' numpy array into the 'x, y, z' coordinates. I am not sure why you did that. Does it have any specific significance or meaning? Thank you in advance!

dongtamlx18 · Aug 03 '23

First of all, about the Open3d:

I do not know if you can create a point cloud with open3d; I only use it to visualize point cloud files (.ply, .pcd) that I create manually in my repository. If you look at my code you will see that I get the world points from the Kinect and manually write the file using the basic structure of the (.ply, .pcd) file formats.
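To illustrate what "basic structure" means here, a hedged sketch (not the repository's actual code) of writing an Nx3 array of world points as a minimal ASCII .ply file:

import numpy as np

def write_ascii_ply(path, points):
    """Write an Nx3 float array as a minimal ASCII .ply point cloud."""
    header = (
        'ply\n'
        'format ascii 1.0\n'
        f'element vertex {len(points)}\n'
        'property float x\n'
        'property float y\n'
        'property float z\n'
        'end_header\n'
    )
    with open(path, 'w') as f:
        f.write(header)
        np.savetxt(f, points, fmt='%.6f')  # one "x y z" line per point

# Hypothetical usage with random points standing in for Kinect world points:
write_ascii_ply('example_cloud.ply', np.random.rand(1000, 3))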

Second, about the x, y, z coordinates:

Where exactly do I swap the values? I cannot find the function.

KonstantinosAng · Aug 03 '23

Hi, thanks for your reply. The x, y, z coordinates are not created by a function, but in these lines of code below:

self._dynamic_point_cloud[:, 0] = world_points[:, 0]
self._dynamic_point_cloud[:, 1] = world_points[:, 2]
self._dynamic_point_cloud[:, 2] = world_points[:, 1]

I see that you swapped them when filling the array. Is that right, or did I misunderstand something? Thank you.

dongtamlx18 · Aug 03 '23

I did this because, for the Kinect, the Z coordinate is the distance from the Kinect to the object and the Y coordinate is the object's vertical distance between the ground and the Kinect. I have always used the Z coordinate as the height from the floor, so I swapped the values to suit me better.
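Put differently, the quoted lines just reorder the columns of the Nx3 world-point array. A standalone numpy sketch of the same remapping (the array values here are made up):

import numpy as np

# world_points in Kinect camera coordinates:
#   column 0 = x (left/right), column 1 = y (height), column 2 = z (distance from the sensor)
world_points = np.array([[0.10, 0.50, 1.20],
                         [0.00, 0.30, 2.00]])

# Swap columns 1 and 2 so the third column (used as z downstream) carries the
# height above the floor and the second carries the distance from the sensor.
dynamic_point_cloud = world_points[:, [0, 2, 1]]
print(dynamic_point_cloud)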

KonstantinosAng · Aug 03 '23

Hey, thank you so much! I'm trying to estimate the normal vector of the white tilted object in the picture below. I don't know much about Open3D processing; could you give me some advice on this? Which preprocessing should I do before estimating the normal vector, or what keywords would make it easier to find information? Thank you! And here is the .ply file created by your PointCloud source code:

dongtamlx18 · Aug 05 '23

I think for image processing it is better to use OpenCV. To compute the normal vector you have to find three points in the plane you are looking for, and finding that plane in the image can be difficult if the colors mix. Start by looking for a way to accurately identify the tilted object in the image.
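A small numpy sketch of the three-point idea: given three non-collinear points on the tilted face, the unit normal is the normalized cross product of two in-plane vectors (the sample coordinates are made up):

import numpy as np

def plane_normal(p1, p2, p3):
    """Unit normal of the plane through three non-collinear 3D points."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)
    return n / np.linalg.norm(n)

print(plane_normal([0.0, 0.0, 1.00], [0.1, 0.0, 1.05], [0.0, 0.1, 1.02]))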

KonstantinosAng · Aug 08 '23

Thanks for the tip, haha! I've completed several preprocessing steps, including cropping the point cloud (which might be similar to a pass-through filter), plane segmentation, and outlier removal. Thanks for providing the point cloud source. Now I'm moving on to the final step: estimating the normal vectors. Here is an image for reference.
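For anyone following along, a hedged open3d sketch of that pipeline (crop, outlier removal, plane segmentation and normal estimation); the file name, bounds, and thresholds are placeholders, and the calls are the ones available in the 0.10-era API mentioned earlier:

import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud('example_cloud.ply')  # placeholder path

# 1. Crop to the region of interest (bounds in metres are illustrative).
bbox = o3d.geometry.AxisAlignedBoundingBox(np.array([-0.5, -0.5, 0.5]),
                                           np.array([0.5, 0.5, 2.0]))
pcd = pcd.crop(bbox)

# 2. Remove statistical outliers (returns the filtered cloud and the kept indices).
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# 3. RANSAC plane fit: the model [a, b, c, d] already contains the plane normal.
(a, b, c, d), inliers = pcd.segment_plane(distance_threshold=0.01,
                                          ransac_n=3, num_iterations=1000)
normal = np.array([a, b, c])
print('plane normal:', normal / np.linalg.norm(normal))

# 4. Alternatively, estimate per-point normals over the cleaned cloud.
pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
print(np.asarray(pcd.normals)[:5])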

dongtamlx18 · Aug 11 '23