
Converting numpy depth_map back into pyrealsense2 depth frame

LMurphy99 opened this issue 5 years ago • 17 comments

Required Info
Camera Model: D435i
Firmware Version: N/A
Operating System & Version: Windows 10
Kernel Version (Linux Only): N/A
Platform: PC
SDK Version: N/A
Language: Python
Segment: N/A

Issue Description

I have been working on pre-processing depth maps before converting them to point clouds. This involves converting depth_frame.get_data() into a numpy array and doing some processing (e.g., setting all pixels of no interest to NaN). I then want to use rs.pointcloud().calculate() to compute my point cloud, with the altered depth map as input. However, I'm getting the following error:

"TypeError: calculate(): incompatible function arguments. The following argument types are supported: 1. (self: pyrealsense2.pyrealsense2.pointcloud, depth: pyrealsense2.pyrealsense2.frame) -> pyrealsense2.pyrealsense2.points"

I was wondering how I would go about converting my numpy depth map into a format that the function can accept.

Attached is the code of note (screenshot: Capture).

LMurphy99 avatar Feb 04 '20 22:02 LMurphy99
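For readers who cannot view the attached screenshot, the failing pattern is roughly the following. This is a reconstruction of the reported situation, not the exact attached code; the 2000 threshold is an arbitrary placeholder.

import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()
frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()

# Convert the frame's data to a numpy array for processing
depth_map = np.asanyarray(depth_frame.get_data()).astype(np.float64)
depth_map[depth_map > 2000] = np.nan  # placeholder: mark pixels of no interest

# Raises the TypeError above: calculate() accepts a pyrealsense2 frame,
# not a numpy array
pc = rs.pointcloud()
points = pc.calculate(depth_map)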

following

zapaishchykova avatar Apr 05 '20 16:04 zapaishchykova

Interested in an answer too!

rcroset avatar Apr 14 '20 14:04 rcroset

Hi, I am also trying to convert the numpy array back to BufData in order to reuse the modified data. I have tried casting, memoryview, and creating new BufData objects, but without any success. Is there any way to recreate a BufData object? The environment returns an error that there is no defined constructor.

r-ratajczak avatar Apr 20 '20 13:04 r-ratajczak
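There is indeed no public BufData constructor exposed in pyrealsense2. One partial workaround that has been reported, sketched below, is to skip recreating BufData and instead edit the frame's existing buffer in place; whether the numpy view is writable can depend on the SDK version, so treat this as something to verify rather than guaranteed behaviour.

import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()
frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()

# np.asanyarray wraps the frame's own buffer rather than copying it;
# if the view is writable, edits land directly in the frame's memory
depth = np.asanyarray(depth_frame.get_data())
depth[depth > 2000] = 0  # z16 has no NaN, so zero out unwanted pixels

# The original frame object now carries the edited data
points = rs.pointcloud().calculate(depth_frame)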


@LMurphy99 Sorry for the late response. We have submitted your request to the engineering team and will let you know of any updates. Thanks!

following

jts3k avatar Aug 08 '20 19:08 jts3k

Are there any updates on this? Thanks!

adityashrm21 avatar Oct 23 '20 03:10 adityashrm21

@adityashrm21 Sorry, this enhancement is still being worked on. We will let you know of any updates. Thanks!

RealSenseSupport avatar Oct 27 '20 02:10 RealSenseSupport

following

tsaizhenling avatar Oct 29 '20 16:10 tsaizhenling

Hi everyone,

The BufData object creation question has been answered by a RealSense team member in the link below.

https://github.com/IntelRealSense/librealsense/issues/8394#issuecomment-782811459

This case will remain open, though, due to the ongoing enhancement feature request associated with the original question.

MartyG-RealSense avatar Feb 22 '21 08:02 MartyG-RealSense

@MartyG-RealSense Are there any updates on this? Thanks

kailkuhn avatar Mar 07 '21 18:03 kailkuhn

Hi @kailkuhn If you are asking about the official feature request for numpy depth map conversion, there are no updates to report.

In regard to exploration of numpy conversion by RealSense users, the link below may provide you with some further useful insights.

https://github.com/IntelRealSense/librealsense/issues/2551

MartyG-RealSense avatar Mar 07 '21 20:03 MartyG-RealSense
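For reference, the direction usually explored in such threads is pyrealsense2's software_device interface, which lets you inject a raw buffer as a synthetic depth frame and receive a genuine pyrealsense2 frame back out. A minimal, untested sketch follows; the intrinsics values are placeholders to copy from your real stream, and attribute names may differ slightly between SDK versions.

import numpy as np
import pyrealsense2 as rs

W, H, FPS = 640, 480, 30

# One-time setup: a software device with a single synthetic depth stream
sd = rs.software_device()
sensor = sd.add_sensor("Depth")

intr = rs.intrinsics()            # placeholder values; copy these from
intr.width, intr.height = W, H    # your real stream's get_intrinsics()
intr.ppx, intr.ppy = 320.0, 240.0
intr.fx, intr.fy = 600.0, 600.0
intr.model = rs.distortion.brown_conrady
intr.coeffs = [0.0] * 5

vs = rs.video_stream()
vs.type, vs.index, vs.uid = rs.stream.depth, 0, 0
vs.width, vs.height, vs.fps = W, H, FPS
vs.bpp, vs.fmt = 2, rs.format.z16  # 16-bit depth
vs.intrinsics = intr
stream_profile = sensor.add_video_stream(vs)
sensor.add_read_only_option(rs.option.depth_units, 0.001)

sync = rs.syncer()
sensor.open(stream_profile)
sensor.start(sync)

# Per frame: push the edited numpy array back in as a depth frame
edited = np.zeros((H, W), dtype=np.uint16)  # your processed depth map

sf = rs.software_video_frame()
sf.pixels = edited                 # buffer holding the pixel data
sf.stride = W * 2
sf.bpp = 2
sf.timestamp = 0.0
sf.domain = rs.timestamp_domain.system_time
sf.frame_number = 0
sf.profile = stream_profile.as_video_stream_profile()
sensor.on_video_frame(sf)

depth_frame = sync.wait_for_frames().get_depth_frame()
points = rs.pointcloud().calculate(depth_frame)  # accepted: it is a real frame

Note that z16 cannot represent NaN, so pixels of no interest should be zeroed instead; zero depth is treated as invalid.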

Hi All,

Are there any updates on this feature to convert a numpy array back to a RealSense depth frame, in order to use the built-in functions of pyrealsense2?

Thanks

DeepakChandra2612 avatar Nov 03 '21 19:11 DeepakChandra2612

Hi @DeepakChandra2612 There is no further information to report about the official feature request to Intel, though it remains active. I am not aware of any further progress that RealSense users have made themselves on developing numpy to rs2::frame conversion beyond the https://github.com/IntelRealSense/librealsense/issues/2551 link already mentioned.

MartyG-RealSense avatar Nov 03 '21 19:11 MartyG-RealSense

I am trying to take a region of the depth map and convert it into a point cloud. To do so, I cropped the depth map to get the ROI I wanted, and in doing so the depth frame was converted into a numpy array. I want to convert this numpy array back to the depth frame format so that I can extract point clouds from the cropped region. How can I do that? Here is the code:

import numpy as np
import cv2
import open3d as o3d
import pyrealsense2 as rs
# `model` is assumed to be a segmentation model defined elsewhere
# (the .boxes/.masks usage below matches the Ultralytics YOLO API)

pipeline = rs.pipeline()
config = rs.config()
colorizer = rs.colorizer()
dec_filter = rs.decimation_filter()  # Decimation - reduces depth frame density
spat_filter = rs.spatial_filter()    # Spatial - edge-preserving spatial smoothing
temp_filter = rs.temporal_filter()

pipeline_wrapper = rs.pipeline_wrapper(pipeline)
pipeline_profile = config.resolve(pipeline_wrapper)
device = pipeline_profile.get_device()
device_product_line = str(device.get_info(rs.camera_info.product_line))

found_rgb = False
for s in device.sensors:
    if s.get_info(rs.camera_info.name) == 'RGB Camera':
        found_rgb = True
        break
if not found_rgb:
    print("The demo requires Depth camera with Color sensor")
    exit(0)

config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)

if device_product_line == 'L500':
    config.enable_stream(rs.stream.color, 960, 540, rs.format.bgr8, 30)
else:
    config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

profile = pipeline.start(config)
pc = rs.pointcloud()
points = rs.points()
depth_sensor = profile.get_device().first_depth_sensor()
depth_scale = depth_sensor.get_depth_scale()

clipping_distance_in_meters = 1  # 1 meter
clipping_distance = clipping_distance_in_meters / depth_scale
intrinsics_depth = profile.get_stream(rs.stream.depth).as_video_stream_profile().get_intrinsics()
intrinsics_color = profile.get_stream(rs.stream.color).as_video_stream_profile().get_intrinsics()

pcd_depth_object = o3d.geometry.PointCloud()

# Wait for a coherent pair of frames: depth and color
frames = pipeline.wait_for_frames()
colorized = colorizer.process(frames)

# Align the RGB and the depth frames
align = rs.align(rs.stream.color)
frames = align.process(frames)

depth_frame = frames.get_depth_frame()
color_frame = frames.get_color_frame()
'''
depth_frame = dec_filter.process(depth_frame)
depth_frame = spat_filter.process(depth_frame)
depth_frame = temp_filter.process(depth_frame)
'''

# Convert images to numpy arrays
depth_image = np.asanyarray(depth_frame.get_data())
color_image = np.asanyarray(color_frame.get_data())

# Create a binary mask of the same size as the depth map for cropping
mask_image = np.zeros_like(depth_image, dtype=np.uint8)

# Object IDs for filtering the desired classes for detection
desired_class_ids = [0]

# Segmentation
results = model.predict(source=color_image, stream=True)
for person in results:
    indexes = [index for index, item in enumerate(person.boxes.cls) if item in desired_class_ids]
    mask = person.masks[indexes].xy  # mask coordinates for each object detected in the frame
    for i in range(len(mask)):  # iterate through each mask
        act_mask = mask[i].tolist()  # assuming the mask data is a list of coordinates
        # Convert the mask data into a list of contours
        mask_contours = [np.array(act_mask, dtype=np.int32)]
        # Fill mask_image with white pixels at the mask coordinates
        cv2.fillPoly(mask_image, mask_contours, color=255)

# Apply colormap on depth image (image must be converted to 8-bit per pixel first)
depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.05), cv2.COLORMAP_JET)
depth_object = cv2.bitwise_and(depth_image, depth_image, mask=mask_image)

Here, depth_object is the cropped depth map, which is in numpy array format. I need to convert it into the RealSense depth frame format so I can extract point clouds from it.

sanjaiiv04 avatar Nov 20 '23 06:11 sanjaiiv04

Hi @sanjaiiv04 Converting numpy to rs2::frame remains an unsolved problem where there is no further advice available, unfortunately.

MartyG-RealSense avatar Nov 20 '23 10:11 MartyG-RealSense
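For this particular use case the conversion can often be sidestepped, though: compute the point cloud from the unmodified depth_frame and then filter the resulting vertices with the 2D mask. A sketch continuing the snippet above (pc, depth_frame, mask_image and pcd_depth_object are the variables defined there), assuming the aligned depth frame is 640x480; adjust the reshape to your aligned resolution.

# Compute the point cloud from the original, unmodified depth frame
points = pc.calculate(depth_frame)

# Reshape the flat vertex buffer to image layout so it lines up with the mask
verts = np.asanyarray(points.get_vertices()).view(np.float32).reshape(480, 640, 3)

# Keep only vertices inside the segmentation mask, then drop invalid
# zero-depth points
masked_verts = verts[mask_image == 255]
masked_verts = masked_verts[masked_verts[:, 2] > 0]

pcd_depth_object.points = o3d.utility.Vector3dVector(masked_verts.astype(np.float64))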


It's mind-blowing that 4 years later you haven't added support for this. Unbelievable.

charlielito avatar Apr 18 '24 19:04 charlielito

I am still waiting for an answer in 2024...

mxshen avatar Jun 18 '24 09:06 mxshen