articulated-object-nerf
Data generation for articulation training
Could you elaborate a bit on how you save the segmentation mask (in what format)?
For example, given an image of shape [h, w, 3] and an object with N parts, do you save the mask as [h, w, 1] with integer values (1...N) as part labels?
Also, if the mask is something obtained directly from the SAPIEN library, it would be great if you could provide a code snippet.
Correct, every pixel stores an integer part label. Note that the saved mask is actually single-channel [h, w]: it is channel 1 (the actor-level labels) of SAPIEN's [H, W, 4] segmentation texture. Here is the code for your reference:
import os
import numpy as np
from PIL import Image, ImageColor

# SAPIEN returns a [H, W, 4] uint32 texture; channel 0 holds mesh-level
# labels and channel 1 holds actor-level labels.
seg_labels = camera.get_uint32_texture('Segmentation')  # [H, W, 4]

# Optional color palette, useful only for visualizing the labels
# (not needed when saving the raw integer mask below).
colormap = sorted(set(ImageColor.colormap.values()))
color_palette = np.array([ImageColor.getrgb(color) for color in colormap], dtype=np.uint8)

label1_image = seg_labels[..., 1].astype(np.uint8)  # actor-level labels, [H, W]

# Save the raw integer labels as a single-channel PNG.
seg_save_dir = os.path.join(render_dir, 'seg')
os.makedirs(seg_save_dir, exist_ok=True)
seg_save_path = os.path.join(seg_save_dir, 'r_' + str(i) + '.png')
label1_pil = Image.fromarray(label1_image)
label1_pil.save(seg_save_path)