
Code for a single ply and discrepancies with the web demo

Open Pamogar opened this issue 2 years ago • 10 comments

Hello! Thank you for your excellent work; it is an impressive method for instance segmentation.

I have been working on a demo code for the inference of a single ply file using the scripts provided. The idea behind it is to create a ScanNet-style dataset where we only have one point cloud in the test split. This approach has some limitations that require preparations and modifications:

  • The point cloud must contain only colors (no normals or extra attributes) for the “segmentator” program to work.

  • For “scannet_preprocessing” and “main_instance_segmentation” to work on our files, we need to name them like ScanNet files, e.g. scene0001_00.ply. The “_vh_clean_2” suffix is added by the provided demo code.

  • We need to make sure that the path where we move the output of segmentator (line 9 of the provided code) is the same as the one used in the preprocessing (scannet_preprocessing.py, line 178).

  • We need a certain structure in the demo folder in order to simulate a dataset. The folder needs the files “./Tasks/Benchmark/scannetv2_test.txt”, “color_mean_std.yaml”, “label_database.yaml”, “train_database.yaml”, “train_validation_database.yaml” and “validation_database.yaml” (all generated from a full preprocessing of the ScanNet dataset), plus a train folder with at least the .npy files for scenes 0270_00, 0270_02 and 0384_00.

  • The folder should look like this:

├── train
│   ├── 0270_00.npy
│   ├── 0270_02.npy
│   └── 0384_00.npy
├── Tasks
│   └── Benchmark
│       └── scannetv2_test.txt
├── color_mean_std.yaml
├── label_database.yaml
├── train_database.yaml
├── train_validation_database.yaml
├── validation_database.yaml
└── scene0001_00.ply
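To make the setup less error-prone, here is a small sketch (standard library only; the file list is the one shown above, and `missing_demo_files` is a helper name I made up, not part of Mask3D) that checks the folder before running demo.py:

```python
import os

# Files and folders demo.py expects inside the data directory (from the
# layout above); the scene name is passed in by the caller.
REQUIRED_FILES = [
    "Tasks/Benchmark/scannetv2_test.txt",
    "color_mean_std.yaml",
    "label_database.yaml",
    "train_database.yaml",
    "train_validation_database.yaml",
    "validation_database.yaml",
]
REQUIRED_TRAIN_NPY = ["0270_00.npy", "0270_02.npy", "0384_00.npy"]


def missing_demo_files(data_dir, scene):
    """Return the list of required paths that are missing from data_dir."""
    wanted = REQUIRED_FILES + ["train/" + n for n in REQUIRED_TRAIN_NPY]
    wanted.append(scene + ".ply")
    return [p for p in wanted if not os.path.exists(os.path.join(data_dir, p))]
```

Running it before demo.py and aborting if the returned list is non-empty saves a failed preprocessing run later.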

The code takes care of updating the txt file of the test split and of generating and moving the output of “segmentator”. It then uses the provided Python script to preprocess the test split of a ScanNet-style dataset, and finally runs main_instance_segmentation.py to generate the results, obtaining the predictions for our point cloud.

# demo.py

import subprocess
import argparse
import os
import shutil


def segment(d_dir, scene):
    # Execute "segmentator" binary for the current ply file
    args = ("/home/admin2/PycharmProjects/Mask3d/Mask3D/datasets/preprocessing/segmentator",  # Path to the compiled segmentator binary
            str(d_dir + "/" + scene + ".ply"))
    popen = subprocess.Popen(args)
    popen.wait()

    # Rename the file and move it to the segments_test folder
    os.rename(d_dir + "/" + scene + ".0.010000.segs.json",
              d_dir + "/" + scene + "_vh_clean_2.0.010000.segs.json")
    shutil.move(d_dir + "/" + scene + "_vh_clean_2.0.010000.segs.json",
                "/home/admin2/Datasets/Mask3d_Processed/segments/" + scene + "_vh_clean_2.0.010000.segs.json")


def preprocess(d_dir):
    # Execute "scannet_preprocessing.py" for the ply file
    subprocess.run(["python", "/home/admin2/PycharmProjects/Mask3d/Mask3D/datasets/preprocessing/scannet_preprocessing.py",  # Path to scannet_preprocessing python file
                    "preprocess",
                    str("--data_dir="+d_dir),
                    str("--save_dir="+d_dir),
                    str("--git_repo="+d_dir),
                    '--modes=("test",)'])  # Only preprocess test split


def instance_segmentation(d_dir):
    # Execute "main_instance_segmentation.py"
    subprocess.run(["python", "/home/admin2/PycharmProjects/Mask3d/Mask3D/main_instance_segmentation.py",  # Path to main_instance_segmentation python file
                    'general.experiment_name="singleplytest_query_150_topk_300_dbscan_0.95"',
                    'general.project_name="scannet_eval"',
                    "general.checkpoint='checkpoints/scannet_benchmark.ckpt'",  # Path to checkpoint
                    "general.eval_on_segments=true",
                    "general.train_on_segments=true",
                    "general.train_mode=false",
                    "general.export=true",
                    "data.test_mode=test",  # Use test split
                    "data/datasets=demo",  # Copy of scannet.yaml with test_dataset.data_dir changed
                    "model.num_queries=150",
                    "general.topk_per_image=300",
                    "general.use_dbscan=true",
                    "general.dbscan_eps=0.95"])

def main():
    # Create the parser
    parser = argparse.ArgumentParser()
    # Add arguments
    parser.add_argument('--data_dir', type=str, required=True)
    parser.add_argument('--scene', type=str, required=True)

    # Parse the arguments
    args = parser.parse_args()
    data_dir = args.data_dir
    scene = args.scene

    # Update txt of the test split with the scene name
    with open(data_dir + "/Tasks/Benchmark/scannetv2_test.txt", "w") as f:
        f.write(scene + "\n")

    # Create the scans_test/<scene> folders if they do not exist
    os.makedirs(data_dir + "/scans_test/" + scene, exist_ok=True)

    # Rename file and move it to ./scans_test/<scene>/
    shutil.copy(data_dir + "/" + scene + ".ply", data_dir + "/scans_test/" + scene + "/" + scene + "_vh_clean_2.ply")

    # Call segment function. Runs segmentator on the file and moves the output to the segments_test folder
    segment(data_dir, scene)

    # Call preprocess function. Runs "scannet_preprocessing.py" only for test split
    preprocess(data_dir)

    # Call instance_segmentation. Runs "main_instance_segmentation.py" for test split
    instance_segmentation(data_dir)

if __name__ == "__main__":
    main()

However, the predictions obtained using the configurations for the scannet_benchmark and the scannet_benchmark checkpoint (ScanNetv2 test) are different from the predictions obtained via the web demo.

@JonasSchult In the paper it is stated that you filter out predictions with a score lower than 0.8. I was wondering if there are further post-processing filters or modifications to the predictions (such as merging of overlapping clusters), and whether the configurations and checkpoints used are the same as for the ScanNet benchmark. In addition, I noticed some degree of downsampling of the ply files uploaded to the web demo, and was wondering how much they are downsampled, or whether there is any downsampling at all at inference time.
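For reference, the score filter mentioned in the paper could be sketched like this. This is only my guess at the post-processing, assuming the exported per-scene .txt format (one `mask_file label_id score` triple per line); `filter_predictions` is a hypothetical helper, not part of Mask3D:

```python
def filter_predictions(pred_lines, min_score=0.8):
    """Keep only predictions whose confidence reaches min_score.

    Each line has the form "<mask_file> <label_id> <score>", matching the
    per-scene .txt files exported by main_instance_segmentation.py.
    """
    kept = []
    for line in pred_lines:
        mask_file, label_id, score = line.split()
        if float(score) >= min_score:
            kept.append((mask_file, int(label_id), float(score)))
    return kept
```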

Here are some examples of the discrepancy between the predictions obtained with the code and the web demo:

(side-by-side screenshots: predictions from demo.py on the left, predictions from the web demo on the right, for scenes 1, 4 and 8)

As you can see, there are some scenes where the clusters are notably different, and even some scenes where certain instances are detected in only one of the two predictions.

@CharlesCai123 @reynoldscem @claragarciamoll @Augustab @lintong-zhang @pablovela5620 I have noticed that you have expressed your interest in processing your own pointclouds. If you are interested, we could work on this collectively.

Thank you!

Pamogar avatar Feb 16 '23 13:02 Pamogar

Hi @Pamogar ,

Thanks for this script. I have one question: how did you convert the /processed/scannet/test/scene0000_01.npy file (created after executing demo.py) to a .ply file for visualization? I am trying to convert it to .ply, but it raises RuntimeError: Unable to cast Python instance to C++ type.

Thanks, Vishal

Vishal1711 avatar Feb 18 '23 00:02 Vishal1711

Hi @Vishal1711,

The .npy file is created during preprocessing and used for the instance segmentation.

For the visualizations I use the original .ply file and the .txt files generated at eval_output/instance_evaluation_singleplytest_query_150_topk_300_dbscan_0.95_0/decoder_-1/. Here you should have scene0000_01.txt and a folder pred_mask with more text files.

scene0000_01.txt has three fields per row: the mask file for the instance, the id of the instance, and the score. The first step is loading the .txt file and reading the fields of the first line. Then I load the mask file pred_mask/scene0000_01_0.txt and change the colors of the masked points. I repeat this for all instances with a score above 0.8 and visualize the point cloud with the colors of the detected instances changed.

Here is the code I use for this:

#visualization.py

import open3d as o3d
import open3d.visualization.gui as gui
import numpy as np
import argparse
from datasets.scannet200.scannet200_constants import VALID_CLASS_IDS_20, CLASS_LABELS_20, SCANNET_COLOR_MAP_20
from datasets.scannet200.scannet200_constants import VALID_CLASS_IDS_200, CLASS_LABELS_200, SCANNET_COLOR_MAP_200
import random


def main():
    # Create the parser
    parser = argparse.ArgumentParser()
    # Add arguments
    parser.add_argument('--ply_path', type=str, required=True)
    parser.add_argument('--mask_dir', type=str, required=True)
    parser.add_argument('--scene_name', type=str, required=True)
    parser.add_argument('--ext', action="store_true")

    # Parse the arguments
    args = parser.parse_args()
    ply_path = args.ply_path
    mask_dir = args.mask_dir
    scene_name = args.scene_name

    # Load ply
    scene = o3d.io.read_point_cloud(ply_path)
    scene_mask = o3d.io.read_point_cloud(ply_path)

    # Read txt for the scene
    with open(mask_dir + "/" + scene_name + ".txt") as f:
        lines = f.readlines()

    # Split the lines into file, instance and score and get the label
    inst = []

    for l in lines:
        file, inst_i, score = l.split()

        if float(score) < 0.8:
            #print("Score too low, skipping iteration\n")
            continue

        # Create array of instances and get label
        inst.append(inst_i)
        try:
            label = CLASS_LABELS_20[VALID_CLASS_IDS_20.index(int(inst_i))+2]
            print(label)
        except (ValueError, IndexError):  # id not in VALID_CLASS_IDS_20, or index out of range
            print("Skipped " + inst_i)
            continue

        # Read the mask from the file
        with open(mask_dir + "/" + file) as f:
            mask = list(map(bool, (map(int, f.readlines()))))

        # if mask.count(1) < 100:
        #     continue

        # Apply the mask with a color
        colors = []
        inst_color = list(SCANNET_COLOR_MAP_20[VALID_CLASS_IDS_20[CLASS_LABELS_20.index(label)]])
        #inst_color = [random.uniform(0, 1), random.uniform(0, 1), random.uniform(0, 1)]

        for i in range(len(scene_mask.points)):
            if mask[i]:
                colors.append([inst_color[0]/255., inst_color[1]/255., inst_color[2]/255.])
                #colors.append(inst_color)

            else:
                colors.append(scene_mask.colors[i])

        scene_mask.colors = o3d.utility.Vector3dVector(colors)

    # Visualize scene
    if args.ext:
        ev = o3d.visualization.ExternalVisualizer()
        ev.set(scene_mask)
    else:
        ev = o3d.visualization.Visualizer()
        ev.create_window()
        ev.add_geometry(scene_mask)
        ev.run()
        ev.destroy_window()


if __name__ == "__main__":
    main()

You can use it like this:

python visualization.py --ply_path=<path_to_original_ply> \
  --mask_dir=eval_output/instance_evaluation_singleplytest_query_150_topk_300_dbscan_0.95_0/decoder_-1 \
  --scene_name=scene0000_01

Pamogar avatar Feb 20 '23 09:02 Pamogar

Thanks @Pamogar ,

I generated the files “color_mean_std.yaml”, “train_database.yaml”, “train_validation_database.yaml” and “validation_database.yaml” using three train scenes (0270_00, 0270_02 and 0384_00) and one test scene (0000_01). Do I need to preprocess the whole ScanNetv2 dataset to simulate a dataset?

Vishal1711 avatar Feb 21 '23 19:02 Vishal1711

The database files should be okay, since you only preprocess the test split. I'm not so sure about the other two files, color_mean_std.yaml and label_database.yaml, but you can copy the results I got with the whole dataset:

color_mean_std.yaml

mean:
- 0.47793125906962
- 0.4303257521323044
- 0.3749598901421883
std:
- 0.2834475483823543
- 0.27566157565723015
- 0.27018971370874995
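If you do want to regenerate these values yourself, they are just the per-channel mean and standard deviation of all point colors (scaled to [0, 1]) across the training scenes. A minimal pure-Python sketch, assuming colors come as 0-255 RGB triples; the real scannet_preprocessing.py may accumulate these differently across files, so treat this only as an illustration:

```python
import math


def color_mean_std(colors):
    """Per-channel mean and (population) std of 0-255 RGB triples, scaled to [0, 1]."""
    n = len(colors)
    means, stds = [], []
    for c in range(3):
        vals = [col[c] / 255.0 for col in colors]
        m = sum(vals) / n
        means.append(m)
        stds.append(math.sqrt(sum((v - m) ** 2 for v in vals) / n))
    return means, stds
```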

label_database.yaml

0:
  color:
  - 0
  - 0
  - 0
  name: empty
  validation: false
1:
  color:
  - 174
  - 199
  - 232
  name: wall
  validation: true
2:
  color:
  - 152
  - 223
  - 138
  name: floor
  validation: true
3:
  color:
  - 31
  - 119
  - 180
  name: cabinet
  validation: true
4:
  color:
  - 255
  - 187
  - 120
  name: bed
  validation: true
5:
  color:
  - 188
  - 189
  - 34
  name: chair
  validation: true
6:
  color:
  - 140
  - 86
  - 75
  name: sofa
  validation: true
7:
  color:
  - 255
  - 152
  - 150
  name: table
  validation: true
8:
  color:
  - 214
  - 39
  - 40
  name: door
  validation: true
9:
  color:
  - 197
  - 176
  - 213
  name: window
  validation: true
10:
  color:
  - 148
  - 103
  - 189
  name: bookshelf
  validation: true
11:
  color:
  - 196
  - 156
  - 148
  name: picture
  validation: true
12:
  color:
  - 23
  - 190
  - 207
  name: counter
  validation: true
13:
  color:
  - 178
  - 76
  - 76
  name: blinds
  validation: false
14:
  color:
  - 247
  - 182
  - 210
  name: desk
  validation: true
15:
  color:
  - 66
  - 188
  - 102
  name: shelves
  validation: false
16:
  color:
  - 219
  - 219
  - 141
  name: curtain
  validation: true
17:
  color:
  - 140
  - 57
  - 197
  name: dresser
  validation: false
18:
  color:
  - 202
  - 185
  - 52
  name: pillow
  validation: false
19:
  color:
  - 51
  - 176
  - 203
  name: mirror
  validation: false
20:
  color:
  - 200
  - 54
  - 131
  name: floor_mat
  validation: false
21:
  color:
  - 92
  - 193
  - 61
  name: clothes
  validation: false
22:
  color:
  - 78
  - 71
  - 183
  name: ceiling
  validation: false
23:
  color:
  - 172
  - 114
  - 82
  name: books
  validation: false
24:
  color:
  - 255
  - 127
  - 14
  name: refridgerator
  validation: true
25:
  color:
  - 91
  - 163
  - 138
  name: television
  validation: false
26:
  color:
  - 153
  - 98
  - 156
  name: paper
  validation: false
27:
  color:
  - 140
  - 153
  - 101
  name: towel
  validation: false
28:
  color:
  - 158
  - 218
  - 229
  name: shower_curtain
  validation: true
29:
  color:
  - 100
  - 125
  - 154
  name: box
  validation: false
30:
  color:
  - 178
  - 127
  - 135
  name: whiteboard
  validation: false
31:
  color:
  - 120
  - 185
  - 128
  name: person
  validation: false
32:
  color:
  - 146
  - 111
  - 194
  name: night_stand
  validation: false
33:
  color:
  - 44
  - 160
  - 44
  name: toilet
  validation: true
34:
  color:
  - 112
  - 128
  - 144
  name: sink
  validation: true
35:
  color:
  - 96
  - 207
  - 209
  name: lamp
  validation: false
36:
  color:
  - 227
  - 119
  - 194
  name: bathtub
  validation: true
37:
  color:
  - 213
  - 92
  - 176
  name: bag
  validation: false
38:
  color:
  - 94
  - 106
  - 211
  name: otherstructure
  validation: false
39:
  color:
  - 82
  - 84
  - 163
  name: otherfurniture
  validation: true
40:
  color:
  - 100
  - 85
  - 144
  name: otherprop
  validation: false

Hope this helps! :)

Pamogar avatar Feb 22 '23 08:02 Pamogar

Hi @Pamogar ,

Thanks for the demo script and the detailed explanation. It took me around 50 seconds to run inference on a single test scene. When I tried to use multiple GPUs for inference, I ran into errors. The author mentioned in https://github.com/JonasSchult/Mask3D/issues/35#issuecomment-1368026455 that the code is not tested with multiple GPUs. Still, do you know of any way to use multiple GPUs for inference?

(screenshot of the multi-GPU error omitted)

Thanks, Vishal

Vishal1711 avatar Feb 23 '23 00:02 Vishal1711

Hi @Vishal1711,

I'm glad to hear that you were able to get it working. However, I've only been using one GPU and haven't figured out how to make it work with multiple ones yet.

Pamogar avatar Feb 23 '23 12:02 Pamogar

Many thanks for your demo.py. It really helps me.

CQUlds avatar Apr 20 '23 09:04 CQUlds

Now that single-ply inference is possible, how can I run inference on my own data?

CQUlds avatar Apr 20 '23 13:04 CQUlds

Hi @CQUlds,

You can convert your data into the raw_data format and keep it in /data/raw/dataset_name/test. The raw_data format depends on which dataset you trained on earlier: ScanNet, S3DIS or STPLS3D. Using the preprocessing script, convert that data into processed_data. You can then run inference on your data with the scripts under the scripts folder in test mode.
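As a rough illustration of the layout described above (`stage_raw_scan` is a made-up helper; the exact raw format still depends on the dataset, e.g. ScanNet additionally expects scans_test/<scene>/<scene>_vh_clean_2.ply as in the demo script):

```python
import os
import shutil


def stage_raw_scan(ply_path, dataset_name, scene_name, root="data/raw"):
    """Copy a .ply into the raw-data test layout: <root>/<dataset>/test/<scene>.ply."""
    dest_dir = os.path.join(root, dataset_name, "test")
    os.makedirs(dest_dir, exist_ok=True)
    dest = os.path.join(dest_dir, scene_name + ".ply")
    shutil.copy(ply_path, dest)
    return dest
```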

Vishal1711 avatar Apr 20 '23 17:04 Vishal1711

Hi @JonasSchult, and thanks a lot for the code; you did an amazing job. Thanks also to @Pamogar, this demo was really useful for me. I wanted to ask about multiprocessing with more than one GPU: has anyone achieved it? I am trying to train, and even after changing data.voxel_size to 0.05 I have problems allocating memory on CUDA. I have two Nvidia GeForce RTX 2080 Ti GPUs, and using only one with 10 GB of memory is not enough; it would be ideal to be able to work with both.

I also wanted to point out a small error I think I found in the visualization.py you shared in this issue, @Pamogar. Correct me if I am wrong, but in my case, in order to assign the labels correctly to the instances, I had to change this line of code (line 86):

label = CLASS_LABELS_20[VALID_CLASS_IDS_20.index(int(inst_i))+2]

to:

label = CLASS_LABELS_20[VALID_CLASS_IDS_20.index(int(inst_i))]

In my case, removing the +2 solved the problem, because the algorithm was assigning an incorrect label (exactly the second one after the correct label in CLASS_LABELS_20).
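The indexing issue can be shown with a toy example (these mini lists are hypothetical stand-ins for the real ScanNet constants): VALID_CLASS_IDS_20 and CLASS_LABELS_20 are parallel lists, so the label for an exported id should come from the same position, with no offset.

```python
# Toy stand-ins for the real ScanNet constants (hypothetical values).
VALID_IDS = [1, 2, 3, 4, 5]
LABELS = ["wall", "floor", "cabinet", "bed", "chair"]


def label_for(class_id):
    """Look up the label at the same position as class_id; no +2 offset."""
    return LABELS[VALID_IDS.index(class_id)]
```

With the `+2` offset, id 3 would map to "chair" instead of the aligned "cabinet".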

lluisb3 avatar Oct 18 '23 08:10 lluisb3