mtk380

Results: 7 issues of mtk380

I have tried to debug the test code, but I don't know where the 'pl7tUx1TUTM' image comes from. The test dataset filenames are all in capital letters.
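For reference, this is roughly how I searched the extracted test data for that ID (the dataset path below is a placeholder, not the repo's actual layout):

```python
from pathlib import Path

# Placeholder path to the extracted test dataset; adjust to your layout.
data_root = Path("./data/test")

# Search all subdirectories for any file whose name contains the ID.
matches = [p for p in data_root.rglob("*") if "pl7tUx1TUTM" in p.name]
print(matches)  # empty list -> the image is not part of the downloaded data
```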

When I train the script, this problem occurs while running `fid_evaluation.setup_evaluation` (Progress to next stage: 2%|▊ | 5000/200000):

```
Traceback (most recent call last):
  File "/home/ubuntu541/anaconda3/envs/pigan/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 59,...
```
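The traceback is cut off, but the frame in `torch/multiprocessing/spawn.py` suggests the exception was raised inside a spawned worker process and re-raised by the parent. For context, a minimal sketch of the `torch.multiprocessing.spawn` entry-point pattern such training scripts use (the worker body and process count here are placeholders, not the repo's code):

```python
import torch.multiprocessing as mp

def worker(rank, world_size):
    # Each process runs its share of training/evaluation; exceptions raised
    # here are propagated back and re-raised through spawn.py in the parent.
    print(f"worker {rank}/{world_size} started")

if __name__ == "__main__":
    # The guard is required with the 'spawn' start method, since child
    # processes re-import this module on startup.
    mp.spawn(worker, args=(2,), nprocs=2)
```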

The code is wonderful, but I have some trouble generating a per-scene semantic image from this code: `o3d_mesh_canonical_clean.vertex_colors = o3d.utility.Vector3dVector(v_colors/255.0)`. Could you please give me some advice?
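For concreteness, this is roughly the pattern I am trying to follow: map per-vertex semantic labels to RGB colors and attach them to an Open3D mesh (the mesh, label array, and palette below are made up for illustration):

```python
import numpy as np
import open3d as o3d

# Hypothetical inputs: a triangle mesh and one integer semantic label per vertex.
mesh = o3d.geometry.TriangleMesh.create_sphere()
labels = np.random.randint(0, 3, size=len(mesh.vertices))

# Simple palette: one RGB color (0-255) per semantic class.
palette = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255]], dtype=np.float64)

# Look up a color for each vertex and normalize to [0, 1] as Open3D expects.
v_colors = palette[labels]
mesh.vertex_colors = o3d.utility.Vector3dVector(v_colors / 255.0)
o3d.io.write_triangle_mesh("semantic_mesh.ply", mesh)
```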

As mentioned above, such as:

```python
# hard code sfm depth padding
scene_name = self.root_dir.rsplit('/')[-1]
if scene_name == 'brandenburg_gate':
    sfm_path = '../neuralsfm'
    depth_percent = 0.2
elif scene_name == 'palacio_de_bellas_artes':
    sfm_path = ...
```

Maybe the data is converted to .bin files for a faster dataloader, but I am confused about how to visualize the data like the figure in your paper.
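For example, this is roughly what I tried: load one .bin file as a flat array and reshape it for display (the file name, dtype, and shape here are guesses, since I do not know the actual binary layout):

```python
import numpy as np
import matplotlib.pyplot as plt

# Guessed layout: float32 values forming an H x W depth/feature map.
H, W = 480, 640
data = np.fromfile("scene_0001.bin", dtype=np.float32)
img = data[: H * W].reshape(H, W)

plt.imshow(img, cmap="viridis")
plt.colorbar()
plt.show()
```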

Why does the code use torch.bmm or subtraction between Rt and Rt[:, origin_frame : origin_frame + 1]? Is it to make the values smaller?
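My current understanding, written out as a sketch (the shapes and the camera-to-world convention are my assumptions, not taken from the code): composing each pose with the inverse of the origin frame makes every pose relative to that frame, so the origin becomes the identity and the remaining translations are centered around zero.

```python
import torch

B, F = 2, 5                             # batch size, number of frames (made up)
Rt = torch.eye(4).repeat(B, F, 1, 1)    # assumed per-frame 4x4 camera matrices
Rt[..., :3, 3] = torch.randn(B, F, 3)   # random translations for illustration

origin_frame = 0
origin = Rt[:, origin_frame : origin_frame + 1]            # (B, 1, 4, 4)
origin_inv = torch.inverse(origin).expand(-1, F, -1, -1)   # broadcast over frames

# torch.bmm works on 3-D batches, so fold (B, F) into one batch dimension.
rel = torch.bmm(origin_inv.reshape(-1, 4, 4),
                Rt.reshape(-1, 4, 4)).reshape(B, F, 4, 4)

# The origin frame maps to the identity; other poses are now relative to it.
print(rel[:, origin_frame])
```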

Thanks for your great work! I'm interested in the inference time for one image. If I change the color in real time, what is the maximum resolution I can use?
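For reference, this is roughly how I plan to measure per-image inference time (the model and input resolution below are placeholders, not the paper's network):

```python
import time
import torch

# Placeholder model and input; swap in the actual network and target resolution.
model = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1).cuda().eval()
x = torch.randn(1, 3, 512, 512, device="cuda")

with torch.no_grad():
    for _ in range(10):           # warm-up iterations
        model(x)
    torch.cuda.synchronize()      # make sure queued GPU work has finished
    start = time.perf_counter()
    for _ in range(100):
        model(x)
    torch.cuda.synchronize()
    print((time.perf_counter() - start) / 100, "s per image")
```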