habitat-sim
Having problems with the habitat-sim ImageExtractor when using my own GLB model
❓ Questions and Help
Hi, I'm using the basic example in the Image Extractor tutorial to extract static images from my own glb model. I'm referring to the following script:
```python
import numpy as np
import matplotlib.pyplot as plt

from habitat_sim.utils.data import ImageExtractor

# For viewing the extractor output
def display_sample(sample):
    img = sample["rgba"]
    depth = sample["depth"]
    semantic = sample["semantic"]

    arr = [img, depth, semantic]
    titles = ["rgba", "depth", "semantic"]
    plt.figure(figsize=(12, 8))
    for i, data in enumerate(arr):
        ax = plt.subplot(1, 3, i + 1)
        ax.axis("off")
        ax.set_title(titles[i])
        plt.imshow(data)
    plt.show()

scene_filepath = "data/scene_datasets/habitat-test-scenes/apartment_1.glb"

extractor = ImageExtractor(
    scene_filepath,
    img_size=(512, 512),
    output=["rgba", "depth", "semantic"],
)

# Use the list of train outputs instead of the default, which is the full list
# of outputs (test + train)
extractor.set_mode('train')

# Index into the extractor like a normal python list
sample = extractor[0]

# Or use slicing
samples = extractor[1:4]
for sample in samples:
    display_sample(sample)

# Close the extractor so we can instantiate another one later
# (see close method for detailed explanation)
extractor.close()
```
where I replace the scene_filepath with the path to my own glb file.
I created my GLB model using the Open3D reconstruction pipeline, and when I view it in online glTF viewers I can see the colors. But the corresponding RGB images extracted from my model by the above script are totally black, and I don't know how or where the RGB information is getting lost in the process.
Thanks for your help!
Habitat assumes GLB meshes are +Z up; if yours aren't, that would be an issue. Also make sure you generate a navmesh for the mesh. If you didn't, you can compute one on the fly with

```python
sim.recompute_navmesh(sim.pathfinder, habitat_sim.nav.NavMeshSettings())
```
Thanks Erik for addressing my issue. I have double-checked, and my GLB meshes are +Z up. I have also precomputed the navmesh using datatool:

```shell
build/utils/datatool/datatool create_navmesh scene.glb scene.navmesh
```
But my problem persists. I suspect it is more of a visualization problem, because when I run

```shell
python -u habitat_baselines/run.py
```

with a config file that refers to my GLB model, everything works well and I'm able to train an agent. Hence, I suspect the issue is that the habitat-sim `ImageExtractor` class is not able to read the RGB information. Here is a file where I have put the RGB and depth images extracted from my GLB file. The RGB image is totally black!
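As a quick sanity check on the extractor output, a helper like the following can distinguish a frame that is genuinely all-black from one that is merely dark. This is a minimal sketch: `frame_is_black` is a hypothetical helper, and the synthetic arrays below only stand in for the real `sample["rgba"]` returned by the extractor.

```python
import numpy as np

def frame_is_black(rgba, tol=0):
    # Hypothetical helper: treat a frame as "black" if every RGB value
    # is at or below `tol`; the alpha channel is ignored.
    rgb = np.asarray(rgba)[..., :3]
    return bool((rgb <= tol).all())

# Synthetic frames standing in for extractor output:
black = np.zeros((512, 512, 4), dtype=np.uint8)
lit = black.copy()
lit[100, 100, :3] = 200  # a single colored pixel

print(frame_is_black(black))  # True
print(frame_is_black(lit))    # False
```

If every extracted frame comes back `True` here while habitat-lab training renders fine, that points at the ImageExtractor path specifically rather than the asset itself.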
I met a similar problem when using the Gibson datasets. Most RGB images from the Gibson datasets are black; only a few scenes work, such as Eudora.glb. Have you solved this problem?
Hi, I'm hitting the same issue. I created my own .ply file using Open3D and generated the RGB data with habitat-sim; the RGB images are almost black. However, changing the '*.ply' file to a '*semantic.ply' makes it work normally. It seems habitat-sim processes '*semantic.ply' files differently (e.g., the '*semantic.ply' files of the Replica dataset also work well).
Adding semantic.ply may be resulting in instance-mesh treatment. Likely this allows the use of per-vertex color embedding rather than a material. If this is still relevant, I suggest opening a new issue specifically for that purpose.
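A quick way to see whether a .ply actually declares per-vertex colors (the kind of embedding the instance-mesh path can use) is to scan its plain-text header for `red`/`green`/`blue` vertex properties. This is a minimal sketch, not habitat-sim's actual loader; `ply_has_vertex_colors` is a hypothetical helper, and the inline header just illustrates the format.

```python
def ply_has_vertex_colors(header_text):
    # Scan a PLY header for red/green/blue properties on the vertex element.
    in_vertex = False
    props = set()
    for line in header_text.splitlines():
        tokens = line.split()
        if not tokens:
            continue
        if tokens[0] == "element":
            in_vertex = tokens[1] == "vertex"
        elif tokens[0] == "property" and in_vertex:
            props.add(tokens[-1])
        elif tokens[0] == "end_header":
            break
    return {"red", "green", "blue"} <= props

header = """ply
format ascii 1.0
element vertex 3
property float x
property float y
property float z
property uchar red
property uchar green
property uchar blue
end_header"""

print(ply_has_vertex_colors(header))  # True
```

If your exported .ply lacks these properties, the color lives in a material or texture instead, which would explain the different behavior between the two loading paths.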
The original question relates to differences between ImageExtractor renderings and habitat-lab renderings for training. @mpiseno may have some thoughts about this.
I got a similar issue. Has anyone fixed it?