scalable pipeline can't effectively divide scene
DensifyPointCloud successfully completed on 96 images with 60522392 points.
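(For anyone following along, the densify step here is just the standard call, roughly the following; the filenames are placeholders rather than my exact command line:)

# Dense reconstruction of the full scene; the output scene_dense.mvs is the usual default name.
DensifyPointCloud scene.mvs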
I only need a small area. I can't really mask, because each image is already framed on the main subject as much as possible, and there's no obvious hook for the mask, except depth! The scene is all one color.
I read about possibly being able to pass a bounding box to ReconstructMesh, but I'm not sure if --roi-border or --crop-to-roi can do this.
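To make it concrete, the sort of invocation I have in mind is below. This is purely a guess at the syntax (I'm assuming --crop-to-roi acts as a switch and --roi-border takes a margin value); I haven't verified it against the help output:

# Hypothetical: mesh only the region of interest, discarding points outside it.
ReconstructMesh scene_dense.mvs --crop-to-roi 1 --roi-border 0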
I tried the scalable pipeline, and split the scene into 6 sections.
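The split itself was along these lines; the area value below is a placeholder taken from the scalable-pipeline documentation, not necessarily the one that produced 6 sub-scenes here:

# Split the scene into sub-scenes by area, then densify each sub-scene separately.
DensifyPointCloud scene.mvs --sub-scene-area 660000
DensifyPointCloud scene_0000.mvs
DensifyPointCloud scene_0001.mvs
# ... and so on for the remaining sub-scenes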
scene_0000.mvs gave Densifying point-cloud completed: 3779400 points
scene_0001.mvs gave Densifying point-cloud completed: 804437 points
scene_0002.mvs gave Densifying point-cloud completed: 14068772 points
scene_0003.mvs gave Densifying point-cloud completed: 57780712 points
scene_0004.mvs gave Densifying point-cloud completed: 8417252 points
scene_0005.mvs gave Densifying point-cloud completed: 104060 points
scene_0002_dense.mvs has half my ROI, and scene_0003_dense.mvs has all of it, but that cloud is almost as big as the original!
Is there a way to control the ROI a little bit more? Really just to make the ReconstructMesh step less computationally intensive.
I suppose I could try subdividing the scene more, but in this case every camera view is relevant to the main subject. The scalable pipeline seems more suited to datasets where only certain camera views are relevant to certain parts of the scene.
Eventually I completed the regular pipeline for this dataset, but ReconstructMesh took very long.
Densifying point-cloud completed: 60522392 points (1h12m18s457ms)
Mesh reconstruction completed: 9002496 vertices, 17930746 faces (1d10h38m22s504ms)
Now I am running RefineMesh with a trimmed mesh of about 2M faces, passed via the --mesh-file option, and that is working nicely.
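In case it helps anyone, the refine call is roughly the following; apart from --mesh-file, the filenames here are placeholders:

# Refine the dense scene using an externally trimmed mesh instead of the full reconstruction.
RefineMesh scene_dense.mvs --mesh-file scene_dense_mesh_trimmed.ply -o scene_refined.mvs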
Can you please share the data? Both the MVS scene and the original images if possible, plus the command lines you used.