DensifyPointCloud: Speed up tips & tricks
Thanks a ton for the beautiful work!
I want to ask if there are any tips or tricks to speed up DensifyPointCloud with masking. I have a scene containing 268 images at 4624x3468 resolution each. Resizing the images may lead to missing segments in the masks. FWIW, most of the processing time goes to generating and filtering the depth maps (~45 min).
DensifyPointCloud scene.mvs --mask-path masks_omvs/ --ignore-mask-label 255 --filter-point-cloud 1
You can use --max-resolution 1024 or smaller.
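For example, combined with the masking flags from the original command above:
DensifyPointCloud scene.mvs --mask-path masks_omvs/ --ignore-mask-label 255 --filter-point-cloud 1 --max-resolution 1024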
Thanks for the quick answer!
The runtime dropped dramatically, to 6m32s!
While examining the resulting point cloud, I noticed there might be a correlation between the holes and the --max-resolution parameter. Are there any other parameters I should consider when changing --max-resolution?
--max-resolution 512: https://user-images.githubusercontent.com/8401456/262382356-4dbf54f5-66db-4ba5-b811-c6c7272f8856.gif
--max-resolution 1024: https://user-images.githubusercontent.com/8401456/262382545-4303963e-06d5-4439-a4cd-7b5861ef0cb5.gif
--max-resolution 2560 (default): https://user-images.githubusercontent.com/8401456/262382733-d5c29445-65fc-4944-86b4-f1cb205b030c.gif
This is expected for textureless surfaces; the lower the working resolution, the less texture there is left to match.
@cdcseacave - are there other parameters/flags I can consider for further speed optimisations?
No. However, if you split the scene into clusters and recompute depth maps per cluster (e.g. because of limited storage space), the runs could be scripted across multiple NVIDIA GPUs, as sketched below.
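A minimal sketch of that idea, assuming the scene has already been split into per-cluster files (the names scene_0000.mvs and scene_0001.mvs are hypothetical) and using the standard CUDA_VISIBLE_DEVICES variable to pin each run to one GPU:

# Run one DensifyPointCloud instance per GPU, in parallel;
# the per-cluster scene file names are assumed for illustration.
CUDA_VISIBLE_DEVICES=0 DensifyPointCloud scene_0000.mvs --max-resolution 1024 &
CUDA_VISIBLE_DEVICES=1 DensifyPointCloud scene_0001.mvs --max-resolution 1024 &
wait  # block until both background runs finish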
@4CJ7T - thanks for your reply! AFAICT, I am not using scene clusters, but I am still writing 268 depth maps, plus the filtered ones, to disk; later we remove them using --remove-dmaps, which sounds inefficient. Is there a way to keep them in memory instead?
--remove-dmaps removes depth maps after fusion to clear storage space.
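For reference, assuming the flag takes a 0/1 value like the other boolean options shown in this thread:
DensifyPointCloud scene.mvs --remove-dmaps 1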
Sounds reasonable. Can you advise how to stop writing depth maps to disk?
There is no option for that currently.
Can you please point me to where I can make this modification? Using SSDs does not eliminate the IO cost.
Writing DMAPs to disk is usually necessary, unless you have infinite memory.
AFAICT, it depends on the use case. For small scenes (~50 depth maps) in time-critical applications, it is useful not to write them to disk at all; I want to avoid the IO, and memory is not a problem. Can we at least have a flag to stop writing?
It is not so easy: to add this option you need to change the code everywhere it tries to save to or load from disk.
Very well, @cdcseacave! I can take this on, but I would like some guidance.
The idea is simple: the depth-map data needs to be available during the various stages, so if you do not save it to disk, you must not release it either. Find all the places where DepthData is serialized to a DMAP file, and disable both the saving and the releasing.
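As a starting point, one way to locate those places is a plain source search; this is a sketch, assuming the usual openMVS layout with the library sources under libs/MVS:

# List candidate DMAP save/load sites in the openMVS sources
grep -rni "dmap" libs/MVS --include="*.cpp" --include="*.h"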