
Improve performance on large dataset

Open · YanNoun opened this pull request 4 months ago · 3 comments

This PR improves the performance of the incremental SfM stage on large datasets (several thousand images) and dense datasets (high redundancy and/or a high number of features per image):

  • Using METIS + SuiteSparse improves the Cholesky factorization, which helps especially on large, dense BA problems and on many-core machines.
  • Simple grid decimation bounds the complexity of BA problems. This is the main source of speed-up, especially on dense (high feature count) projects (see the first sketch after this list).
  • On-the-fly candidate computation now has constant cost: a large gain on very large datasets (7K images) (see the second sketch after this list).
  • Finally, conservative image preemption speeds up high-overlap datasets.
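
A minimal Python sketch of the grid-decimation idea, assuming per-image 2D observations; the function and parameter names (`decimate_observations`, `grid_size`, `max_per_cell`) are illustrative and not the PR's actual code. Keeping at most a fixed number of observations per grid cell caps the size of each bundle-adjustment problem regardless of how many features the detector produced:

```python
import numpy as np

def decimate_observations(points_px, image_size, grid_size=16, max_per_cell=5):
    """Return indices of at most `max_per_cell` observations per grid cell.

    points_px: (N, 2) array of 2D feature positions in pixels.
    image_size: (width, height) of the image in pixels.
    """
    width, height = image_size
    # Map each observation to a grid cell.
    cols = np.clip((points_px[:, 0] / width * grid_size).astype(int), 0, grid_size - 1)
    rows = np.clip((points_px[:, 1] / height * grid_size).astype(int), 0, grid_size - 1)
    cell_ids = rows * grid_size + cols

    # Keep the first `max_per_cell` observations falling in each cell.
    keep, counts = [], {}
    for idx, cell in enumerate(cell_ids):
        if counts.get(cell, 0) < max_per_cell:
            counts[cell] = counts.get(cell, 0) + 1
            keep.append(idx)
    return np.asarray(keep, dtype=int)
```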
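
And a sketch of one plausible reading of the "constant cost" candidate computation: maintain per-image counters of already-reconstructed tracks and update them incrementally when points are triangulated, instead of rescanning the whole tracks graph each time the next images to resect are selected. The class and method names are hypothetical; the actual PR may organize this differently.

```python
from collections import defaultdict

class ResectionCandidates:
    """Incrementally maintained counters of reconstructed tracks per candidate image."""

    def __init__(self, tracks_per_image):
        # tracks_per_image: {image_id: set of track_ids} from the tracks graph.
        self.images_per_track = defaultdict(set)
        for image, tracks in tracks_per_image.items():
            for track in tracks:
                self.images_per_track[track].add(image)
        self.common_tracks = defaultdict(int)   # image -> reconstructed tracks it sees
        self.reconstructed_images = set()

    def add_reconstructed_track(self, track_id):
        # Called when a track is triangulated: bump counters of images observing it.
        for image in self.images_per_track.get(track_id, ()):
            if image not in self.reconstructed_images:
                self.common_tracks[image] += 1

    def mark_reconstructed(self, image_id):
        # Remove an image from the candidate pool once it has been resected.
        self.reconstructed_images.add(image_id)
        self.common_tracks.pop(image_id, None)

    def best_candidates(self, k):
        # Ranking only touches the counters, never the full tracks graph.
        return sorted(self.common_tracks, key=self.common_tracks.get, reverse=True)[:k]
```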

Based on quality reports and aerial GCP datasets, no difference in reconstruction quality was observed.

  • High-feature-count dataset (20K-40K features): x3 speed-up. BASE: report_base.pdf, OPTIM-LARGE: report_speedup.pdf
  • Large, dense dataset (7K images): from more than 12 hours (stopped manually) to under 2 hours. OPTIM-LARGE: report_subset08.pdf
  • Weak-connection dataset: 10% speed-up. BASE: report_base.pdf, OPTIM-LARGE: report_spedup.pdf
  • Moderately dense dataset: x2 speed-up. BASE: report.pdf, OPTIM-LARGE: report_speedup.pdf

YanNoun · Aug 21 '25 15:08

@paulinus has imported this pull request. If you are a Meta employee, you can view this in D81898646.

facebook-github-bot · Sep 08 '25 07:09

What is the status of this PR? What does "@paulinus has imported this pull request" mean?

NathanMOlson · Nov 12 '25 20:11

@YanNoun @paulinus What is the status of this PR? What does "@paulinus has imported this pull request" mean?

NathanMOlson · Dec 01 '25 19:12