Improve performance on large datasets
This PR improves the performance of the incremental SfM stage on large datasets (several thousand images) and on dense datasets (high redundancy and/or a high number of features per image):
- Using METIS + SuiteSparse improves the sparse Cholesky factorization, especially on large and dense BA problems and on many-core machines (see the solver configuration sketch after this list).
- Simple grid decimation bounds the complexity of the BA problems. This is the main source of speed-up, especially on dense (high feature count) projects (see the decimation sketch below).
- On-the-fly candidate computation is now constant-time: large gain on very large datasets (7K images); a sketch of the general idea is given below.
- Finally, conservative image preemption speeds up high-overlap datasets.
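For the first point, here is a minimal sketch of how a bundle adjustment solver can be pointed at SuiteSparse, whose CHOLMOD factorization can use a METIS nested-dissection ordering when built with METIS. It uses standard Ceres solver options; the function name `MakeLargeBAOptions` is illustrative and not necessarily how this PR wires it:

```cpp
#include <ceres/ceres.h>

// Configure Ceres to factorize the reduced camera system with CHOLMOD
// (SuiteSparse). When SuiteSparse is compiled with METIS, CHOLMOD can choose
// a METIS nested-dissection ordering, which lowers fill-in on large, dense
// bundle adjustment problems and scales better on many-core machines.
ceres::Solver::Options MakeLargeBAOptions(int num_threads) {
  ceres::Solver::Options options;
  options.linear_solver_type = ceres::SPARSE_SCHUR;
  options.sparse_linear_algebra_library_type = ceres::SUITE_SPARSE;
  options.num_threads = num_threads;
  return options;
}
```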
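For the grid decimation point, a minimal sketch under the assumption that decimation simply caps the number of observations kept per image grid cell before they enter the BA problem; `Feature`, `GridDecimate`, and the grid parameters are hypothetical names used for illustration:

```cpp
#include <algorithm>
#include <cstddef>
#include <unordered_map>
#include <vector>

struct Feature {
  double x, y;  // pixel coordinates; real features carry more data
};

// Keep at most max_per_cell features per cell of a regular grid laid over
// the image, bounding the number of observations each image contributes to
// the bundle adjustment problem.
std::vector<Feature> GridDecimate(const std::vector<Feature>& features,
                                  double width, double height,
                                  int grid_cols, int grid_rows,
                                  std::size_t max_per_cell) {
  std::unordered_map<int, std::size_t> count_per_cell;
  std::vector<Feature> kept;
  kept.reserve(features.size());
  for (const Feature& f : features) {
    int col = static_cast<int>(f.x / width * grid_cols);
    int row = static_cast<int>(f.y / height * grid_rows);
    col = std::max(0, std::min(col, grid_cols - 1));
    row = std::max(0, std::min(row, grid_rows - 1));
    const int cell = row * grid_cols + col;
    if (count_per_cell[cell] < max_per_cell) {
      ++count_per_cell[cell];
      kept.push_back(f);
    }
  }
  return kept;
}
```

Capping observations per cell keeps image coverage roughly uniform while bounding BA problem size, which is consistent with the speed-up being largest on high-feature-count projects.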
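For the candidate computation point, one plausible reading is that candidate images are scored incrementally as tracks get triangulated, instead of rescanning every remaining image at each iteration. The sketch below only illustrates that general bookkeeping idea with hypothetical names; it is not taken from the PR:

```cpp
#include <string>
#include <unordered_map>
#include <unordered_set>
#include <vector>

// image name -> number of its tracks that are already triangulated.
// Updated only when a track changes state, so picking the next image to
// resect no longer requires a pass over all images.
using CandidateScores = std::unordered_map<std::string, int>;

void OnTrackTriangulated(const std::vector<std::string>& images_seeing_track,
                         const std::unordered_set<std::string>& reconstructed,
                         CandidateScores* scores) {
  for (const std::string& image : images_seeing_track) {
    if (reconstructed.count(image) == 0) {
      ++(*scores)[image];
    }
  }
}
```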
Using quality reports and GCP datasets (aerial), no difference in quality was observed.
- High feature count dataset (20K-40K features): x3 speed-up (BASE: report_base.pdf, OPTIM-LARGE: report_speedup.pdf)
- Large and dense dataset (7K images): from more than 12 hours (stopped manually) to under 2 hours (OPTIM-LARGE: report_subset08.pdf)
- Weakly connected dataset: 10% speed-up (BASE: report_base.pdf, OPTIM-LARGE: report_spedup.pdf)
- Moderately dense dataset: x2 speed-up (BASE: report.pdf, OPTIM-LARGE: report_speedup.pdf)
@paulinus has imported this pull request. If you are a Meta employee, you can view this in D81898646.
@YanNoun @paulinus What is the status of this PR? What does "@paulinus has imported this pull request" mean?