
Any suggestions to improve the speed of reconstructing 360 images?

Bin-ze opened this issue 1 year ago · 0 comments

Great job!

I'm trying to use OpenSfM to reconstruct 360° images.

I followed the official tutorial and successfully reconstructed the collected 360 images. My configuration file is as follows:

```yaml
use_exif_size: yes
unknown_camera_models_are_different: no  # Treat images from unknown camera models as coming from different cameras
default_focal_prior: 0.85

# Params for features

feature_type: SIFT  # Feature type (AKAZE, SURF, SIFT, HAHOG, ORB)
feature_root: 1  # If 1, apply square root mapping to features
feature_min_frames: 4000  # If fewer frames are detected, sift_peak_threshold/surf_hessian_threshold is reduced.
feature_min_frames_panorama: 16000  # Same as above but for panorama images
feature_process_size: 2048  # Resize the image if its size is larger than specified. Set to -1 for original size
feature_process_size_panorama: 2048  # Same as above but for panorama images
feature_use_adaptive_suppression: no
features_bake_segmentation: no  # Bake segmentation info (class and instance) in the feature data. Thus it is done once for all at extraction time.

# Params for SIFT

sift_peak_threshold: 0.1  # Smaller value -> more features
sift_edge_threshold: 10  # See OpenCV doc

# Params for SURF

surf_hessian_threshold: 3000  # Smaller value -> more features
surf_n_octaves: 4  # See OpenCV doc
surf_n_octavelayers: 2  # See OpenCV doc
surf_upright: 0  # See OpenCV doc

# Params for AKAZE (See details in lib/src/third_party/akaze/AKAZEConfig.h)

akaze_omax: 4  # Maximum octave evolution of the image 2^sigma (coarsest scale sigma units)
akaze_dthreshold: 0.001  # Detector response threshold to accept point
akaze_descriptor: MSURF  # Feature type
akaze_descriptor_size: 0  # Size of the descriptor in bits. 0 -> Full size
akaze_descriptor_channels: 3  # Number of feature channels (1, 2, 3)
akaze_kcontrast_percentile: 0.7
akaze_use_isotropic_diffusion: no

# Params for HAHOG

hahog_peak_threshold: 0.00001
hahog_edge_threshold: 10
hahog_normalize_to_uchar: yes

# Params for general matching

lowes_ratio: 0.8  # Ratio test for matches
matcher_type: FLANN  # FLANN, BRUTEFORCE, or WORDS
symmetric_matching: yes  # Match symmetrically or one-way

# Params for FLANN matching

flann_algorithm: KMEANS  # Algorithm type (KMEANS, KDTREE)
flann_branching: 8  # See OpenCV doc
flann_iterations: 10  # See OpenCV doc
flann_tree: 8  # See OpenCV doc
flann_checks: 20  # Smaller -> Faster (but might lose good matches)

# Params for BoW matching

bow_file: bow_hahog_root_uchar_10000.npz
bow_words_to_match: 50  # Number of words to explore per feature.
bow_num_checks: 20  # Number of matching features to check.
bow_matcher_type: FLANN  # Matcher type to assign words to features

# Params for VLAD matching

vlad_file: bow_hahog_root_uchar_64.npz

# Params for matching

matching_gps_distance: 150  # Maximum gps distance between two images for matching
matching_gps_neighbors: 4  # Number of images to match selected by GPS distance. Set to 0 to use no limit (or disable if matching_gps_distance is also 0)
matching_time_neighbors: 0  # Number of images to match selected by time taken. Set to 0 to disable
matching_order_neighbors: 0  # Number of images to match selected by image name. Set to 0 to disable
matching_bow_neighbors: 0  # Number of images to match selected by BoW distance. Set to 0 to disable
matching_bow_gps_distance: 0  # Maximum GPS distance for preempting images before using selection by BoW distance. Set to 0 to disable
matching_bow_gps_neighbors: 0  # Number of images (selected by GPS distance) to preempt before using selection by BoW distance. Set to 0 to use no limit (or disable if matching_bow_gps_distance is also 0)
matching_bow_other_cameras: False  # If True, BoW image selection will use N neighbors from the same camera + N neighbors from any different camera. If False, the selection will take the nearest neighbors from all cameras.
matching_vlad_neighbors: 0  # Number of images to match selected by VLAD distance. Set to 0 to disable
matching_vlad_gps_distance: 0  # Maximum GPS distance for preempting images before using selection by VLAD distance. Set to 0 to disable
matching_vlad_gps_neighbors: 0  # Number of images (selected by GPS distance) to preempt before using selection by VLAD distance. Set to 0 to use no limit (or disable if matching_vlad_gps_distance is also 0)
matching_vlad_other_cameras: False  # If True, VLAD image selection will use N neighbors from the same camera + N neighbors from any different camera. If False, the selection will take the nearest neighbors from all cameras.
matching_graph_rounds: 0  # Number of rounds to run when running triangulation-based pair selection
matching_use_filters: False  # If True, removes static matches using ad-hoc heuristics
matching_use_segmentation: no  # Use segmentation information (if available) to improve matching

# Params for geometric estimation

robust_matching_threshold: 0.004  # Outlier threshold for fundamental matrix estimation as portion of image width
robust_matching_calib_threshold: 0.004  # Outlier threshold for essential matrix estimation during matching in radians
robust_matching_min_match: 20  # Minimum number of matches to accept matches between two images
five_point_algo_threshold: 0.004  # Outlier threshold for essential matrix estimation during incremental reconstruction in radians
five_point_algo_min_inliers: 20  # Minimum number of inliers for considering a two view reconstruction valid
five_point_refine_match_iterations: 10  # Number of LM iterations to run when refining relative pose during matching
five_point_refine_rec_iterations: 1000  # Number of LM iterations to run when refining relative pose during reconstruction
triangulation_threshold: 0.006  # Outlier threshold for accepting a triangulated point in radians
triangulation_min_ray_angle: 1.0  # Minimum angle between views to accept a triangulated point
triangulation_type: FULL  # Triangulation type: either considering all rays (FULL), or using a RANSAC variant (ROBUST)
resection_threshold: 0.004  # Outlier threshold for resection in radians
resection_min_inliers: 10  # Minimum number of resection inliers to accept it

# Params for track creation

min_track_length: 2 # Minimum number of features/images per track

# Params for bundle adjustment

loss_function: SoftLOneLoss  # Loss function for the ceres problem (see: http://ceres-solver.org/modeling.html#lossfunction)
loss_function_threshold: 1  # Threshold on the squared residuals. Usually cost is quadratic for smaller residuals and sub-quadratic above.
reprojection_error_sd: 0.004  # The standard deviation of the reprojection error
exif_focal_sd: 0.01  # The standard deviation of the exif focal length in log-scale
principal_point_sd: 0.01  # The standard deviation of the principal point coordinates
radial_distortion_k1_sd: 0.01  # The standard deviation of the first radial distortion parameter
radial_distortion_k2_sd: 0.01  # The standard deviation of the second radial distortion parameter
radial_distortion_k3_sd: 0.01  # The standard deviation of the third radial distortion parameter
radial_distortion_k4_sd: 0.01  # The standard deviation of the fourth radial distortion parameter
tangential_distortion_p1_sd: 0.01  # The standard deviation of the first tangential distortion parameter
tangential_distortion_p2_sd: 0.01  # The standard deviation of the second tangential distortion parameter
gcp_horizontal_sd: 0.01  # The default horizontal standard deviation of the GCPs (in meters)
gcp_vertical_sd: 0.1  # The default vertical standard deviation of the GCPs (in meters)
rig_translation_sd: 0.1  # The standard deviation of the rig translation
rig_rotation_sd: 0.1  # The standard deviation of the rig rotation
bundle_outlier_filtering_type: AUTO  # Type of threshold for filtering outliers: either fixed value (FIXED) or based on actual distribution (AUTO)
bundle_outlier_auto_ratio: 3.0  # For AUTO filtering type, projections with larger reprojection than ratio-times-mean are removed
bundle_outlier_fixed_threshold: 0.006  # For FIXED filtering type, projections with larger reprojection error after bundle adjustment are removed
optimize_camera_parameters: yes  # Optimize internal camera parameters during bundle
bundle_max_iterations: 100  # Maximum optimizer iterations.

retriangulation: yes  # Retriangulate all points from time to time
retriangulation_ratio: 1.2  # Retriangulate when the number of points grows by this ratio
bundle_analytic_derivatives: yes  # Use analytic derivatives or auto-differentiated ones during bundle adjustment
bundle_interval: 999999  # Bundle after adding 'bundle_interval' cameras
bundle_new_points_ratio: 1.2  # Bundle when the number of points grows by this ratio
local_bundle_radius: 3  # Max image graph distance for images to be included in local bundle adjustment
local_bundle_min_common_points: 50  # Minimum number of common points between images to be considered neighbors
local_bundle_max_shots: 30  # Max number of shots to optimize during local bundle adjustment

save_partial_reconstructions: no # Save reconstructions at every iteration

# Params for GPS alignment

use_altitude_tag: no  # Use or ignore EXIF altitude tag
align_method: auto  # orientation_prior or naive
align_orientation_prior: horizontal  # horizontal, vertical or no_roll
bundle_use_gps: yes  # Enforce GPS position in bundle adjustment
bundle_use_gcp: no  # Enforce Ground Control Point position in bundle adjustment
bundle_compensate_gps_bias: no  # Compensate GPS with a per-camera similarity transform

# Params for rigs

rig_calibration_subset_size: 15  # Number of rig instances to use when calibrating rigs
rig_calibration_completeness: 0.85  # Ratio of reconstructed images needed to consider a reconstruction for rig calibration
rig_calibration_max_rounds: 10  # Number of SfM tentatives to run until we get a satisfying reconstruction

# Params for image undistortion

undistorted_image_format: jpg  # Format in which to save the undistorted images
undistorted_image_max_size: 100000  # Max width and height of the undistorted image

# Params for depth estimation

depthmap_method: PATCH_MATCH_SAMPLE  # Raw depthmap computation algorithm (PATCH_MATCH, BRUTE_FORCE, PATCH_MATCH_SAMPLE)
depthmap_resolution: 640  # Resolution of the depth maps
depthmap_num_neighbors: 10  # Number of neighboring views
depthmap_num_matching_views: 6  # Number of neighboring views used for each depthmap
depthmap_min_depth: 0  # Minimum depth in meters. Set to 0 to auto-infer from the reconstruction.
depthmap_max_depth: 0  # Maximum depth in meters. Set to 0 to auto-infer from the reconstruction.
depthmap_patchmatch_iterations: 3  # Number of PatchMatch iterations to run
depthmap_patch_size: 7  # Size of the correlation patch
depthmap_min_patch_sd: 1.0  # Patches with lower standard deviation are ignored
depthmap_min_correlation_score: 0.1  # Minimum correlation score to accept a depth value
depthmap_same_depth_threshold: 0.01  # Threshold to measure depth closeness
depthmap_min_consistent_views: 3  # Min number of views that should reconstruct a point for it to be valid
depthmap_save_debug_files: no  # Save debug files with partial reconstruction results

# Other params

processes: 1  # Number of threads to use
read_processes: 4  # When processes > 1, number of threads used for reading images

# Params for submodel split and merge

submodel_size: 80  # Average number of images per submodel
submodel_overlap: 30.0  # Radius of the overlapping region between submodels
submodels_relpath: "submodels"  # Relative path to the submodels directory
submodel_relpath_template: "submodels/submodel_%04d"  # Template to generate the relative path to a submodel directory
submodel_images_relpath_template: "submodels/submodel_%04d/images"  # Template to generate the relative path to a submodel images directory
```

However, I ran into problems with large-scale scene reconstruction:

1. I first compared the time consumption of reconstructing some scenes (timing comparison screenshot attached).

The resolution here refers to feature_process_size_panorama: 2048 in the configuration file. Most of the time is spent on feature matching, and my images are not large. How can I speed up image matching? For example, which configuration options should I modify?
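Based on the comments in my config above, I am guessing at changes along these lines; the values below are only illustrative guesses, not settings I have validated:

```yaml
# Illustrative guesses for speeding up matching of 360 images (not validated):
feature_min_frames_panorama: 8000    # fewer keypoints per panorama -> fewer descriptors to match
feature_process_size_panorama: 1600  # extract features at a lower resolution
flann_checks: 16                     # smaller -> faster FLANN search (but might lose good matches)
matching_order_neighbors: 10         # restrict candidate pairs by image name order (my images are named sequentially)
```

Are these the right knobs, or is there a better way to limit the number of candidate pairs to match?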

2. I am reconstructing inside Docker. During reconstruction only a single CPU core is used. Can I accelerate this? My server has more than 50 cores.
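My guess is that this is controlled by the threading options at the end of my config (currently processes: 1). Would something like the sketch below be enough, or does the pipeline stay single-threaded inside Docker for another reason? The value is just a guess for a 50+ core machine:

```yaml
# Guess: raise the worker count ("Number of threads to use" per the config comment above).
processes: 16
read_processes: 4  # When processes > 1, number of threads used for reading images
```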

3. I looked at https://opensfm.org/docs/large.html about splitting the scene into submodels to speed up the process, but my images don't contain GPS information. I therefore tried to generate image_groups.txt manually to specify the clustering (a placeholder sketch of the file format is shown after the error log below). When I run:

```
bin/opensfm create_submodels /data/360_data/opensfm/scene_split_exp/space_partition
```

it fails with the following error:

```
2024-01-29 03:02:46,073 WARNING: Skipping 0055.jpg because of missing GPS
2024-01-29 03:02:46,073 WARNING: Skipping 0041.jpg because of missing GPS
2024-01-29 03:02:46,073 WARNING: Skipping 0096.jpg because of missing GPS
2024-01-29 03:02:46,073 WARNING: Skipping 0082.jpg because of missing GPS
2024-01-29 03:02:46,073 WARNING: Skipping 0257.jpg because of missing GPS
2024-01-29 03:02:46,074 WARNING: Skipping 0243.jpg because of missing GPS
2024-01-29 03:02:46,074 WARNING: Skipping 0294.jpg because of missing GPS
2024-01-29 03:02:46,074 WARNING: Skipping 0280.jpg because of missing GPS
2024-01-29 03:02:46,074 WARNING: Skipping 0323.jpg because of missing GPS
2024-01-29 03:02:46,074 WARNING: Skipping 0337.jpg because of missing GPS
2024-01-29 03:02:46,074 WARNING: Skipping 0109.jpg because of missing GPS
2024-01-29 03:02:46,074 WARNING: Skipping 0121.jpg because of missing GPS
2024-01-29 03:02:46,074 WARNING: Skipping 0135.jpg because of missing GPS
2024-01-29 03:02:46,074 WARNING: Skipping 0134.jpg because of missing GPS
2024-01-29 03:02:46,074 WARNING: Skipping 0120.jpg because of missing GPS
2024-01-29 03:02:46,074 WARNING: Skipping 0108.jpg because of missing GPS
2024-01-29 03:02:46,075 WARNING: Skipping 0336.jpg because of missing GPS
2024-01-29 03:02:46,075 WARNING: Skipping 0322.jpg because of missing GPS
2024-01-29 03:02:46,075 WARNING: Skipping 0281.jpg because of missing GPS
2024-01-29 03:02:46,075 WARNING: Skipping 0295.jpg because of missing GPS
2024-01-29 03:02:46,075 WARNING: Skipping 0242.jpg because of missing GPS
2024-01-29 03:02:46,075 WARNING: Skipping 0256.jpg because of missing GPS
2024-01-29 03:02:46,075 WARNING: Skipping 0083.jpg because of missing GPS
2024-01-29 03:02:46,075 WARNING: Skipping 0097.jpg because of missing GPS
2024-01-29 03:02:46,075 WARNING: Skipping 0040.jpg because of missing GPS
2024-01-29 03:02:46,075 WARNING: Skipping 0054.jpg because of missing GPS
2024-01-29 03:02:46,075 WARNING: Skipping 0068.jpg because of missing GPS
/source/OpenSfM/opensfm/actions/create_submodels.py:80: RuntimeWarning: invalid value encountered in divide
  centers /= centers_count
/usr/local/lib/python3.8/dist-packages/numpy/core/fromnumeric.py:3464: RuntimeWarning: Mean of empty slice.
  return _methods._mean(a, axis=axis, dtype=dtype,
/usr/local/lib/python3.8/dist-packages/numpy/core/_methods.py:192: RuntimeWarning: invalid value encountered in divide
  ret = ret.dtype.type(ret / rcount)
Traceback (most recent call last):
  File "/source/OpenSfM/bin/opensfm_main.py", line 33, in <module>
    main()  # pragma: no cover
  File "/source/OpenSfM/bin/opensfm_main.py", line 25, in main
    commands.command_runner(
  File "/source/OpenSfM/opensfm/commands/command_runner.py", line 38, in command_runner
    command.run(data, args)
  File "/source/OpenSfM/opensfm/commands/command.py", line 13, in run
    self.run_impl(data, args)
  File "/source/OpenSfM/opensfm/commands/create_submodels.py", line 13, in run_impl
    create_submodels.run_dataset(dataset)
  File "/source/OpenSfM/opensfm/actions/create_submodels.py", line 27, in run_dataset
    _add_cluster_neighbors(meta_data, data.config["submodel_overlap"])
  File "/source/OpenSfM/opensfm/actions/create_submodels.py", line 108, in _add_cluster_neighbors
    clusters = tools.add_cluster_neighbors(positions, labels, centers, max_distance)
  File "/source/OpenSfM/opensfm/large/tools.py", line 47, in add_cluster_neighbors
    reference = geo.TopocentricConverter(reflla[0], reflla[1], 0)
IndexError: invalid index to scalar variable.
```

How should this problem be solved?
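For reference, the image_groups.txt I wrote by hand follows the one-line-per-image format described in the large-scale docs linked above; the image and group names below are placeholders rather than my actual files:

```
0001.jpg cluster_a
0002.jpg cluster_a
0041.jpg cluster_b
0042.jpg cluster_b
```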

Very much looking forward to your reply!

Bin-ze · Jan 29 '24