
Try to fix georeferencing accuracy

originlake opened this issue 8 months ago · 52 comments

This issue was discussed earlier here https://community.opendronemap.org/t/model-accuracy-issues-related-to-geocoord-exports/13054/1

The horizontal error can reach several meters when the GCPs are a few kilometers away from the dataset center. The change would slow down processing a bit, but I think errors this large shouldn't be acceptable. My proposed fix is:

  1. Skip exporting geocoords in the opensfm stage, so the generated files will all be in the topocentric coordinate system.
  2. In the following stages, openmvs, filter_pointcloud, odm_meshing, and odm_texturing will all generate assets in the topocentric coordinate system, with _topocentric included in their names. openmvs and odm_texturing require the camera parameters to remain unchanged to guarantee accuracy, so all these stages have to stay topocentric.
  3. In the odm_georeferencing stage, convert all the topocentric assets needed in the final output to the UTM coordinate system; the conversion should be done point by point instead of with a rigid affine transformation matrix.
  4. The orthophoto and DSM will be generated from the assets in the UTM coordinate system.

The current implementation uses pdal-python to read and write the points, and numpy to do bulk conversions. For the obj file, I simply followed the implementation in the alignment method. For efficiency, newer versions of the PROJ lib support a topocentric conversion method, so we could define a PROJ pipeline and use pdal alone to do the conversion: https://proj.org/en/stable/operations/conversions/topocentric.html#geocentric-to-topocentric-conversion. This should be faster.
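For illustration, such a pipeline can also be driven from Python via pyproj. This is only a sketch, not the PR's code; the topocentric origin and UTM zone below are made-up placeholder values, and it requires PROJ >= 8.0:

```python
import numpy as np
from pyproj import Transformer

# Hypothetical topocentric origin (the dataset center) and target UTM zone.
ref_lon, ref_lat, ref_h = -104.0, 32.12, 900.0

# ENU (topocentric) -> ECEF (inverse topocentric) -> geodetic (inverse cart)
# -> UTM zone 13N. Requires PROJ >= 8.0 for +proj=topocentric.
pipeline = (
    "+proj=pipeline "
    f"+step +inv +proj=topocentric +ellps=WGS84 "
    f"+lon_0={ref_lon} +lat_0={ref_lat} +h_0={ref_h} "
    "+step +inv +proj=cart +ellps=WGS84 "
    "+step +proj=utm +zone=13 +ellps=WGS84"
)
tr = Transformer.from_pipeline(pipeline)

# Bulk, point-by-point conversion of an (N, 3) array of E/N/U coordinates.
enu = np.array([[0.0, 0.0, 0.0], [1500.0, -2000.0, 25.0]])
e, n, h = tr.transform(enu[:, 0], enu[:, 1], enu[:, 2])
```

The same pipeline string can be reused for the inverse direction (UTM back to topocentric) by passing `direction="INVERSE"` to `transform`.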

Currently I have implemented converting odm_georeferenced_model.laz, point_cloud.ply, and odm_textured_model_geo.obj. I still need to check the 2.5D meshing related files as I don't normally use them. Let me know if any other files should be included.

You can find some numerical error calculations in the post above. Below is a comparison between orthophotos processed both ways; the GCP is around 1 km away from the dataset center.

Rigid transform (current way): [screenshots: 2025-04-02 17-19-19, 2025-04-02 18-28-09]

Point-to-point transform (proposed fix): [screenshot: 2025-04-02 17-19-24]

originlake avatar Apr 02 '25 22:04 originlake

Hey, thanks for the PR @originlake! This is some good work on addressing the accuracy issue.

I'm concerned about introducing a topocentric version of all assets. Would it not be possible to just do the point-by-point transformation in export_geocoords (and perhaps run bundle adjustment afterwards with cameras, GCPs, and points fixed, to address small changes in camera rotations/angles, assuming that we can only transform coordinates and not rotations)?

pierotofy avatar Apr 02 '25 23:04 pierotofy

I think the 3D coordinate system with a UTM zone is not an accurate 3D Cartesian coordinate system. The horizontal part is a transverse Mercator projection whose warping differs from point to point, so 1 meter on the map may not be exactly 1 meter in the real world; and the vertical part is the ellipsoidal height with respect to the ellipsoid surface, which is also not a simple offset from the topocentric coordinate system. I think doing bundle adjustment in such a coordinate system could lead to worse camera parameter accuracy and hence affect the dense point cloud accuracy. After all, the 3D model accuracy itself is good; it's just that the coordinate system conversion is not done accurately.

originlake avatar Apr 02 '25 23:04 originlake

I'm not sure to what degree this would affect accuracy, but I have a feeling it might possibly be better. Currently the transformation is done post-SfM/MVS, so while the absolute accuracy is higher, your relative accuracy is probably a bit lower (the code warps the models). Bundle adjustment might be able to find an optimal middle ground between the two. It's something I would need to test.

pierotofy avatar Apr 03 '25 04:04 pierotofy

IMHO, the problem is not the 3D model being warped and hence inaccurate. The 3D reconstruction is designed to be done in a topocentric coordinate system from the beginning because it is an isotropic space. A projected coordinate system is not suitable because it is non-isotropic. An extreme example is the notorious Mercator effect: Greenland appears larger than the USA in the Web Mercator coordinate system. If we had a dataset with a drone taking photos covering the whole earth, we definitely couldn't use Web Mercator coordinates to do the bundle adjustment. https://en.wikipedia.org/wiki/Web_Mercator_projection#/media/File:Web_maps_Mercator_projection_SW.jpg

This doesn't mean a coordinate in the Web Mercator CRS is wrong; it's just a reference we can use to find the actual location on earth, and when accurate 3D-space calculation is needed, we can always convert it back to a topocentric coordinate system. Usually, in order to get good measurement accuracy, surveyors choose a coordinate system whose center is defined close to the survey region; some even create a custom CRS with its origin at their site center for the best accuracy. As far as I can tell, they don't use topocentric coordinates because they are inconvenient for mapping on the earth's surface.

The same goes for geo coordinate exporting. At this stage, it's not about making the model accurate; it's about doing the right conversion to the target CRS so we can still use it to accurately find the exact spot on the earth. If we need to export UTM, we do the correct conversion to UTM; if we need to export to a US state plane, we do the correct conversion to that state plane. We don't need to do bundle adjustment each time we export to a different coordinate system. The point's location on earth is always unique, but its coordinate representation differs between coordinate systems.

Back to the problem: the 3D model itself is not warped after the conversion; its exact location on earth remains the same. It's just being represented in a warped coordinate system. Doing measurements in such a CRS is inaccurate, but it's the surveyor's responsibility to choose a more accurate CRS if needed.

Anyway, this is just theory; datasets of normal scale would probably not show large errors. We can compare the reprojection errors to see the difference.
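To make the "a meter on the map is not a meter on the ground" point concrete, pyproj can report the local scale factor of a projected CRS. The location below is just an example, not a point from this dataset:

```python
from pyproj import Proj

# UTM zone 13N; its central meridian is at 105 deg W, where the scale
# factor is 0.9996 by definition.
utm = Proj("EPSG:32613")
f = utm.get_factors(longitude=-105.0, latitude=32.0)

# A 1000 m distance measured along the meridian in UTM coordinates
# corresponds to roughly 1000 / 0.9996, about 1000.4 m, on the ellipsoid.
print(f.meridional_scale)  # ~0.9996 at the central meridian
```

Moving east or west of the central meridian, the scale factor grows, so the distortion varies from point to point and cannot be removed by a single rigid transform.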

originlake avatar Apr 03 '25 14:04 originlake

> Back to the problem: the 3D model itself is not warped after the conversion; its exact location on earth remains the same. It's just being represented in a warped coordinate system. Doing measurements in such a CRS is inaccurate, but it's the surveyor's responsibility to choose a more accurate CRS if needed.
>
> Anyway, this is just theory; datasets of normal scale would probably not show large errors. We can compare the reprojection errors to see the difference.

Yeah, bit of a classic problem. Actual reprojection errors are (probably) sufficiently small to be meaningless on most projects of most sizes. But, folks who care will want to know that the math is done correctly so they don't have to worry about unanticipated edge cases.

smathermather avatar Apr 03 '25 18:04 smathermather

From the Community, I mostly only see this issue raised by folks who have/use RTK GCPs, or have known survey markers they are comparing to and expect survey-grade accuracy with RTK-enabled input data.

So, a cross-section of industry, commercial, and academic folks (mostly), and the occasional very savvy home-gamer.

I think there is potential for a lot of benefit here.

Saijin-Naib avatar Apr 03 '25 18:04 Saijin-Naib

Indeed; does anyone have a small dataset with GCPs that clearly triggers this problem?

pierotofy avatar Apr 03 '25 18:04 pierotofy

https://community.opendronemap.org/t/call-for-datasets-gcps-drifting-at-boundaries/23934?u=saijin_naib

Saijin-Naib avatar Apr 03 '25 18:04 Saijin-Naib

We have seen the data, so let's hope some of it is readily sharable.

Saijin-Naib avatar Apr 03 '25 18:04 Saijin-Naib

@originlake I've looked more into this. I've implemented Kabsch's algorithm here (https://github.com/pierotofy/OpenSfM/commit/7366eecfa6c60bb55acbec07bdfd9adfe0c56d8c) which instead computes a rigid transformation by sampling points from the area covering the model (either via GCP locations, or a bbox computed from the camera poses).
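For readers following along, Kabsch's algorithm computes the least-squares rigid fit (rotation plus translation) between two point sets. Below is a minimal numpy sketch of the core fit, not the actual OpenSfM code linked above:

```python
import numpy as np

def kabsch(src, dst):
    """Best-fit rigid transform (R, t) mapping src -> dst in a
    least-squares sense; src and dst are (N, 3) arrays."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - cs).T @ (dst - cd)
    U, S, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the fitted rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

The limitation discussed in this thread is inherent to the method: it returns a single global rotation and translation, so any non-linear, point-dependent distortion between the two frames remains as residual error.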

I tested this on existing datasets with and without GCPs and it seems to work on smaller areas, but I don't have a large dataset to test with, so perhaps you can try replacing opensfm/actions/export_geocoords.py with my copy and report back whether it improves the results for your dataset?

If this works, then there's no need to keep separate models in topocentric coordinates.

pierotofy avatar Apr 03 '25 22:04 pierotofy

OK, I'll test your updates and report back. Unfortunately I can't share my dataset

originlake avatar Apr 03 '25 22:04 originlake

Also do note I might be wrong here, but it's difficult to try things without a dataset to test.

pierotofy avatar Apr 03 '25 23:04 pierotofy

It's hard to show without zooming around the map; below are some screenshots around the markers. The error still exists. The error vectors, from the actual marker on the ground to the GCP coordinate point, generally point toward the center of the dataset. There is scaling involved as well, so a rigid transform cannot solve the problem. The conversion from a topocentric coordinate system to a projected coordinate system is non-linear, so an affine (linear) transformation cannot solve it accurately either; it can be approximated to minimize a target loss, but probably still can't be precise enough.

[screenshots: 2025-04-03 21-54-11, 21-55-26, 21-55-44, 21-56-00, 21-56-15, 21-56-32]

originlake avatar Apr 04 '25 02:04 originlake

I created a colab notebook that can simulate the issue. The idea is to create a plane with a grid of coordinates in the topocentric coordinate system; easting and northing range from -2000 m to 2000 m, and elevation is set to 1000 m. Every point in the grid is converted to UTM exactly, representing the most accurate conversion, as a baseline. Then different methods (the Kabsch algorithm is used) can be applied to compute the errors against the baseline.

At the bottom, I plotted the errors between the two conversions, separating horizontal error and elevation error for better visualization. You can see that both the horizontal and vertical errors are non-uniform, and the error vectors' directions differ.

You can check it here; feel free to modify it: https://colab.research.google.com/drive/1J3VBcmLbZ4YqVTrNXSisbjDr41E75ek3#scrollTo=lEMSfsirnm8T
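A condensed sketch of that experiment (not the notebook itself; it assumes PROJ >= 8 for the topocentric pipeline, and the origin and zone are arbitrary): fit the best affine map from topocentric to UTM over the same +-2000 m grid, and look at the residuals against the exact point-by-point conversion. Since an affine map subsumes any rigid transform, these residuals are a lower bound on the rigid-fit error:

```python
import numpy as np
from pyproj import Transformer

# Exact topocentric (ENU) -> UTM conversion; arbitrary origin, zone 13N.
tr = Transformer.from_pipeline(
    "+proj=pipeline "
    "+step +inv +proj=topocentric +ellps=WGS84 "
    "+lon_0=-105 +lat_0=32 +h_0=1000 "
    "+step +inv +proj=cart +ellps=WGS84 "
    "+step +proj=utm +zone=13 +ellps=WGS84"
)

# Grid of ENU points: E/N in [-2000, 2000] m, U = 0 (elevation 1000 m,
# carried by the h_0 of the topocentric origin).
e, n = np.meshgrid(np.linspace(-2000, 2000, 21), np.linspace(-2000, 2000, 21))
enu = np.column_stack([e.ravel(), n.ravel(), np.zeros(e.size)])
utm = np.column_stack(tr.transform(enu[:, 0], enu[:, 1], enu[:, 2]))

# Best-fit affine map ENU -> UTM by least squares.
A = np.column_stack([enu, np.ones(len(enu))])
coef, *_ = np.linalg.lstsq(A, utm, rcond=None)
resid = np.linalg.norm(A @ coef - utm, axis=1)
print(f"max affine residual: {resid.max():.3f} m")
```

The residuals grow toward the grid edges (earth curvature alone contributes on the order of d^2 / 2R to the height difference, a few decimeters at 2 km), which no single linear transform can absorb.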

originlake avatar Apr 04 '25 02:04 originlake

Thanks for testing and the colab. It makes sense.

Keeping everything in topocentric seems like the way to go. I do have some concerns around performance (in particular for the PDAL pipeline stuff) and the extra 3D models, especially for multispectral datasets (which are currently missing) and the 2.5D-only workflows where a 3D model is not built. There are also disk usage concerns for downstream applications like NodeODM, which would currently include both textured models (topocentric and UTM), which is no good. Plus careful review and tons of testing, which changes like this need.

As a heads up, it might take me a while to take an active role in doing all of the above, due to other work priorities, but I do think we need to include this fix because it's critical for accurate results. :pray:

pierotofy avatar Apr 04 '25 07:04 pierotofy

Understandable. I was unsure about whether to keep a standalone topocentric file or simply overwrite it, but in order to make rerun work properly, this is the only way I could think of. Maybe you can come up with a better idea.

For the textured model conversion, only the obj file needs to be converted. The implementation I followed from the alignment feature simply reads the vertices and converts them line by line, which should be a lot less efficient than using numpy to convert all vertices at once. We should probably introduce an efficient parser for better performance.
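A sketch of what such a vectorized OBJ pass could look like (a hypothetical helper, not the PR's code; it assumes plain `v x y z` vertex lines and would need extending for vertex colors or other extras):

```python
import numpy as np

def transform_obj_vertices(in_path, out_path, transform):
    """Rewrite only the 'v' lines of an OBJ file, converting all vertices
    in one vectorized call instead of line by line. `transform` maps an
    (N, 3) array of coordinates to an (N, 3) array."""
    with open(in_path) as f:
        lines = f.readlines()
    # 'v ' matches vertex positions only (not 'vt'/'vn' lines).
    idx = [i for i, line in enumerate(lines) if line.startswith("v ")]
    if idx:
        verts = np.array([lines[i].split()[1:4] for i in idx], dtype=float)
        converted = transform(verts)
        for i, (x, y, z) in zip(idx, converted):
            lines[i] = f"v {x:.6f} {y:.6f} {z:.6f}\n"
    with open(out_path, "w") as f:
        f.writelines(lines)
```

Faces, texture coordinates, and material references pass through untouched, so the mesh topology is preserved while the geometry moves to the new frame.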

For the point cloud conversion, if we can have a newer version of the PROJ lib (version 8.0 at least), this new feature combined with pdal filters.projpipeline should allow us to get rid of the Python part of the conversion and improve speed.
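That combination might look something like the pipeline below. This is only a sketch: the filenames, origin values, and output CRS are placeholders, and the `coord_op` option name is taken from my reading of the PDAL filters.projpipeline docs, so double-check it against the PDAL version in use:

```python
import json

# Hypothetical PDAL pipeline: convert a topocentric LAZ to UTM in one
# pass, entirely inside PDAL/PROJ (needs PDAL built against PROJ >= 8.0
# for +proj=topocentric). lon_0/lat_0/h_0 are the topocentric origin.
pipeline = {
    "pipeline": [
        "odm_georeferenced_model_topocentric.laz",
        {
            "type": "filters.projpipeline",
            "coord_op": (
                "+proj=pipeline "
                "+step +inv +proj=topocentric +ellps=WGS84 "
                "+lon_0=-104.0 +lat_0=32.12 +h_0=900 "
                "+step +inv +proj=cart +ellps=WGS84 "
                "+step +proj=utm +zone=13 +ellps=WGS84"
            ),
        },
        {
            "type": "writers.las",
            "filename": "odm_georeferenced_model.laz",
            "a_srs": "EPSG:32613",
        },
    ]
}
print(json.dumps(pipeline, indent=2))
```

Saved as e.g. `convert.json`, this could be run with `pdal pipeline convert.json`, avoiding the round trip through numpy entirely.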

Let me know what I can help with.

originlake avatar Apr 04 '25 14:04 originlake

Updated the code to convert all geomodel objs; the alignment part provided a good reference for me. I still need to do the GLTF conversion, but I think we should probably move GLTF exporting to somewhere after the georeferencing stage; I noticed that the alignment section doesn't align the GLTF file either.

originlake avatar Apr 04 '25 22:04 originlake

I would expect a lot of things to need re-arrangement (and that's not a trivial thing to do, because there are subtle things that require assets to be generated in a certain order, depending on flags). That's why I was hoping that transforming the coordinate system plus running bundle adjustment would work; it would make the fix a lot easier.

When I have time (and a dataset) I think I will try this approach. I know it's not ideal in theory, but we're talking about differences of centimeters (over a large area) and I'm just plain curious to see what would happen in practice.

pierotofy avatar Apr 05 '25 23:04 pierotofy

Note to self: all calls to opensfm_reconstruction_average_gsd would break with this PR because it assumes a Z-up axis, but that can only be guaranteed after georeferencing.

Also possible memory/disk issues with 2.5D mesh generation, as the DEM generation assumes georeferenced units in order to not explode.

Edit: same for the spacing estimation in odm_filterpoints.

pierotofy avatar Apr 09 '25 18:04 pierotofy

I have a "small" dataset that I can probably share that shows this error. It is just under 2000 images. [screenshot: gcp]

nbnielsen avatar Apr 20 '25 03:04 nbnielsen

I have a dataset where I captured 4 rugby fields two ways: once just flying the programmed route, and once with RTK. Both sets have been processed with GCPs and the results are pretty close. I can provide all or some of the data; just let me know what you want and how much of it.

VPA-Dave avatar May 26 '25 22:05 VPA-Dave

Thanks. Is this a dataset or set of datasets that exhibits behavior relevant to the discussion? If so, the smallest possible subset that does would be useful.

smathermather avatar Jun 02 '25 15:06 smathermather

I'd be happy to create a set of data while I'm out in the field. Just let me know how many images, what GSD, area, etc. you would like.

jasonowsley avatar Jun 15 '25 12:06 jasonowsley

> I'd be happy to create a set of data while I'm out in the field. Just let me know how many images, what GSD, area, etc. you would like.

Ideally, the smallest dataset you can get (ideally < 50-100 images) spanning the largest area possible (thus the highest altitude legally allowed), with rectangular/square coverage (again ideally, no corridor patterns).

Also 5-8 GCPs.

pierotofy avatar Jun 15 '25 14:06 pierotofy

Here is a link to a dataset with the gcp file

https://u.pcloud.link/publink/show?code=kZUwVu5Z7McgcJKgvCzGvbxAlMoFfSRVdaE7

jasonowsley avatar Jun 16 '25 21:06 jasonowsley

Nice! I noticed the GCP file doesn't show the association to the pictures.

EPSG:4326
GCP1 -104.007793951133 32.1224660000333 903.75902
GCP2 -104.0077919565 32.1219366718333 903.70224
...

It's missing the im_x im_y image_name fields? https://docs.opendronemap.org/gcp/#gcp-file-format

pierotofy avatar Jun 17 '25 19:06 pierotofy

Yeah, I just made a reference file to input into WebODM’s georeferencer. Figured you'd want to align the GCPs in your workflow. I’d be happy to map it completely if you like.

jasonowsley avatar Jun 17 '25 20:06 jasonowsley

I'm just unsure which point would correspond to which marker in the images, but any help would be appreciated!

pierotofy avatar Jun 17 '25 20:06 pierotofy

OK, I've added a fully referenced GCP file: one in the root and one in the photo directory.

jasonowsley avatar Jun 18 '25 02:06 jasonowsley