OpenSfM
[QUESTION] Project shots over flat plane.
Hello, and thank you for this project and all your efforts on it. :slightly_smiling_face:
Could you please shed some light on how to project all the images onto a plane? More precisely, I have some drone pictures taken at high altitude (so perspective distortion is negligible), and I would like to put them all together like it can be seen in the viewer. I'm setting `use_altitude_tag` to `True` in the config during the reconstruction.
My code so far is:
```python
import numpy as np

from opensfm.dataset import DataSet
from opensfm.geo import TopocentricConverter
from opensfm.pygeometry import Camera, Pose
from opensfm.pymap import Shot
from opensfm.types import Reconstruction

project_path = "</path/to/project>"
data = DataSet(project_path)
rec: Reconstruction = data.load_reconstruction()[0]
ref: TopocentricConverter = rec.reference

# Plane will be created at the median altitude (z position) of the 3D points
points = rec.get_points().values()
plane_altitude: float = np.median([p.coordinates[2] for p in points])

for shot_id in rec.get_shots():
    shot: Shot = rec.shots[shot_id]
    cam: Camera = shot.camera
    pose: Pose = shot.pose

    # Scale
    x, y, z = pose.get_origin()
    z_scale = z - plane_altitude

    # Rotation
    # I think the main problem is here (?):
    # I ignore the Z axis because I suppose the shot is "almost" facing down
    rot_matrix = pose.get_rotation_matrix()[:2, :2].T
    rot_matrix /= z_scale  # Set scale at floor level so images have the correct size (is it correct?)

    # Translation
    lat, lon, alt = ref.to_lla(x, y, z)  # Convert the topocentric origin to GPS coordinates

    # Internal camera parameters
    K = cam.get_K()

    # Create affine to georeference the image
    affine = np.eye(3)
    affine[:2, :2] = rot_matrix            # Insert rotation and scale
    affine[:2, 2] = np.array((lat, lon))   # Insert translation
    K_inv = np.linalg.inv(K)
    affine = affine.dot(K_inv)             # Account for the camera intrinsics

    ## Follows the code to insert the affine into the TIF ##
```
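For the last step (omitted above), a hypothetical sketch of writing the image and its affine into a GeoTIFF; rasterio, the output filename, and the use of `data.load_image` are my own assumptions and not part of the code above:
```python
# Hypothetical sketch of the "insert affine into TIF" step using rasterio.
# rasterio uses a 2x3 affine mapping pixel (col, row) -> world (x, y),
# so only the top two rows of the 3x3 `affine` built above are used.
import rasterio
from rasterio.transform import Affine

image = data.load_image(shot_id)  # (H, W, 3) RGB array
transform = Affine(*affine[0], *affine[1])
with rasterio.open(
    f"{shot_id}.tif",          # hypothetical output name
    "w",
    driver="GTiff",
    height=image.shape[0],
    width=image.shape[1],
    count=3,
    dtype=str(image.dtype),
    crs="EPSG:4326",           # assuming the translation above is in lat/lon
    transform=transform,
) as dst:
    dst.write(image.transpose(2, 0, 1))  # rasterio expects (bands, rows, cols)
```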
Is this approximation correct? Once this is solved, I would really like to help improve the OpenSfM docs. Thank you in advance.
Hi @Matesanz ,
Is the example image from the Mapillary JS viewer? If yes, why don't you just copy the "visualization" code from there? I don't really get what you want to achieve, but if you just want a PCL (point cloud) of the images on a plane, you could try the following (see the sketch after this list):
- iterate through the shots
- read the image of each shot
- for each point in the image (x,y):
- unproject it to a 3D point that is `altitude` or some arbitrary `z` away: `np.linalg.inv(K).dot([x,y,1])*altitude`. You now have a PCL where each point is `altitude` away from the camera, but you're still in the camera coordinate system
- transform the 3D PCL to the world using `shot.pose.get_cam_to_world().dot(np.hstack([pt3d,1]))`. Now you have the planar PCL in the world.
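A minimal, untested sketch of those steps; the project path, the `altitude` value, the pixel stride, and the `get_K_in_pixel_coordinates` call are assumptions:
```python
# Rough sketch of the steps above: unproject each (sub-sampled) pixel to a point
# `altitude` away from the camera, then move it into world coordinates.
import numpy as np
from opensfm.dataset import DataSet

data = DataSet("/path/to/project")   # placeholder path
rec = data.load_reconstruction()[0]
altitude = 30.0                      # arbitrary distance from each camera to the plane

points_world = []
for shot_id in rec.shots:
    shot = rec.shots[shot_id]
    image = data.load_image(shot_id)
    h, w = image.shape[:2]
    K_inv = np.linalg.inv(shot.camera.get_K_in_pixel_coordinates(w, h))
    cam_to_world = shot.pose.get_cam_to_world()  # 4x4 camera-to-world transform
    for y in range(0, h, 50):        # subsample pixels to keep the cloud small
        for x in range(0, w, 50):
            pt3d = K_inv.dot([x, y, 1.0]) * altitude                  # camera coordinates
            pt_world = cam_to_world.dot(np.hstack([pt3d, 1.0]))[:3]   # world coordinates
            points_world.append(pt_world)

points_world = np.array(points_world)  # planar point cloud in world coordinates
```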
Best, Fabian
Thank you so much for the help.
I will try your algorithm out and let you know. :slightly_smiling_face:
I would like to give back to you (and the community) by improving the docs that OpenSfM already has. I knew a little about the topic and still had to struggle a bit with the code, and I think my experience could help others who want to use OpenSfM. Just let me know how to contribute and I will be pleased to do it. :ok_hand:
Kind regards.
I've seen that the depthmap stage generates a bunch of files for every image.
If I load the clean version, it comes with 3 different fields: `plane`, `depth`, and `score`.
Is `plane` supposed to be the x, y, z values in camera coordinates for every pixel? I've plotted the z values and the result doesn't look like the one stored in the `depth` field.
(Attached: screenshots comparing the z channel of `plane` with `depth`.)
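For reference, a minimal sketch of loading and plotting those fields; the npz path below is a placeholder, and its exact location depends on the OpenSfM version:
```python
# Sketch of inspecting the clean depth map fields. The file path is a placeholder;
# depending on the OpenSfM version the clean depth maps live under depthmaps/ or
# undistorted/depthmaps/ in the project folder.
import numpy as np
import matplotlib.pyplot as plt

d = np.load("/path/to/project/undistorted/depthmaps/IMAGE.jpg.clean.npz")
depth, plane, score = d["depth"], d["plane"], d["score"]

plt.subplot(1, 2, 1)
plt.imshow(plane[..., 2])   # z channel of `plane`
plt.title("z in plane")
plt.subplot(1, 2, 2)
plt.imshow(depth)
plt.title("depth")
plt.show()
```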
@fabianschenk Could you please tell me what they represent? Thanks in advance.
A bit off-topic perhaps, but you could use a tool such as `ddb geoproj` to do this: https://docs.dronedb.app/commands/geoproj.html
(disclaimer: I wrote the tool).
Hello @pierotofy and thank you so much for the answer. (btw: I admire you so much for your work at ODM.)
I'll check it out :)