fg2018-competition
Using evaluation with own scans
Hi, Thank you for sharing this evaluation with the community. I am wondering if it is possible to use the code with my own scans, i.e. to evaluate predicted meshes against my own ground truth. I have the .obj files and computed the landmarks manually. I tried to adapt the code to process multiple files, but I'm running into an issue: Segmentation fault (core dumped). Here is the adapted code:
import os

import pymesh
import compute_vertices_to_mesh_distances as fg

# Folder paths for ground-truth scans, predicted meshes, and landmarks
groundtruth_folder = 'groundtruth_scans'
groundtruth_landmarks_folder = 'groundtruth_landmarks'
predicted_meshes_folder = 'predicted_meshes'
predicted_landmarks_folder = 'predicted_landmarks'

# Iterate over all ground-truth scans
for groundtruth_file in os.listdir(groundtruth_folder):
    # Skip anything that is not a mesh file (e.g. hidden or stray files)
    if not groundtruth_file.endswith('.obj'):
        continue

    # Load the ground-truth scan
    groundtruth_scan = pymesh.load_mesh(os.path.join(groundtruth_folder, groundtruth_file))

    # Derive the predicted file names from the ground-truth file name
    stem = os.path.splitext(groundtruth_file)[0]
    pred_mesh_file = f'predicted_{stem}.obj'
    pred_landmarks_file = f'predicted_{stem}_landmarks.txt'

    # Load the predicted mesh
    predicted_mesh = pymesh.load_mesh(os.path.join(predicted_meshes_folder, pred_mesh_file))

    # Read the predicted landmarks, one "[x, y, z]" line per landmark
    predicted_landmark_points = []
    with open(os.path.join(predicted_landmarks_folder, pred_landmarks_file), 'r') as file:
        for line in file:
            coords = line.strip().strip('[],').split(',')
            landmark = [float(coord.strip()) for coord in coords]
            predicted_landmark_points.append(landmark)

    # Read the ground-truth landmark annotations from the separate folder
    groundtruth_landmark_file = f'{stem}_landmarks.lnd'
    groundtruth_landmark_points = fg.read_groundtruth(
        os.path.join(groundtruth_landmarks_folder, groundtruth_landmark_file))

    # Compute the distances and save them to a file
    out_file = f'{os.path.splitext(pred_mesh_file)[0]}_{os.path.splitext(pred_landmarks_file)[0]}_distances.txt'
    fg.compute_vertices_to_mesh_distances(groundtruth_scan.vertices, groundtruth_landmark_points,
                                          predicted_mesh.vertices, predicted_mesh.faces,
                                          predicted_landmark_points, out_file)
    print(f'Computed and saved distances for {groundtruth_file}')
Hi Natalia,
Thanks for your interest in the dataset/benchmark. Yes, you can definitely adapt the script to run on your own scans and ground truth.
You say you're getting a segmentation fault. Python itself segfaults quite rarely in my experience, and when it does, something is going awry at the system level; regular Python code shouldn't normally segfault. It could have something to do with reading the files - maybe pymesh.load_mesh segfaults on one of your scans, or it's something to do with reading the landmarks.
What I would do here is run your script in a debugger (VSCode or PyCharm) and step through it line by line. Then it'll be easy to catch on exactly which line it segfaults, and from there it should hopefully be possible to work out why, and how to fix it.
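If a debugger is inconvenient, here is a minimal alternative sketch: Python's built-in faulthandler module prints the interpreter's traceback when the process crashes, and printing each path before loading narrows a crash down to a specific file. The folder name below is an assumption matching the layout in the script above:

import faulthandler
import os

import pymesh

faulthandler.enable()  # dump the Python traceback on SIGSEGV and similar crashes

groundtruth_folder = 'groundtruth_scans'  # assumed layout, as in the script above

for name in sorted(os.listdir(groundtruth_folder)):
    path = os.path.join(groundtruth_folder, name)
    print(f'Loading {path} ...', flush=True)
    pymesh.load_mesh(path)  # if this crashes, the last printed path is the culprit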
Let me know how it goes!
Best wishes, Patrik
Hi Patrik,
Thank you for your prompt response! I've debugged the issue as you suggested, and indeed there were errors in the landmark files that were causing the segmentation fault during the computation.
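For anyone running into the same problem, a minimal sketch of such a sanity check, assuming the bracketed "[x, y, z]" line format parsed in the script above; the validate_landmark_file helper is illustrative, and the expected count of 7 matches the benchmark's 7 landmarks:

import os

predicted_landmarks_folder = 'predicted_landmarks'  # as in the script above

def validate_landmark_file(path, expected_landmarks=7):
    # Each non-empty line should parse to exactly three floats.
    with open(path) as f:
        lines = [line for line in f if line.strip()]
    if len(lines) != expected_landmarks:
        print(f'{path}: expected {expected_landmarks} landmarks, found {len(lines)}')
    for i, line in enumerate(lines, start=1):
        coords = line.strip().strip('[],').split(',')
        try:
            values = [float(c.strip()) for c in coords]
        except ValueError:
            print(f'{path}, line {i}: cannot parse {line.strip()!r}')
            continue
        if len(values) != 3:
            print(f'{path}, line {i}: expected 3 coordinates, got {len(values)}')

for name in sorted(os.listdir(predicted_landmarks_folder)):
    validate_landmark_file(os.path.join(predicted_landmarks_folder, name))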
After correcting the landmarks for both the predicted meshes and my own scans, the problem is resolved. However, I would like to ask how you obtained the 7 landmarks and their corresponding positions on the meshes. Was this done with an automated algorithm? I'm particularly interested in whether there's an established approach for accurately identifying these landmarks on meshes, to reduce potential errors, and whether such a method could be run in batch mode over multiple files.
Once again, thank you, Patrik, for your assistance and for sharing this valuable work with the community.
Natalia.