opendr
Fix detectron2 installation and deprecated skimage functions
This PR updates the toolkit to point to a specific (and tested) commit of detectron2 in order to avoid breaking the installation when changes occur in the detectron2 repository. This PR also removes the use of a deprecated skimage function (used in human model generation) and replaces it with a supported one.
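For context, pinning to a commit typically looks like the line below. This is an illustrative sketch only; the SHA placeholder is not the actual commit used by this PR.

```shell
# Hypothetical pin: install detectron2 at a fixed, tested commit
# (replace <tested-commit-sha> with the real SHA; this is a placeholder).
pip install 'git+https://github.com/facebookresearch/detectron2.git@<tested-commit-sha>'
```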
The single demonstration grasp issue seems to have been resolved (it was caused by an updated version of detectron2). However, there also seems to be one other issue with a simulation tool, which I am looking into. I am converting this to a draft until that is also sorted out.
@passalis In newer scikit-image versions, this function is deprecated and needs to change to `verts, faces, normals, values = measure.marching_cubes(sdf, 0.5)` https://github.com/opendr-eu/opendr/blob/3a9a119f2f4391c3d9ba5f2f998400ffb61f72cf/src/opendr/simulation/human_model_generation/utilities/PIFu/lib/mesh_util.py#L45
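For reference, here is a minimal, self-contained sketch of the API change (assumes scikit-image >= 0.19; the toy `sdf` volume and all names are illustrative, not the toolkit's actual data):

```python
import numpy as np
from skimage import measure

# Toy volume: a sphere of radius 6 in a 16^3 grid, squashed to (0, 1)
# so the 0.5 level set is the sphere surface (stands in for a real SDF).
grid = np.mgrid[:16, :16, :16].astype(float)
dist = np.sqrt(((grid - 7.5) ** 2).sum(axis=0))
sdf = 1.0 / (1.0 + np.exp(dist - 6.0))

# Old (deprecated, removed in newer releases):
#   verts, faces, normals, values = measure.marching_cubes_lewiner(sdf, 0.5)
# New (supported):
verts, faces, normals, values = measure.marching_cubes(sdf, 0.5)
```

The new function returns the same four arrays, so the call site only needs the name change.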
Thank you @charsyme. I think this is now ready. I will wait for the tests to pass and I will mark this ready for review.
This is now ready for review.
I think the failures are unrelated to this PR. I will try running the tests once more.
I think we are in a deadlock... This PR needs #207 for the 3D detection tests to pass (they now seem to fail very often), while #207 needs this PR to solve the grasping and simulation failures. The 2D tracking test failure is due to a download issue. I will try running the tests once again...
Given that the master branch is currently in an odd state anyway, I personally think it is ok to merge this PR and then work on the remaining errors in a different PR.
Should we perhaps disable fast-failure in the CI? Currently, as soon as one test fails, the entire job is canceled. This is positive as it frees the machines for other PRs, but at the same time, when a different test has issues (for instance, out of memory), you cannot even tell whether your change works, since other tests might fail before yours gets a chance to run.
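Assuming the CI is GitHub Actions with a test matrix, this would be a one-line change in the workflow; the job and matrix names below are illustrative, not the repo's actual ones:

```yaml
# Hypothetical workflow fragment: keep sibling matrix jobs running
# after one fails, so unrelated test results are still reported.
jobs:
  tests:
    strategy:
      fail-fast: false   # default is true: first failure cancels the whole matrix
      matrix:
        test-suite: [grasping, simulation, detection-3d, tracking-2d]
    runs-on: ubuntu-latest
```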