
I would run cargo doc --open to check out the new doc rendering and make sure it's all right.

vadixidav avatar Dec 28 '19 02:12 vadixidav

@mpizenberg Whenever you get a chance, please review this. I would like to make any changes you would like prior to merge. After this is merged, I would like the new version to be published so I can publish pnp. The pnp crate utilizes lambda-twist from this crate followed by Levenberg-Marquardt optimization. The levenberg-marquardt crate is currently published.

I am going to start working on the estimator for relative pose from 5 point correspondences using the most recent algorithm by Nister: https://github.com/rust-cv/nister-stewenius.

After doing that I will utilize this crate and that crate to create an indirect visual odometry application. I will follow that up shortly thereafter with vSLAM.

Just let me know if you want me to write something a bit differently here, leave any comments, or simplify/clarify some code. I will act quickly to get the PR ready for merging.

vadixidav avatar Jan 12 '20 21:01 vadixidav

I was writing something up for the Jacobian of the re-projection error today. I haven't finished but I'll send it to you when I do.

It took me a while, but I've finally taken some time to review this. I have many remarks about the changes. Overall, I'm not really in favor of this PR. I'll explain my thoughts below; let me know what you think.

About dependencies

Right now, dependencies and build times are the main issues with Rust. If possible, removing unnecessary dependencies should be a priority. arraymap, for example, isn't really necessary.

The dependency on the cv-core crate is also bugging me. People who just want to compute P3P should not have to pull in cv-core. I'm not sure I understand the advantages of bringing in KeyPointWorldMatch and the other types from cv-core. If we want to make it clear that the types are based on nalgebra, OK, but let's just express that constraint in the types.

Regarding the dependencies on nalgebra and sample-consensus, I also think those should be explicit in the crate's dependencies, not indirect dependencies of the cv-core crate. Also, I'm not a big fan of cv-core being an "umbrella" dependency holder re-exposing nalgebra and others, but that's your choice.

Regarding documentation

I don't understand why you are using the Google Chart API inside the documentation. The LaTeX rendering was already showing up in rustdoc. You just need an environment variable set, such as RUSTDOCFLAGS="--html-in-header katex-header.html" cargo doc --no-deps --open

The documentation of the compute_poses_nordberg function also no longer matches the code: there are no more world_3d_points and bearing_vectors parameters.

Regarding namespacing

Having a p3p method inside the p3p::nordberg namespace doesn't feel right. In my opinion, it's almost always preferable to call things in a qualified way: NordbergEstimator, for example, should be nordberg::estimator.

Regarding the actual code

You are keeping the P3P computations in rotation and translation matrices and transforming them at the end into quaternions for the nalgebra iso3 type. The author mentions somewhere, however (I don't remember where right now, but I've read it), that if we want quaternions for the rotation there is a better way to write this algorithm. I know that alternative version isn't in the OpenMVG rewrite, but maybe it is somewhere in the original code or paper. Otherwise we should probably expose the nalgebra iso3 type based on rotation and translation matrices instead; it should be the caller's decision to convert them.

I'm not a big fan of the unwrap in the estimate function. We can't guarantee that the iterator has enough elements; that is left to the good will of the caller. So the function should either improve the types so that we can make such guarantees, or potentially return an error.

It seems you're not using rustfmt on tests/consensus.rs? A few things moved around when I saved the file.

Adding CI is great, but I'm wondering why the quickcheck tests are ignored. If tests fail, we should look into them to figure out what went wrong. I'm not sure just keeping them out of CI is a good thing.

About sample consensus

The important inclusion in this PR is sample consensus. But as I see it, it is actually implemented on an additional struct, NordbergEstimator, and I don't see why it has to be inside this crate. In my opinion, sampling should not be a matter for P3P but for PNP. I'm also wondering why the sampling is specific to the Nordberg implementation: if I'm understanding correctly, it just needs a function from an iterator of keypoint pairs to a vec of poses. That should be independent of the specific Nordberg implementation.

In the end I think most of this PR should be moved to another place, probably PNP.

mpizenberg avatar Jan 13 '20 01:01 mpizenberg

Thank you for taking a look at the PR. I definitely see where you are coming from with some of these issues, and I think we can find understanding and compromise overall.

Right now, dependencies and build times are the main issues with Rust. If possible, removing unnecessary dependencies should be a priority. arraymap, for example, isn't really necessary.

Personally, I don't have build-time issues with Rust, since my typical testing, debugging, and linting cycle can reuse already-built packages. However, I do understand how frustrating it can be when dependencies balloon into the hundreds even though you aren't using the features that actually need the underlying crates.

Firstly, I do not think that arraymap contributes significantly to the issue, as it has no dependencies of its own. However, I have identified two major contributors to the added dependencies: nalgebra and derive_more.

nalgebra was already a dependency of the p3p crate and is a necessary evil. It takes on the order of 30 seconds to build, and with default features off it still contains most of its code, even if you don't use any of it. However, it is the de-facto linear algebra library in Rust. Because of this, we can expect that downstream consumers will already have nalgebra and that it likely won't make their build times worse.

derive_more, a dependency introduced by cv-core, adds dependencies related to proc macros/macros 2.0. Most of these dependencies would be shared by other crates that utilize proc macros. For instance, serde, which seems to be in almost every Rust application, heavily relies on proc macros for its derive feature, which is commonly used. Additionally, derive_more does not contribute to binary size in any way, since it builds purely to provide compile-time derives, which is important for embedded consumers. Nonetheless, it pulls in several dependencies only for the purpose of simplifying the code base. While I have traditionally erred on the side of making code as readable as humanly possible, I understand that this may be an overstep. I think a good compromise here could be to remove derive_more from cv-core and to cut down on the number of derives and trait impls.

The dependency on the cv-core crate is also bugging me. People who just want to compute P3P should not have to pull in cv-core. I'm not sure I understand the advantages of bringing in KeyPointWorldMatch and the other types from cv-core. If we want to make it clear that the types are based on nalgebra, OK, but let's just express that constraint in the types.

My goal with cv-core is to have a very lightweight crate that just describes the abstractions and primitives that are fundamental to computer vision. cv-core pulls in the sample-consensus crate, which is #![no_std], has no code in it (only traits), and builds instantly. It does this because cv-core contains different kinds of models that have residual functions. For instance, the essential matrix is a model I am in the process of adding right now. This is important to have in a centralized location so that if one computer vision crate estimates an essential matrix, it now automatically plugs into the sample-consensus crate and allows you to use any sample consensus algorithm to find a good fit to the underlying data.

Note that the levenberg-marquardt crate is not included in cv-core and neither will p3p or pnp. These crates simply depend on cv-core to share types with each other, which lets them be used seamlessly together. At least, that is my vision for the future. In essence, cv-core only contains what is needed to make all the different algorithms talk to each other without glue code and using the same types. It also uses types to convey information, such as the kind of pose something is (relative between cameras, world to camera, camera to world, etc).

sample-consensus provides abstractions that are relevant outside of just computer vision, but by including it in cv-core, I am trying to make it the primary means of interaction between models with residuals, estimators that produce 0, 1, or more solutions from some data, and consensus algorithms that use estimators to fit data and discriminate between inliers and outliers. sample-consensus is based heavily on the C++ API in TheiaSfM. From experience, I have found that this is particularly useful in allowing me to swap out different algorithms. TheiaSfM has several instances of this kind of thing throughout its codebase. Even OpenCV has things like the FeatureDetector, which enables the partial abstraction over feature detection and description algorithms. I would like to get this abstraction capability using Rust's powerful type system without the 30-minute build time of OpenCV.
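As a rough illustration of the abstraction being described, here is a simplified sketch of the shape of such an interface. This is not the actual sample-consensus API (whose traits use associated types and model iterators); the trait and function names below are invented for illustration.

```rust
// Simplified sketch, NOT the real sample-consensus API:
// models expose residuals, estimators produce candidate models from
// minimal samples, and consensus code is written once over both.

/// A model that can score how well one datum fits it.
trait Model<Data> {
    fn residual(&self, data: &Data) -> f64;
}

/// An estimator producing zero or more candidate models from a minimal sample.
trait Estimator<Data> {
    type ModelType;
    const MIN_SAMPLES: usize;
    fn estimate(&self, data: &[Data]) -> Vec<Self::ModelType>;
}

/// Any consensus algorithm (RANSAC and friends) can then count inliers
/// generically, for every estimator/model pair that speaks these traits.
fn count_inliers<D, M: Model<D>>(model: &M, data: &[D], threshold: f64) -> usize {
    data.iter().filter(|d| model.residual(d) < threshold).count()
}

/// Toy model for the demo below: a 2D line y = m*x + b, residual = vertical distance.
struct Line {
    m: f64,
    b: f64,
}

impl Model<(f64, f64)> for Line {
    fn residual(&self, d: &(f64, f64)) -> f64 {
        (d.1 - (self.m * d.0 + self.b)).abs()
    }
}

fn main() {
    let line = Line { m: 1.0, b: 0.0 };
    // Two points lie on the line, one is an outlier.
    let data = [(0.0, 0.0), (1.0, 1.0), (2.0, 5.0)];
    assert_eq!(count_inliers(&line, &data, 0.5), 2);
}
```

The point of the sketch is the last function: consensus logic written once against the traits works unchanged for an essential matrix, a P3P pose, or any other model with a residual.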

Just to make a point, this is the entire definition of pnp. By integrating cv-core into this crate, I have made the entire definition of pnp 55 lines, including the signature. It also fully abstracts over all sample consensus algorithms and estimators (not just lambda twist), provided that they speak the same language: KeyPointWorldMatch and WorldPose. I could publish it now, but the unit tests depend on p3p because it is currently the only estimator of its kind. While it could just use nalgebra types, I believe that would make the code less readable, more error-prone, and less expressive.

I am absolutely willing to compromise on this if need be. I can always create a different crate that integrates p3p with cv-core. I am hoping that you understand my reasoning though. However, it is important to note that in OpenCV things are mostly designed to work together, and I would like to do the same thing. Additionally, crates like rand_core exist in Rust which almost all random number generators depend on (or optionally depend on). There is also a rand crate that collects all the RNGs into one place. I plan to do a similar thing with cv-core. One compromise could also be to just make cv-core an optional dependency that exposes an Estimator struct.

Regarding the dependencies on nalgebra and sample-consensus, I also think those should be explicit in the crate's dependencies, not indirect dependencies of the cv-core crate. Also, I'm not a big fan of cv-core being an "umbrella" dependency holder re-exposing nalgebra and others, but that's your choice.

Most of what I said above applies to this as well. I will also add that cv-core only re-exports nalgebra and sample-consensus to ensure that downstream crates utilize the same versions of those crates. For instance, if you pull in version 0.6 of nalgebra but cv-core uses 0.7, you can still use 0.6, but when you talk to cv-core you must use cv_core::nalgebra or ensure that your version of nalgebra matches. If you dislike this, I don't have any attachment to it. I have seen other crates re-export the crates that are part of their public API, but it is not universal. I would be more than happy to remove it; I just want you to understand my reasoning.

I don't understand why you are using the Google Chart API inside the documentation. The LaTeX rendering was already showing up in rustdoc. You just need an environment variable set...

Let me just make sure that works in https://docs.rs/ first. If it does, then I will fix that. My bad, I didn't even see that environment variable.

The documentation of the compute_poses_nordberg function also no longer matches the code: there are no more world_3d_points and bearing_vectors parameters.

I will fix that as soon as I get back to this PR.

...NordbergEstimator for example should in my opinion be nordberg::estimator.

I totally agree. It would be nordberg::Estimator, since it is a struct, but that naming does make more sense.
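A hypothetical layout for that naming could look like the following (the module contents are placeholders, not the real estimator code; the MIN_POINTS constant is invented here to give the sketch something concrete):

```rust
// Hypothetical module layout: qualified names instead of prefixed ones,
// so callers write nordberg::Estimator rather than NordbergEstimator.
pub mod nordberg {
    /// P3P estimator based on Nordberg's lambda-twist method
    /// (placeholder contents; the real struct is not shown here).
    pub struct Estimator;

    impl Estimator {
        /// P3P needs exactly three point correspondences.
        pub const MIN_POINTS: usize = 3;
    }
}

fn main() {
    // The qualified name reads naturally at the call site.
    assert_eq!(nordberg::Estimator::MIN_POINTS, 3);
}
```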

You are keeping the P3P computations in rotation and translation matrices and transforming them at the end into quaternions for the nalgebra iso3 type. The author mentions somewhere, however (I don't remember where right now, but I've read it), that if we want quaternions for the rotation there is a better way to write this algorithm. I know that alternative version isn't in the OpenMVG rewrite, but maybe it is somewhere in the original code or paper. Otherwise we should probably expose the nalgebra iso3 type based on rotation and translation matrices instead; it should be the caller's decision to convert them.

I am starting to question my decision to standardize on quaternions for the isometry's rotation, but I am not totally sure yet. The main reason I did this was because Levenberg-Marquardt seems to work well with quaternions since they don't have degenerate configurations. However, computing residuals and performing point projection is more expensive with quaternions. Additionally, now that I am writing the code to extract pose from the essential matrix, I am finding that the natural representation in that scenario seems to be the 3x3 matrix. It seems like it is more useful to temporarily convert the rotation matrix to a quaternion and then back again, but I still don't think the rust-cv ecosystem is mature enough for me to say which is more appropriate based on experience.

If you want to weigh in on that, let me know. If we can expose separate APIs to estimate the quaternion form and the matrix form of the rotation, I think that would be excellent. That should be done in a separate PR (with an issue ticket), though, to avoid scope creep.
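To make the representation question concrete, here is a self-contained sketch in plain Rust (rather than nalgebra, so the arithmetic is visible): the same rotation can be applied in either matrix or quaternion form, and converting between the forms is the extra step whose cost and precision are being discussed.

```rust
// Apply a rotation stored as a 3x3 matrix to a vector.
fn mat_rotate(m: &[[f64; 3]; 3], v: [f64; 3]) -> [f64; 3] {
    let mut out = [0.0; 3];
    for i in 0..3 {
        for j in 0..3 {
            out[i] += m[i][j] * v[j];
        }
    }
    out
}

// Apply a rotation stored as a unit quaternion q = (w, x, y, z) using the
// standard identity v' = v + w*t + u x t, where u = (x, y, z), t = 2(u x v).
fn quat_rotate(q: [f64; 4], v: [f64; 3]) -> [f64; 3] {
    let (w, x, y, z) = (q[0], q[1], q[2], q[3]);
    let t = [
        2.0 * (y * v[2] - z * v[1]),
        2.0 * (z * v[0] - x * v[2]),
        2.0 * (x * v[1] - y * v[0]),
    ];
    [
        v[0] + w * t[0] + (y * t[2] - z * t[1]),
        v[1] + w * t[1] + (z * t[0] - x * t[2]),
        v[2] + w * t[2] + (x * t[1] - y * t[0]),
    ]
}

fn main() {
    // The same 90-degree rotation about z, in both representations.
    let theta = std::f64::consts::FRAC_PI_2;
    let m = [
        [theta.cos(), -theta.sin(), 0.0],
        [theta.sin(), theta.cos(), 0.0],
        [0.0, 0.0, 1.0],
    ];
    let q = [(theta / 2.0).cos(), 0.0, 0.0, (theta / 2.0).sin()];
    let v = [1.0, 0.0, 0.0];
    let a = mat_rotate(&m, v);
    let b = quat_rotate(q, v);
    for i in 0..3 {
        assert!((a[i] - b[i]).abs() < 1e-12);
    }
    // Both rotate (1, 0, 0) to (0, 1, 0).
    assert!((a[1] - 1.0).abs() < 1e-12);
}
```

The quaternion application costs more floating-point operations per point than the matrix product, which is why an estimator that naturally produces a rotation matrix pays twice if the caller converts back.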

I'm not a big fan of the unwrap in the estimate function. We can't guarantee that the iterator has enough elements; that is left to the good will of the caller. So the function should either improve the types so that we can make such guarantees, or potentially return an error.

Typically, if the programmer made a mistake, that should panic. In this case, the programmer should never run the estimator with fewer than three points. We could return no solutions, but then the code would run and not work, with no error at all. I think it is reasonable to panic if fewer than the minimum number of data points are provided, since the developer has a bug and should be notified. Using unwrap() was probably not good, though. I will change all the unwrap() calls to expect() with actual error messages as soon as I return to this PR.

It seems you're not using rustfmt on tests/consensus.rs? A few things moved around when I saved the file.

Whoops! Sometimes VS Code and RLS fail to format some files for me. Thanks for letting me know; I will be more diligent in the future to ensure I do not commit unformatted code. We could also add a CI check to ensure formatting compliance.

Adding CI is great, but I'm wondering why the quickcheck tests are ignored. If tests fail, we should look into them to figure out what went wrong. I'm not sure just keeping them out of CI is a good thing.

I see it more as acknowledging that the test does not prevent all degenerate estimation cases. The test does execute successfully sometimes, but quickcheck uses actual random data and occasionally fails. I don't want something that fails intermittently to be in CI, because that would mask other failures (for instance, if the 4-point test of the estimator found the wrong solution out of the 4 possible).
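For reference, the ignored-test setup being discussed looks roughly like this. The function name and property here are hypothetical stand-ins, not the crate's actual tests; the relevant part is the #[ignore] attribute, which skips the test in a plain cargo test (and thus in CI) while still allowing it to be run deliberately with cargo test -- --ignored.

```rust
/// Stand-in for the real estimation property (name and body are
/// illustrative only): finite inputs pass, degenerate ones fail.
fn pose_roundtrip_ok(x: f64) -> bool {
    x.is_finite()
}

#[test]
#[ignore] // random data occasionally hits degenerate P3P configurations
fn random_pose_roundtrip() {
    // In the real test, quickcheck would generate random correspondences here.
    assert!(pose_roundtrip_ok(1.0));
}

fn main() {
    assert!(pose_roundtrip_ok(0.5));
    assert!(!pose_roundtrip_ok(f64::NAN));
}
```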

The important inclusion in this PR is sample consensus. But as I see it, it is actually implemented on an additional struct, NordbergEstimator, and I don't see why it has to be inside this crate. In my opinion, sampling should not be a matter for P3P but for PNP. I'm also wondering why the sampling is specific to the Nordberg implementation: if I'm understanding correctly, it just needs a function from an iterator of keypoint pairs to a vec of poses. That should be independent of the specific Nordberg implementation. In the end I think most of this PR should be moved to another place, probably PNP.

This crate itself does not provide any sampling-related functionality, but it does need to provide an implementation of the Estimator trait from sample-consensus to allow it to seamlessly integrate with sampling algorithms. You probably already understand the reasoning from my comments above. My feeling is that this could be separated into a different crate, but the pnp crate abstracts over all estimators of absolute pose, so I don't think any specific absolute pose estimation algorithm should go into it (except in unit tests).

Phew, that took a while to write!

Alright, thank you for reading through all of that. I hope you understand where this PR is coming from now. I will work to make compromises here so that the Rust computer vision ecosystem is as strong as it can be. I think my mind has been changed on some issues, and I will work to bring the rest of the rust-cv crates in line where appropriate. Please let me know if you change your stance on anything, need additional clarification, make a final decision, or want me to fix something else in the PR. I will be available.

vadixidav avatar Jan 13 '20 10:01 vadixidav

I have identified two major contributors to added dependencies: nalgebra and derive_more

I agree, nalgebra is necessary if we are to use it consistently across the cv org. You know better than me regarding derive_more. I just wanted to emphasize that dependencies are expensive, and that more than incremental compilation should be taken into account.

My goal with cv-core is to have a very lightweight crate that just describes the abstractions and primitives that are fundamental to computer vision.

OK, I understand your vision of a coherent set of libraries. To be honest, I've never used OpenMVG or Theia to implement an indirect VO or SLAM pipeline. I'm very new to all this; visual-odometry-rs (which I should rename to show it's direct-only) was really my first encounter with this world of CV stuff, so you probably have a better overview of what's useful for cv-core.

I believe the main difference between our approaches is that you want a well-integrated set of crates, prioritizing homogeneity, while I'm prioritizing modularity and independence. For the near future, and since you're currently the main driver of the cv org, I understand your point of view. It's just that it prevents other organizations from using P3P for their own purposes without relying on cv-core. And I have a tendency to prefer diversity, or at least not making it harder.

I will also add that cv-core only re-exports nalgebra and sample-consensus to ensure that downstream crates utilize the same versions of those crates.

Ok, that's a good argument. Version issues are very annoying.

Let me just make sure that works in https://docs.rs/ first. If it does, then I will fix that. My bad, I didn't even see that environment variable.

Yep, that's the reason for this line in Cargo.toml: rustdoc-args = [ "--html-in-header", "katex-header.html" ].
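For docs.rs specifically, that line lives under the package metadata table of Cargo.toml (this is the standard docs.rs mechanism for passing extra rustdoc flags; the header file name follows the discussion above):

```toml
# In Cargo.toml: tell docs.rs to inject the KaTeX header when building docs.
[package.metadata.docs.rs]
rustdoc-args = ["--html-in-header", "katex-header.html"]
```

Local builds still need the equivalent RUSTDOCFLAGS environment variable, since the metadata table is only read by docs.rs.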

I am starting to question my decision to standardize on quaternions for the isometry's rotation

Well, I don't know enough about the rest of the cv org to argue here. My point was just that the computation returns rotation and translation matrices, so converting them to quaternions is an additional step which degrades performance and precision if the caller wants matrices instead. But this is tied to the coherent vision, and I understand where the desire for a coherent interface leveraging cv-core types comes from.

Typically if the programmer made a mistake, that should panic.

Let me know if I'm mistaken, but a wrong number of available samples might arise from dynamic generation of samples in insufficient numbers, so the question isn't really whether the programmer made a mistake or not. This code will panic, eventually. So the real question is: should we force the caller to handle possible issues through error handling, like a Result type, or do we assume the caller will always check that there is a sufficient number of points before calling? In my programming experience, they won't. So we can decide to keep the panic behavior if it has performance or API advantages, but then the name of the function should really say so, like estimate_unsafe or similar ("unsafe" isn't really great since it has other meanings in Rust).
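One possible shape for a non-panicking estimate, sketched with stand-in types: the f64 elements stand in for the real correspondence type and the Ok payload stands in for candidate poses, and the error type is invented for this sketch, so none of the names below come from the actual crate.

```rust
/// Hypothetical error type for the estimator.
#[derive(Debug, PartialEq)]
enum EstimateError {
    /// Fewer correspondences than the three that P3P requires.
    NotEnoughSamples { got: usize, needed: usize },
}

/// Sketch of a fallible estimate: instead of calling unwrap() while
/// draining the iterator, collect the minimal sample up front and return
/// an error the caller is forced to handle when it comes up short.
fn estimate<I: IntoIterator<Item = f64>>(samples: I) -> Result<Vec<f64>, EstimateError> {
    const NEEDED: usize = 3; // P3P minimal sample size
    let sample: Vec<f64> = samples.into_iter().take(NEEDED).collect();
    if sample.len() < NEEDED {
        return Err(EstimateError::NotEnoughSamples {
            got: sample.len(),
            needed: NEEDED,
        });
    }
    // ... the real P3P computation would go here, returning candidate poses.
    Ok(sample)
}

fn main() {
    assert!(estimate(vec![1.0, 2.0, 3.0]).is_ok());
    assert_eq!(
        estimate(vec![1.0]),
        Err(EstimateError::NotEnoughSamples { got: 1, needed: 3 })
    );
}
```

This keeps the panic-free behavior without renaming the function, at the cost of making every caller write error handling even when the sample count is statically known to be sufficient.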

quickcheck uses actual random data

Yes, that's actually the purpose. We should take the time to understand failures and reflect them in the tests. But I can understand this may be annoying for now. If we keep it deactivated, an issue should at least be left open about it.

it does need to provide an implementation of the Estimator trait from sample-consensus to allow it to seamlessly integrate with sampling algorithms.

Right. I'm still not sure this has to be part of P3P. If it doesn't fit PNP, maybe a p3p-estimator crate is better suited; then I think both our visions could be satisfied. We would need to change P3P to use nalgebra directly in its types so that it interoperates cleanly with a p3p-estimator crate that actually uses cv-core types. That way, P3P remains fairly independent of cv-core. What do you think? In the end it's your decision, since I don't want to prevent you from making progress.

mpizenberg avatar Jan 13 '20 13:01 mpizenberg