
About the performance comparison with other graph optimization frameworks.

Open wystephen opened this issue 2 years ago • 4 comments

Hi, is there any performance comparison between this library and others (such as Ceres Solver, g2o, GTSAM)? And since jaxfg is based on JAX, can jaxfg use CUDA to significantly speed up optimization?

wystephen avatar Mar 04 '22 08:03 wystephen

Sorry for the delay in getting back here!

My general experience is that jaxfg will usually be slower than optimizers written in C++, particularly if you factor in JIT compilation overhead. You should see CUDA speedups if you're using the conjugate gradient solver, particularly when you want to solve many MAP inference problems in parallel, but in practice CPU-based Cholesky factorizations (e.g. CHOLMOD) will be noticeably faster than iterative CG solves for smaller/non-parallel applications.
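To make the trade-off concrete, here's a rough sketch that doesn't use jaxfg's API at all; it just contrasts an iterative CG solve of the Gauss-Newton normal equations (pure JAX, so it runs on GPU if one is available and can be jitted/vmapped over many problems) with a dense CPU Cholesky solve via SciPy, standing in for a sparse CHOLMOD factorization. The problem setup here is made up purely for illustration:

```python
import jax
import jax.numpy as jnp
import numpy as np
import scipy.linalg

# Toy well-conditioned least-squares problem: minimize ||A x - b||^2.
n = 2000
A = jax.random.normal(jax.random.PRNGKey(0), (n, n)) * 0.01 + jnp.eye(n)
b = jax.random.normal(jax.random.PRNGKey(1), (n,))
AtA = A.T @ A   # normal-equations matrix (SPD)
Atb = A.T @ b

# Iterative conjugate gradient: stays in JAX, follows the default device
# (GPU when present), and can be wrapped in jit/vmap for batched solves.
@jax.jit
def solve_cg(AtA, Atb):
    x, _ = jax.scipy.sparse.linalg.cg(AtA, Atb, tol=1e-8)
    return x

x_cg = solve_cg(AtA, Atb)

# Direct Cholesky on CPU via SciPy, as a stand-in for CHOLMOD on a sparse system.
c, low = scipy.linalg.cho_factor(np.asarray(AtA))
x_chol = scipy.linalg.cho_solve((c, low), np.asarray(Atb))

print("solution difference:", np.linalg.norm(np.asarray(x_cg) - x_chol))
```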

brentyi avatar Mar 22 '22 22:03 brentyi

I tested the performance on my data, using jaxfg to solve a problem with 760x3 parameters (about 760x3x3 dimensions) and 760x4 factors. Each iteration costs about 0.1 seconds, but jitting the graph costs 67 seconds. So this library may not be suitable for solving large-scale problems, since the jit time is too long. If this result looks abnormal, please let me know.
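For reference, the compile time and the per-iteration time can be separated with the standard JAX timing pattern: the first call to a jitted function pays for tracing plus XLA compilation, and later calls run the compiled code only. The update function below is just a placeholder, not the actual solver step:

```python
import time
import jax
import jax.numpy as jnp

@jax.jit
def solver_step(x):
    # Placeholder for one optimizer iteration; any jitted update works here.
    return x - 0.1 * jnp.sin(x)

x = jnp.ones(760 * 3)

t0 = time.perf_counter()
x = solver_step(x).block_until_ready()            # includes tracing + compilation
print("first call (compile + run):", time.perf_counter() - t0)

t0 = time.perf_counter()
for _ in range(10):
    x = solver_step(x).block_until_ready()        # compiled code only
print("per iteration:", (time.perf_counter() - t0) / 10)
```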

wystephen avatar Mar 25 '22 14:03 wystephen

Hm, that does sound much slower than comparable experiments we've run. Would appreciate the opportunity to take a look if you're open to sharing your code.

brentyi avatar Mar 25 '22 19:03 brentyi

Hi, I can't share the code right now, but I think I found the reason the jit time is so long: each factor runs a pre-integration-like process when computing its residuals.
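That would explain it: a Python loop inside the residual function is unrolled at trace time, so a long integration window produces a very large XLA graph and a correspondingly long compile. Here's a hedged toy sketch (the dynamics and step count are made up) showing the unrolled version versus jax.lax.scan, which traces the loop body once regardless of the window length:

```python
import jax
import jax.numpy as jnp

NUM_STEPS = 200  # hypothetical pre-integration window length

def residual_unrolled(state, measurements):
    # Python loop -> unrolled into NUM_STEPS copies of the body when traced.
    for m in measurements:
        state = state + 0.01 * m
    return state

def residual_scanned(state, measurements):
    # lax.scan traces the body once, so graph size is constant in NUM_STEPS.
    def step(carry, m):
        return carry + 0.01 * m, None
    state, _ = jax.lax.scan(step, state, measurements)
    return state

measurements = jnp.ones((NUM_STEPS, 3))
state0 = jnp.zeros(3)

print(jax.jit(residual_unrolled)(state0, measurements))
print(jax.jit(residual_scanned)(state0, measurements))
```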

wystephen avatar Apr 07 '22 01:04 wystephen