
In the linearize function of plane_icp_factor.hpp, the Jacobian matrix should be a 1x6 matrix

Open zhao-zhibo opened this issue 10 months ago • 4 comments

Thank you very much for your open source work.

Describe the bug

I believe the correct way to write the Jacobian is as follows:

    Eigen::Matrix<double, 1, 6> J = Eigen::Matrix<double, 1, 6>::Zero();
    const Eigen::Vector3d p_source = traits::point(source, source_index).template head<3>();
    J.block<1, 3>(0, 0) = -target_normal.template head<3>().transpose() * T.linear() * skew(p_source);
    J.block<1, 3>(0, 3) = -target_normal.template head<3>().transpose() * T.linear();

The current implementation instead constructs it as a 4x6 matrix:

    Eigen::Matrix<double, 4, 6> J = Eigen::Matrix<double, 4, 6>::Zero();
    J.block<3, 3>(0, 0) = target_normal.template head<3>().asDiagonal() * T.linear() * skew(traits::point(source, source_index).template head<3>());
    J.block<3, 3>(0, 3) = target_normal.template head<3>().asDiagonal() * (-T.linear());
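The two constructions are closely related: diag(n) * M has rows n_i * (row i of M), so summing those rows recovers n^T * M. In other words, the 1x6 row is exactly the column-wise sum of the non-zero rows of the 4x6 matrix. A minimal sketch with made-up numbers (plain Python; J_pt is a hypothetical 3x6 Jacobian of the transformed point with respect to the pose):

```python
n = [0.6, 0.0, 0.8]                       # made-up plane normal
J_pt = [[1.0, 0.0, 2.0, -1.0, 0.5, 0.0],  # made-up 3x6 point Jacobian
        [0.0, 1.0, 0.0,  2.0, 0.0, 1.0],
        [3.0, 0.0, 1.0,  0.0, 1.0, 2.0]]

# 1x6 form: J1 = n^T * J_pt  (scalar residual n . (T*p - q))
J1 = [sum(n[i] * J_pt[i][c] for i in range(3)) for c in range(6)]

# 4x6 form (ignoring the zero-padded row): row i of J4 = n_i * (row i of J_pt)
J4 = [[n[i] * J_pt[i][c] for c in range(6)] for i in range(3)]

# Summing the rows of J4 column-wise reproduces J1
col_sums = [sum(J4[i][c] for i in range(3)) for c in range(6)]
print(col_sums == J1)  # True
```

This identity explains why individual rows of the two printed Jacobians can look similar when the normal is nearly axis-aligned, yet the resulting H = J^T J matrices still differ.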


zhao-zhibo avatar Apr 16 '25 03:04 zhao-zhibo

Thank you very much for your excellent work. I also printed the J and H matrices. The third row of the 4x6 J matrix sometimes matches the printed 1x6 J matrix, and sometimes no row of the two J matrices matches at all; the two H matrices also sometimes differ greatly. Why does the algorithm use the 4x6 J matrix for the calculation? The code is as follows:

    Eigen::Matrix<double, 4, 6> J = Eigen::Matrix<double, 4, 6>::Zero();
    J.block<3, 3>(0, 0) = target_normal.template head<3>().asDiagonal() * T.linear() * skew(traits::point(source, source_index).template head<3>());
    J.block<3, 3>(0, 3) = target_normal.template head<3>().asDiagonal() * (-T.linear());

    // Using a 1x6 row vector instead
    Eigen::Matrix<double, 1, 6> JMat = Eigen::Matrix<double, 1, 6>::Zero();
    JMat.block<1, 3>(0, 0) = target_normal.template head<3>().transpose() * T.linear() * skew(traits::point(source, source_index).template head<3>());
    JMat.block<1, 3>(0, 3) = target_normal.template head<3>().transpose() * (-T.linear());

    *H = J.transpose() * J;
    Eigen::Matrix<double, 6, 6> HMat = JMat.transpose() * JMat;
    *b = J.transpose() * err;
    *e = 0.5 * err.squaredNorm();

    // Output format: precision 4, ", " between elements, newline between rows, no prefix/suffix
    Eigen::IOFormat CleanFmt(4, 0, ", ", "\n", "", "");    
    // Critical section to keep the log output serialized across OpenMP threads
    #pragma omp critical
    {
      std::cout << "-----------------------------------" << std::endl;
      std::cout << "J (4x6):\n" << J.format(CleanFmt) << std::endl;
      std::cout << "JMat (1x6):\n" << JMat.format(CleanFmt) << std::endl;
      std::cout << "H (J^T * J):\n" << (*H).format(CleanFmt) << std::endl;
      std::cout << "HMat (JMat^T * JMat):\n" << HMat.format(CleanFmt) << std::endl;
    }

The printed log looks like this:

 -----------------------------------
J (4x6):
  -1.112,   0.5172,    8.918,  -0.3844,   0.4432, -0.07364
   1.162,  -0.6312,   -3.766,   0.5868,   0.5245,   0.0931
   2.023,   -1.213,   0.3694, -0.02546, 0.002366,   0.1472
       0,        0,        0,        0,        0,        0
JMat (1x6):
 2.072, -1.327,  5.521,  0.177,   0.97, 0.1666
H (J^T * J):
  6.677,  -3.761,  -13.55,   1.058,  0.1212,  0.4877
 -3.761,   2.136,   6.542, -0.5384, -0.1047, -0.2753
 -13.55,   6.542,   93.85,  -5.648,   1.978,  -0.953
  1.058, -0.5384,  -5.648,  0.4928,  0.1373, 0.07919
 0.1212, -0.1047,   1.978,  0.1373,  0.4715, 0.01654
 0.4877, -0.2753,  -0.953, 0.07919, 0.01654, 0.03575
HMat (JMat^T * JMat):
  4.294,  -2.749,   11.44,  0.3667,    2.01,  0.3453
 -2.749,    1.76,  -7.324, -0.2347,  -1.287, -0.2211
  11.44,  -7.324,   30.48,   0.977,   5.356,  0.9201
 0.3667, -0.2347,   0.977, 0.03131,  0.1716, 0.02949
   2.01,  -1.287,   5.356,  0.1716,  0.9409,  0.1616
 0.3453, -0.2211,  0.9201, 0.02949,  0.1616, 0.02777
-----------------------------------
J (4x6):
   5.071,   0.5132,   -23.38,  -0.3761,   0.4337, -0.07206
   5.049,   0.8168,   -36.43,  -0.5933,  -0.5302, -0.09413
  -9.265,  0.09393,   -1.605, -0.02598, 0.002414,   0.1501
       0,        0,        0,        0,        0,        0
JMat (1x6):
  0.8561,    1.424,   -61.42,  -0.9954, -0.09416, -0.01606
H (J^T * J):
     137,    5.857,   -287.7,   -4.663,  -0.5005,   -2.232
   5.857,   0.9394,   -41.91,  -0.6801,  -0.2103, -0.09976
  -287.7,   -41.91,     1876,    30.45,     9.17,    4.873
  -4.663,  -0.6801,    30.45,   0.4942,   0.1514,  0.07905
 -0.5005,  -0.2103,     9.17,   0.1514,   0.4692,  0.01903
  -2.232, -0.09976,    4.873,  0.07905,  0.01903,  0.03659
HMat (JMat^T * JMat):
   0.7329,     1.219,    -52.58,   -0.8522,  -0.08061,  -0.01375
    1.219,     2.028,    -87.45,    -1.417,   -0.1341,  -0.02287
   -52.58,    -87.45,      3772,     61.14,     5.783,    0.9862
  -0.8522,    -1.417,     61.14,    0.9909,   0.09372,   0.01598
 -0.08061,   -0.1341,     5.783,   0.09372,  0.008865,  0.001512
 -0.01375,  -0.02287,    0.9862,   0.01598,  0.001512, 0.0002579
-----------------------------------

zhao-zhibo avatar Apr 17 '25 03:04 zhao-zhibo

I think your 1x6 Jacobian corresponds to the error function (err.x + err.y + err.z)^2 while the 4x6 Jacobian corresponds to err.x^2 + err.y^2 + err.z^2 (the last row is zero-padding for SIMD optimization). The latter has less non-linearity and is suitable for Gauss-Newton.
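To illustrate the distinction numerically (made-up normal and residual values; e_scalar is the objective implied by the 1x6 form, e_vector the one implied by the 4x6 form):

```python
# Hypothetical values: plane normal n and point-to-point residual d = T*p - q
n = [0.6, 0.0, 0.8]
d = [0.1, -0.2, 0.3]

# 1x6 form: square of the summed components of err = diag(n)*d, i.e. (n . d)^2
e_scalar = sum(ni * di for ni, di in zip(n, d)) ** 2

# 4x6 form: sum of squared components, err.x^2 + err.y^2 + err.z^2
e_vector = sum((ni * di) ** 2 for ni, di in zip(n, d))

print(e_scalar, e_vector)  # generally two different objectives
```

Both vanish exactly when the point lies on the plane, but away from the optimum they (and their Jacobians and Hessians) differ, which matches the discrepancies seen in the printed matrices above.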

koide3 avatar Apr 18 '25 00:04 koide3

Thank you for your reply. Is there any material I can read about the derivation of the 4x6 Jacobian matrix you mentioned? @koide3

zhao-zhibo avatar Apr 19 '25 03:04 zhao-zhibo

I recommend taking a look at Eqs. (2) - (16) in the following tutorial, which nicely explains the derivation of Gauss-Newton: http://www2.informatik.uni-freiburg.de/~stachnis/pdf/grisetti10titsmag.pdf
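For intuition, the update that such factors feed into (accumulate H = J^T J and b = J^T e, then solve H * delta = -b) can be sketched on a toy 1-D residual; this is illustrative only, not small_gicp code:

```python
# Gauss-Newton on the toy residual r(x) = x^2 - 2 (minimum at x = sqrt(2))
x = 1.0
for _ in range(10):
    r = x * x - 2.0   # residual e
    J = 2.0 * x       # Jacobian dr/dx
    H = J * J         # Gauss-Newton Hessian approximation J^T J
    b = J * r         # gradient term J^T e
    x += -b / H       # solve H * delta = -b and apply the update

print(round(x, 6))  # converges to sqrt(2) ~ 1.414214
```

The 4x6/1x6 question above boils down to which residual vector e (and hence which J) is plugged into this same scheme.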

koide3 avatar Apr 22 '25 07:04 koide3