
A question about the influence of the left Jacobian


I am trying to figure out how the left Jacobian influences backpropagation, and I think I am missing something.

Let us assume I have $H \in SE(3)$ and I aim to compute the derivative of the function $f(H)=\frac{1}{2}||\textrm{Log}(H)||^2$ w.r.t. $H$.

Given $\Phi = \textrm{Log}(H)$, from your paper, I deduce the gradient is $\frac{D f(H)}{D H} = \frac{\partial (\frac{1}{2}||\Phi||^2)}{\partial \Phi} \frac{D (\textrm{Log}(H))}{DH} = \Phi J_l^{-1}$.
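
For reference, the closed form of the inverse left Jacobian I am assuming for a pure rotation $\boldsymbol{\phi} \in \mathfrak{so}(3)$ (e.g. Barfoot, *State Estimation for Robotics*) is

$$
J_l^{-1}(\boldsymbol{\phi}) = I - \tfrac{1}{2}\boldsymbol{\phi}^{\wedge} + \left(\frac{1}{\theta^{2}} - \frac{1+\cos\theta}{2\theta\sin\theta}\right)(\boldsymbol{\phi}^{\wedge})^{2}, \qquad \theta = ||\boldsymbol{\phi}||,
$$

and the $SE(3)$ case uses the corresponding $6 \times 6$ block form, so $J_l^{-1}$ is not the identity in general.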

To evaluate the influence of the left Jacobian, I also compute the gradient of $f(\Phi) = \frac{1}{2}||\Phi||^2$ directly with respect to $\Phi$, which is simply $\frac{\partial (\frac{1}{2}||\Phi||^2)}{\partial \Phi} = \Phi$.

Nevertheless, in my computation both gradients come out the same, up to an error on the order of $10^{-4}$.

Why are the two gradients the same? Shouldn't $J_l^{-1}$ have an influence?
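
As an independent check, here is a minimal sketch (on $SO(3)$ for simplicity; the same idea applies to $SE(3)$) that probes $\frac{D f(H)}{D H}$ with finite differences on the group and compares it against $\Phi$. It assumes the gradient is defined w.r.t. a left perturbation $\textrm{Exp}(\epsilon) \circ H$ (swap the multiplication order if the convention is a right perturbation), and the helper `f` is just my shorthand for the loss:

```python
import torch
from lietorch import SO3

def f(H):
    # f(H) = 0.5 * ||Log(H)||^2 per batch element
    return 0.5 * H.log().pow(2).sum(dim=-1)

phi = torch.randn(1, 3, device='cuda') * 0.5   # one SO(3) tangent vector
H = SO3.exp(phi)

# Directional derivatives along the canonical axes, using a
# left perturbation H <- Exp(eps * e_i) * H (assumed convention).
eps = 1e-4
grad_fd = []
for i in range(3):
    e = torch.zeros(1, 3, device='cuda')
    e[0, i] = eps
    grad_fd.append(((f(SO3.exp(e) * H) - f(H)) / eps).item())

print('finite-difference grad:', grad_fd)
print('Phi:                   ', phi[0].tolist())
```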

Here is my test code to reproduce the problem:

```python
import torch
import copy
from lietorch import SO3, SE3

# Sample random SE(3) tangent vectors, round-trip them through exp/log, and detach
phi = torch.randn(1000, 6, device='cuda', requires_grad=True)*10.
R = SE3.exp(phi)
phi = copy.deepcopy(R.log().detach())
phi = phi.requires_grad_(True)
phi2 = copy.deepcopy(phi)

## Loss for SE(3): 0.5*||Log(R_target^-1 * R)||^2 ##
R = SE3.exp(phi)
phi_target = torch.zeros(1, 6, device='cuda', requires_grad=False)
R_target = SE3.exp(phi_target)
dR = R_target[:, None].inv() * R[None, :]
ang = 0.5*(dR.log().pow(2).sum(dim=-1))
loss = ang.sum()
loss.backward()

## Loss for Euclidean: 0.5*||phi2||^2 ##
ang2 = 0.5*(phi2.pow(2).sum(dim=-1))
loss2 = ang2.sum()
loss2.backward()

####### Metric #########
print('Diff Phi vs. Phi2:           {}'.format((phi - phi2).max()))
print('Diff grad Phi vs. grad Phi2: {}'.format((phi.grad - phi2.grad).abs().max()))

print('#####################')

print('min(phi - phi.grad):   {}'.format((phi - phi.grad).min()))
print('min(phi2 - phi2.grad): {}'.format((phi2 - phi2.grad).min()))
print('max(phi - phi.grad):   {}'.format((phi - phi.grad).max()))
print('max(phi2 - phi2.grad): {}'.format((phi2 - phi2.grad).max()))
```

My result is:

```
Diff Phi vs. Phi2:           0.0
Diff grad Phi vs. grad Phi2: 0.00012117624282836914
#####################
min(phi - phi.grad):   -0.00012117624282836914
min(phi2 - phi2.grad): 0.0
max(phi - phi.grad):   5.537271499633789e-05
max(phi2 - phi2.grad): 0.0
```

robotgradient · Aug 20, 2022