Chen Wang
The short answer is that the rotation and translation are independent in the Lie group representation, but appear dependent in the Lie algebra because their relationship is (y is...
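For context, here is the standard $SE(3)$ relation I believe the truncated formula above refers to (this is my assumption, not necessarily the exact expression intended). Writing an algebra element as $\xi = (\rho, \phi)$ with rotational part $\phi$ and translational part $\rho$:

$$
\exp(\xi^\wedge) = \begin{pmatrix} \exp(\phi^\wedge) & J_l(\phi)\,\rho \\ 0 & 1 \end{pmatrix},
\qquad \text{so} \quad t = J_l(\phi)\,\rho,
$$

i.e., the group translation $t$ depends on the rotation through the left Jacobian $J_l(\phi)$, even though $R$ and $t$ are stored independently in the group element.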
Some points to fix before further review and merge:
- New functions like `bdot` can be implemented via `torch.einsum`. The new functions are not used by LieTensor, thus cannot...
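For reference, a minimal sketch of how a batched dot product like `bdot` could be expressed with `torch.einsum` (the exact name and signature in the PR are assumptions here):

```python
import torch

def bdot(x, y):
    # Batched dot product over the last dimension: (..., n) x (..., n) -> (...).
    # A hypothetical einsum-based replacement for a dedicated `bdot` function.
    return torch.einsum('...i,...i->...', x, y)

x = torch.randn(4, 3)
y = torch.randn(4, 3)
print(bdot(x, y).shape)  # torch.Size([4])
```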
@zeroAska Yes, it is supported. You may do something like

```python
opt1 = SGD(net1.parameters())
opt2 = LM(net2, strategy=strategy)
for i in range(epochs):
    opt1.step()               # outer, gradient-based update
    for j in range(iterations):
        opt2.step(input)      # inner, second-order (LM) iterations
```

Bi-level...
You can use `net.module1.parameters()` and `net.module2` to achieve this.
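A minimal sketch of that split, assuming a parent module with two sub-modules named `module1` and `module2` (the names are illustrative) and PyPose's `LM` optimizer from the snippet above:

```python
import torch
import pypose as pp

class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.module1 = torch.nn.Linear(3, 3)  # part trained by the gradient-based optimizer
        self.module2 = torch.nn.Linear(3, 3)  # part solved by the LM optimizer

net = Net()
# Give module1's parameters to SGD and module2 (the whole sub-module) to LM.
opt1 = torch.optim.SGD(net.module1.parameters(), lr=1e-2)
opt2 = pp.optim.LM(net.module2)

inputs = torch.randn(5, 3)
opt2.step(inputs)  # one LM iteration, updating module2 only
```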
For initialization, it is no different from a neural network: you may perform in-place value assignment on module parameters, e.g. `net.module.weight1.data.fill_(value)`, before solving the problem. More information is [here](https://stackoverflow.com/questions/49433936/how-do-i-initialize-weights-in-pytorch). For...
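For example, a brief sketch reusing the `net` from the snippet above (the attribute names are illustrative and depend on how your module is defined):

```python
import torch

with torch.no_grad():
    net.module2.weight.fill_(0.1)                        # constant initialization
    torch.nn.init.xavier_uniform_(net.module1.weight)    # or a standard init scheme
```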
We suggest only retaining the gradients from the last iteration for the inner optimization, as it will be more efficient and equivalent to back-propagating through the inner iterative optimization. More...
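To make the suggestion concrete, here is a minimal, generic illustration of the idea (not PyPose-specific; the toy inner/outer losses, learning rate, and variable names are made up): run all but the last inner iteration detached from the autograd graph, and keep the graph only for the final step.

```python
import torch

# Toy setup: an outer parameter w produces x; the inner problem fits theta to x;
# the outer loss then scores the fitted theta.
w = torch.randn(3, requires_grad=True)    # outer (network) parameter
x = 2.0 * w                               # stand-in for the outer network's output

def inner_loss(theta, x):
    return ((theta - x) ** 2).sum()

theta = torch.zeros(3, requires_grad=True)
lr, iterations = 0.1, 10

# All but the last inner iteration: update theta, then detach, so these steps
# do not stay in the outer graph.
for _ in range(iterations - 1):
    g, = torch.autograd.grad(inner_loss(theta, x), theta)
    theta = (theta - lr * g).detach().requires_grad_()

# Final inner iteration: keep the graph (create_graph=True), so the outer loss
# back-propagates through this single step only.
g, = torch.autograd.grad(inner_loss(theta, x), theta, create_graph=True)
theta = theta - lr * g

outer_loss = (theta ** 2).sum()
outer_loss.backward()                     # reaches w only through the last inner step
print(w.grad)
```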
> We suggest only retaining the gradients from the last iteration for the inner optimization, as it will be more efficient and equivalent to back-propagating through the inner iterative optimization....
They don't have to have the same loss. Another example with different loss functions is [this paper](https://arxiv.org/pdf/2302.11434.pdf).
> > We suggest only retaining the gradients from the last iteration for the inner optimization, as it will be more efficient and equivalent to back-propagating through the inner iterative...
Hi @aerogjy, are we still going to add this feature? Could you provide more mathematical details for this issue?