Luis Pineda

Results: 44 issues from Luis Pineda

## 🐛 Bug

Reported by @joeaortiz. When using `RobustCostFunction`, we get different results from `Objective.error()` depending on whether vectorization is turned on or off. The reason is that when vectorization is off we get...
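A minimal sketch of the kind of consistency check this bug is about. `RobustCostFunction` and `Objective.error()` come from the report; everything else here (the toy cost function, `th.HuberLoss`, and toggling vectorization through `TheseusLayer`'s `vectorize` flag) is an assumption about the theseus API, not a confirmed reproduction.

```python
import torch
import theseus as th

x = th.Vector(1, name="x")

def err_fn(optim_vars, aux_vars):
    # Toy residual x - 1, just to give the objective something to evaluate.
    return optim_vars[0].tensor - 1.0

cost = th.AutoDiffCostFunction([x], err_fn, 1, name="cost")
robust_cost = th.RobustCostFunction(
    cost, th.HuberLoss, th.Variable(torch.zeros(1, 1), name="log_radius")
)

objective = th.Objective()
objective.add(robust_cost)
objective.update({"x": torch.full((1, 1), 3.0)})

err_no_vec = objective.error()  # vectorization off

# Assumption: constructing a TheseusLayer with vectorize=True installs the
# vectorized error computation on the same objective.
th.TheseusLayer(th.GaussNewton(objective), vectorize=True)
err_vec = objective.error()

print(torch.allclose(err_no_vec, err_vec))  # expected True; the bug makes them differ
```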

The current vectorization code computes a vectorized weighted error/jacobian, which means that cost functions of the same type that use different weight types cannot be grouped together. Since the...

enhancement
on hold
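One way to picture the limitation described above (purely illustrative; the `weight` attribute name is an assumption about theseus's `CostFunction`): grouping for vectorization currently has to key on both the cost-function type and the weight type, because the vectorized path folds the weight into the error/jacobian.

```python
from collections import defaultdict

def group_for_vectorization(cost_functions):
    # Cost functions can only be batched together if both their type and the
    # type of their cost weight match, since the vectorized computation
    # produces the *weighted* error/jacobian directly.
    groups = defaultdict(list)
    for cf in cost_functions:
        groups[(type(cf).__name__, type(cf.weight).__name__)].append(cf)
    return groups
```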

It would be useful for `rand` and `randn` to receive a scale argument to control the magnitude of the sampled groups. From offline discussion, one possibility is to make `scale:...

enhancement
good first issue
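A sketch of what a scale-aware sampler could look like, written as a free-standing helper rather than the proposed API change; `th.SE3.exp_map` with the usual tangent-vector signature is assumed, and the helper name is hypothetical.

```python
import torch
import theseus as th

def scaled_randn_se3(batch_size: int, scale: float = 1.0) -> th.SE3:
    # Sample a tangent vector from N(0, I), scale it to control the magnitude
    # of the sampled group elements, then map it onto the group.
    tangent = scale * torch.randn(batch_size, 6)  # 6 = dof of SE3
    return th.SE3.exp_map(tangent)

# Small scale keeps samples near the identity; large scale spreads them out.
near_identity = scaled_randn_se3(4, scale=0.01)
spread_out = scaled_randn_se3(4, scale=2.0)
```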

_Originally posted by @mhmukadam in https://github.com/facebookresearch/theseus/pull/46#discussion_r795593990_

enhancement

How about

```
if backward_mode == BackwardMode.IMPLICIT or backward_mode == BackwardMode.TRUNCATED
```

so we have fewer redundant variables?

_Originally posted by @mhmukadam in https://github.com/facebookresearch/theseus/pull/81#discussion_r810434614_

good first issue
refactor
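A sketch of the suggested check written as a membership test, assuming `BackwardMode` is a Python `Enum` as in theseus; the enum values and the helper name below are illustrative, not the library's actual code.

```python
from enum import Enum, auto

class BackwardMode(Enum):
    FULL = auto()
    IMPLICIT = auto()
    TRUNCATED = auto()

def uses_truncated_style_backward(backward_mode: BackwardMode) -> bool:
    # Equivalent to the quoted condition, written so that no extra boolean
    # variables need to be kept around.
    return backward_mode in (BackwardMode.IMPLICIT, BackwardMode.TRUNCATED)
```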

## 🐛 Bug

`_compute_sdf_data_from_map` returns a variable that has no name, so there is no way to update the SDF data via `theseus_layer.forward()` when this constructor was used to create...

bug
good first issue
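A minimal sketch of why the missing name matters, under the assumption that `TheseusLayer.forward()` matches input tensors to variables by name; the `"sdf_data"` key and the shapes below are hypothetical.

```python
import torch
import theseus as th

# A variable created without an explicit name only gets an auto-generated one,
# so a user has no stable key to pass through the layer's input dictionary.
unnamed = th.Variable(torch.zeros(1, 10, 10))
named = th.Variable(torch.zeros(1, 10, 10), name="sdf_data")

# Only the named variable can be refreshed at call time, e.g. (sketch):
#   theseus_layer.forward({"sdf_data": new_sdf_tensor})
print(unnamed.name, named.name)
```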

Torch autograd's jacobian, used by [LearnableCostFunction](https://github.com/facebookresearch/theseus/blob/main/theseus/core/cost_function.py#L163), computes cross-batch gradients, which is undesirable. I haven't seen an out-of-the-box solution, so we might need to do some manual backward()...

performance
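One possible way around the cross-batch terms, sketched with a toy error function (not the library's code): compute per-sample Jacobians with `torch.func.vmap(torch.func.jacrev(...))` (PyTorch 2.0+; the same combinators live in `functorch` on older versions) instead of differentiating the whole batch at once.

```python
import torch
from torch.func import jacrev, vmap

def err_fn(x):
    # Toy per-sample error: R^3 -> R^2.
    return torch.stack([x[0] * x[1], x[2] ** 2])

x = torch.randn(8, 3)

# Differentiating the batched function gives an (8, 2, 8, 3) tensor, most of
# which is zero cross-batch blocks we never need.
full = torch.autograd.functional.jacobian(lambda b: vmap(err_fn)(b), x)

# vmap(jacrev(...)) computes only the per-sample (8, 2, 3) blocks.
per_sample = vmap(jacrev(err_fn))(x)

# The per-sample result matches the block diagonal of the full Jacobian.
idx = torch.arange(8)
assert torch.allclose(per_sample, full[idx, :, idx, :])
```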

Sophus uses rotation-then-translation everywhere; GTSAM is the same except for SE2, where it is flipped. We should consider refactoring SE2 in that case, unless there are any consequences (and why...

It would be useful to test (if not already covered) a scenario where training is cut off and resumed from a checkpoint, and a scenario where inference is performed by loading...

enhancement
experiments
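A generic sketch of the checkpoint round-trip such a test would exercise; the function names and dictionary keys are hypothetical, standard PyTorch rather than anything specific to this repo.

```python
import torch

def save_checkpoint(path, model, optimizer, epoch):
    # Persist everything needed to continue training exactly where it stopped.
    torch.save(
        {"model": model.state_dict(), "optim": optimizer.state_dict(), "epoch": epoch},
        path,
    )

def resume_training(path, model, optimizer):
    # Restore model and optimizer state; return the epoch to continue from.
    ckpt = torch.load(path)
    model.load_state_dict(ckpt["model"])
    optimizer.load_state_dict(ckpt["optim"])
    return ckpt["epoch"] + 1

def load_for_inference(path, model):
    # Inference-only path: weights are enough, optimizer state is not needed.
    model.load_state_dict(torch.load(path)["model"])
    model.eval()
    return model
```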