
Convergence criteria not met...

Open · Algomorph opened this issue 5 years ago · 6 comments

@dgrzech

Thanks for uploading this, good work, man!

I've been trying to reproduce the KillingFusion/SobolevFusion results for over a year now. I have an implementation on top of InfiniTAM.

Here is the canonical model over the first 65 frames of the Snoopy sequence: https://www.youtube.com/watch?v=0C2Djk4jo4I. Last frame: [image]

Tracking kinda sucks and drifts a lot; that part is being worked on. I've been capping the iterations at 200 per frame and using voxel hashing to speed up the processing.

What really escapes me, though, is that, according to Mira Slavcheva herself in her emails to me, non-rigid alignment should converge within 30-150 iterations. The max-warp threshold was set to "0.1 mm" for KillingFusion (it seems more like 0.1 voxels, IMHO).
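
For reference, a minimal sketch of what such a termination check might look like, with the threshold expressed in voxel units; `hasConverged`, `Vec3f`, and the parameter names are hypothetical, not sobfu's or InfiniTAM's API:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3f { float x, y, z; };

// Stop when the largest per-voxel warp update falls below a threshold
// expressed in voxels; multiply by the voxel size to compare against a
// metric threshold such as "0.1 mm".
bool hasConverged(const std::vector<Vec3f>& warp_updates,
                  float max_update_threshold_voxels) {
    float max_update = 0.0f;
    for (const Vec3f& u : warp_updates) {
        float mag = std::sqrt(u.x * u.x + u.y * u.y + u.z * u.z);
        max_update = std::max(max_update, mag);
    }
    return max_update < max_update_threshold_voxels;
}
```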

It seems like you've run into the same issue, i.e. your optimizations here for Snoopy run for up to (and are capped at) 2048 iterations.

  1. You're seeing oscillations in warps for a small percentage of voxels across the voxel grid, no?
  2. What is your take on this? Is the whole convergence story just a story, i.e. would convergence happen only with the max-warp update threshold set unreasonably high, resulting in bad reconstruction quality / high drift?

Thanks!

Algomorph · Feb 08 '19 21:02

P.S. ultimately related to #1

Algomorph · Feb 08 '19 21:02

hi!

your model in the canonical pose looks somewhat better than mine, good job.

in her e-mails Mira quoted me the same number of iterations till convergence. indeed, when running my code, the warp for some of the voxels never really converges. the most troublesome area for me seems to be the voxels at the sdf truncation boundary.
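
as a hypothetical diagnostic for that observation: flag the voxels whose warp update stays above the threshold and check whether they lie near the truncation boundary. all names and the normalized-tsdf convention below are placeholders, not sobfu's:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Placeholder per-voxel record; the field names are not sobfu's.
struct VoxelDiag {
    float tsdf;        // truncated SDF value, assumed normalized to [-1, 1]
    float update_mag;  // magnitude of the last warp update, in voxels
};

// Count how many non-converged voxels sit near the truncation boundary
// (|tsdf| close to 1), to test the hypothesis that those are the ones
// that keep oscillating.
void reportBoundaryOscillation(const std::vector<VoxelDiag>& voxels,
                               float update_threshold, float band = 0.05f) {
    int stuck = 0, stuck_at_boundary = 0;
    for (const VoxelDiag& v : voxels) {
        if (v.update_mag < update_threshold) continue;  // this voxel converged
        ++stuck;
        if (std::fabs(v.tsdf) > 1.0f - band) ++stuck_at_boundary;
    }
    std::printf("non-converged voxels: %d, of which near the truncation boundary: %d\n",
                stuck, stuck_at_boundary);
}
```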

i do think that 30-150 gradient descent iterations to guarantee convergence sounds very optimistic. i also don't really know why i wasn't able to exactly reproduce the results from the paper, and i'd be really happy to learn what the problem is and fix it.

dgrzech · Feb 10 '19 17:02

Sure, I'm available for any correspondence regarding this now or in the future. TBH, Mira didn't sound completely sure in her emails about what was going on in her code, and some of the statements she made seem to contradict the math given in the papers.

I'll take a deeper debugging pass through your code later on and see if I notice anything that doesn't mesh with what I have. Maybe we could improve each other's code. I wouldn't say that my result is better: I used the masked version of the Snoopy sequence (masks as provided on Mira's website). Also, for some reason, the InfiniTAM rigid alignment / tracking that I used instead of Mira's SDF-2-SDF algorithm fails horribly beyond the first hundred frames (hence the result I give only goes to frame 65).

I'm working on an alternative hierarchical scheme and a more advanced filtering technique (beyond the conventional bilateral filter) for the input data; the result is open source in one of my repos, and I'll let you know if I ever get anywhere with that.

Algomorph · Feb 11 '19 19:02

Hi @Algomorph,

Thanks a lot for sharing the results. You got a very nice reconstruction! I am trying to re-implement SobolevFusion on top of InfiniTAM as well; can I ask how many iterations and what step size you used for this reconstruction? The screenshots are my results after 65 frames, and I don't get as good a result as yours (I am using 1000 iterations per frame and step size 0.05). I haven't got the Sobolev gradient flow implemented yet, so it's just the data term and the Tikhonov term in L2 space... any comments would be helpful.

Without mask: [image]

With mask: [image]
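
For concreteness, a minimal sketch of one descent step of the kind described above (data term plus Tikhonov term in L2): the data-term gradient is the SDF residual times the live gradient, and the Tikhonov gradient is the negative Laplacian of the warp field. All names below are placeholders, not sobfu's or InfiniTAM's:

```cpp
#include <array>
#include <cstddef>
#include <vector>

using Vec3 = std::array<float, 3>;

// One L2 gradient-descent step, data term plus Tikhonov regularizer:
//   grad E_data(x) = (phi_live(x + u) - phi_can(x)) * grad phi_live(x + u)
//   grad E_tikh(x) = -laplacian(u)(x)   (from the energy 0.5 * |grad u|^2)
void descentStep(std::vector<Vec3>& warp,               // u, one entry per voxel
                 const std::vector<float>& residual,    // phi_live(x+u) - phi_can(x)
                 const std::vector<Vec3>& grad_live,    // grad phi_live at x+u
                 const std::vector<Vec3>& laplacian_u,  // discrete laplacian of u
                 float step_size, float tikhonov_weight) {
    for (std::size_t i = 0; i < warp.size(); ++i) {
        for (int c = 0; c < 3; ++c) {
            float g_data = residual[i] * grad_live[i][c];
            float g_tikh = -laplacian_u[i][c];
            warp[i][c] -= step_size * (g_data + tikhonov_weight * g_tikh);
        }
    }
}
```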

ziruiw-dev · Mar 21 '19 18:03

@Ryan-Zirui-Wang, my result was with the Tikhonov term and Sobolev smoothing on, and I believe the specific one you're looking at was done on the CPU with 200 iterations, step size 0.1, and voxel block hashing. The Tikhonov smoothing weight, I believe, was set to 0.2.
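
For context, a minimal sketch of what the Sobolev smoothing step amounts to: before the update is applied, the gradient field is convolved with a small separable 1D kernel along each axis. The kernel values are left as placeholders here; the actual filter in SobolevFusion is derived by solving a small linear system:

```cpp
#include <vector>

// Convolve one channel of the gradient field with a 1D kernel along one
// axis of a dense dim_x * dim_y * dim_z grid (zero padding at borders).
// Applying this along x, then y, then z, for each of the three gradient
// channels, gives the separable smoothing.
void smooth1D(std::vector<float>& grad, int dim_x, int dim_y, int dim_z,
              const std::vector<float>& kernel, int axis) {
    const int stride[3] = {1, dim_x, dim_x * dim_y};
    const int dims[3]   = {dim_x, dim_y, dim_z};
    const int r = static_cast<int>(kernel.size()) / 2;
    std::vector<float> out(grad.size(), 0.0f);
    for (int z = 0; z < dim_z; ++z)
        for (int y = 0; y < dim_y; ++y)
            for (int x = 0; x < dim_x; ++x) {
                const int idx = x + dim_x * (y + dim_y * z);
                const int coord[3] = {x, y, z};
                float acc = 0.0f;
                for (int k = -r; k <= r; ++k) {
                    const int c = coord[axis] + k;
                    if (c < 0 || c >= dims[axis]) continue;  // zero padding
                    acc += kernel[k + r] * grad[idx + k * stride[axis]];
                }
                out[idx] = acc;
            }
    grad.swap(out);
}
```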

I suggest you grab my code directly from https://github.com/Algomorph/InfiniTAM by forking & branching, and start stripping out dependencies that aren't 100% necessary, like VTK, so you can build it easily. There is a lot of code I used for analysis that should simply be moved to a separate repository.

After this you can merge your code with my implementation (if necessary) to combine your effort with mine. As you know, I'm working on a separate codebase right now, but I can be responsive if you choose to go this route and any issues arise.

P.S. My implementation is in the branch feature/DynamicFusion. I had started on the CUDA implementation before I halted and transitioned to my newer alternative experiment.

Algomorph · Mar 21 '19 18:03

@Algomorph thanks for the input. I'll try your InfiniTAM branch and see what's different.

ziruiw-dev · Mar 22 '19 11:03