David Marx
* [x] eliminated superfluous TargetImage constructor
* [ ] clean up debugging statements
* [ ] wherever possible, init tensors on devices instead of transferring
https://github.com/cheind/pytorch-blender: stream from Blender rendering into pytti's input stream. More broadly, I wonder what it'd take to make it so you could just pipe input into pytti, like `ffmpeg...
Currently we only expose the learning rate. I want to give the user the ability to change the underlying optimizer; it doesn't always have to be AdamW, and I want to know if different optimizers...
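One way to expose this is a small name-to-constructor registry resolved from the config. This is a hypothetical sketch, not pytti's actual API: `make_optimizer` and the registry keys are made-up names, and the lambdas stand in for the `torch.optim` classes (AdamW, SGD, MADGRAD, ...) that would really be constructed.

```python
def make_optimizer(name, params, lr, registry):
    """Resolve an optimizer constructor by config name (case-insensitive)."""
    try:
        ctor = registry[name.lower()]
    except KeyError:
        raise ValueError(
            f"unknown optimizer {name!r}; choices: {sorted(registry)}"
        ) from None
    return ctor(params, lr=lr)

# Stand-ins for torch.optim classes, to keep the sketch self-contained:
DEFAULT_REGISTRY = {
    "adamw": lambda params, lr: ("AdamW", lr),
    "sgd": lambda params, lr: ("SGD", lr),
}
```

With this shape, adding a new optimizer is a one-line registry entry, and the config just carries a string plus whatever keyword arguments that optimizer accepts.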
Presumably we're already computing depth for the 3D warps, so we should get this loss for free. I suspect we are unnecessarily re-estimating depth here.
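A minimal way to share the estimate would be to memoize one depth map per frame so the warp and the depth loss hit the estimator once. `DepthCache` is a hypothetical helper, not pytti's actual internals; `estimator` stands in for whatever depth model pytti calls.

```python
class DepthCache:
    """Memoize one depth estimate per frame so the 3D warp and the
    depth loss can share it instead of re-running the depth model."""

    def __init__(self, estimator):
        self.estimator = estimator  # stand-in for the depth model's forward pass
        self._frame_id = None
        self._depth = None

    def depth_for(self, frame_id, image):
        # Re-estimate only when the frame changes.
        if frame_id != self._frame_id:
            self._depth = self.estimator(image)
            self._frame_id = frame_id
        return self._depth
```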
Currently set to the magic number of 200.
still to do:
- [ ] Add option to use MADGRAD optimizer (should be default for DIP?)
- [x] Add EMA
- [ ] Add noise annealing?
- [ ]...
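The note doesn't pin down a schedule for the noise annealing; a cosine decay of the noise scale over training is one common choice. A sketch, with illustrative constants:

```python
import math

def noise_scale(step, total_steps, start=0.1, end=0.0):
    """Cosine-anneal the injected-noise magnitude from `start` to `end`."""
    t = min(step / max(total_steps, 1), 1.0)
    return end + (start - end) * 0.5 * (1.0 + math.cos(math.pi * t))
```

The returned scale would multiply whatever noise gets added to the image model each step, so early steps explore and late steps settle.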
see https://github.com/pytti-tools/pytti-core/pull/192#issuecomment-1144303103
https://github.com/lucidrains/ema-pytorch/blob/main/ema_pytorch/ema_pytorch.py The current EMAImage assumes we only need to track EMA on a single tensor, which would preclude EMA-ing a DIP image model (or any other implicit model).
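Generalizing would mean tracking one shadow value per named parameter rather than one tensor, which is roughly what ema-pytorch does over a module's state dict. A minimal sketch: `EMAParams` is a hypothetical name, plain floats stand in for torch tensors, and the decay value is illustrative.

```python
class EMAParams:
    """Track an exponential moving average per named parameter, so a
    multi-tensor model (e.g. DIP) can be EMA-ed, not just one image tensor."""

    def __init__(self, params, decay=0.99):
        self.decay = decay
        self.shadow = dict(params)  # name -> shadow value, seeded from the model

    def update(self, params):
        d = self.decay
        for name, value in params.items():
            # Standard EMA update: shadow <- d * shadow + (1 - d) * current
            self.shadow[name] = d * self.shadow[name] + (1 - d) * value
```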
~~Corresponding losses and computed weights don't seem to be getting logged either.~~ Never mind: I had display_every set to an arbitrarily high value to suppress this reporting, I think for debugging...