pytti-core
deep image prior
Still to do:
- [ ] Add option to use MADGRAD optimizer (should be default for DIP?)
- [x] Add EMA
- [ ] Add noise annealing?
- [ ] May need to add grad scaling (other AMP features?); see the sketch after this list
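For the grad-scaling item, a minimal sketch of how `torch.cuda.amp.GradScaler` could slot into the DIP training step; the `loss_fn` and `opt` names are placeholders here, not pytti's actual API:

```python
import torch

# Hedged sketch: AMP grad scaling around the DIP forward/backward pass.
# `net`, `net_input`, `loss_fn`, and `opt` are stand-in names.
scaler = torch.cuda.amp.GradScaler()

def train_step(net, net_input, loss_fn, opt):
    opt.zero_grad()
    with torch.cuda.amp.autocast():
        out = net(net_input).float()
    loss = loss_fn(out)
    scaler.scale(loss).backward()  # scale loss so fp16 grads don't underflow
    scaler.step(opt)               # unscales grads; skips step on inf/nan
    scaler.update()                # adjust the scale factor for the next step
    return loss.item()
```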
Re: noise annealing (via https://github.com/LAION-AI/notebooks/blob/main/DALLE2-Prior%2BDeep-Image-Prior.ipynb):
```python
import math
import torch

# Noise annealing: blend the net input with Gaussian noise on a
# cosine schedule so the noise contribution anneals to zero.
noise_ramp = 1 - min(1, itt / iterations)
net_input_noised = net_input
if input_noise_strength:
    phi = min(1, noise_ramp * input_noise_strength) * math.pi / 2
    noise = torch.randn_like(net_input)
    # phi == 0 keeps the clean input; phi == pi/2 is pure noise.
    net_input_noised = net_input * math.cos(phi) + noise * math.sin(phi)

with torch.cuda.amp.autocast():
    out = net(net_input_noised * input_scale).float()
```
Current blocker: a lot of dependent code assumes the latent is a single tensor. Downstream operations on the image representation call tensor methods on it, like `clone`, that have no good analogs for a module dict/list.
I think the solution, rather than adding special cases, is to generalize the other image representations to expect containers holding arbitrarily many image-representation components of heterogeneous sizes, as sketched below.
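As a rough illustration of the container direction (the `LatentContainer` name and its methods are hypothetical, not existing pytti classes), the idea is to give the container the tensor-like surface that downstream code already calls:

```python
import torch
from torch import nn

class LatentContainer(nn.ParameterDict):
    """Hypothetical: a dict of heterogeneous latent components that
    mimics the tensor methods downstream code expects (clone, copy_)."""

    def clone(self):
        # Deep-copy each component into a new container.
        return LatentContainer(
            {k: nn.Parameter(v.detach().clone()) for k, v in self.items()}
        )

    def copy_(self, other):
        # Component-wise in-place copy, mirroring torch.Tensor.copy_.
        for k, v in self.items():
            v.data.copy_(other[k].data)
        return self
```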
Updating EMAImage to operate on parameter dicts/lists could solve a good chunk of this. For example:
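A minimal sketch of an EMA step generalized from a single tensor to a keyed dict of parameters; the function name and decay value are assumptions, and EMAImage's real interface may differ:

```python
import torch

@torch.no_grad()
def ema_update(ema_params, live_params, decay=0.99):
    """Per component: ema = decay * ema + (1 - decay) * live."""
    for name, live in live_params.items():
        ema_params[name].mul_(decay).add_(live.detach(), alpha=1 - decay)
```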
Migrate to branch `dip_ema_simple`.