pytti-core
It should be trivial for users to run a second pass over the frames they just generated. k*N passes over the sequence at X/(k*N) steps per frame might even give a...
* concrete use case: applying special effects to video that is accompanied by photogrammetric measurements
* experimental use case: applying a depth field as a guiding prior (e.g. weighted weakly,...
* [x] eliminated superfluous TargetImage constructor
* [ ] clean up debugging statements
* [ ] wherever possible, init tensors on devices instead of transferring
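The last checklist item can be sketched as follows. This is a hypothetical helper (`init_canvas` is not a pytti function): the point is that `torch.zeros(..., device=...)` allocates directly on the target device, whereas `torch.zeros(...).to(device)` allocates on the CPU first and then pays for a host-to-device copy.

```python
import torch

def init_canvas(height, width, device=None):
    # Hypothetical helper: pick the device once, then allocate directly on it.
    if device is None:
        device = "cuda" if torch.cuda.is_available() else "cpu"
    # Allocating with device= creates the tensor on-device in one step,
    # avoiding the extra CPU allocation + transfer of .to(device).
    return torch.zeros(3, height, width, device=device)

canvas = init_canvas(32, 32)
```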
For my audio animation experiments, and I think also as a general-purpose tool for animations that evolve, the current config syntax doesn't work very well:
- scenes are of...
https://github.com/cheind/pytorch-blender
Stream from Blender rendering into pytti's input stream... More broadly, I wonder what it'd take to make it so you could just pipe input into pytti? Like `ffmpeg...
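One plausible shape for the piping idea, sketched under assumptions: if ffmpeg is run with its real `-f rawvideo -pix_fmt rgb24` flags and output to stdout, each frame is exactly `width*height*3` headerless bytes, so pytti could slice the pipe into frame-sized chunks. `iter_raw_frames` is a hypothetical reader, not existing pytti API.

```python
import io

def iter_raw_frames(stream, width, height, channels=3):
    # Hypothetical reader for a command like:
    #   ffmpeg -i input.mp4 -f rawvideo -pix_fmt rgb24 -
    # which writes width*height*channels bytes per frame with no header.
    # NOTE: a real pipe may short-read; production code would loop until
    # frame_size bytes have arrived before yielding.
    frame_size = width * height * channels
    while True:
        buf = stream.read(frame_size)
        if len(buf) < frame_size:  # EOF (any truncated tail is dropped)
            return
        yield buf

# Simulate two 2x2 RGB frames arriving on a pipe:
fake = io.BytesIO(bytes(range(12)) * 2)
frames = list(iter_raw_frames(fake, 2, 2))  # two 12-byte frames
```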
Currently we only expose the learning rate. I want to give users the ability to change the underlying optimizer: it doesn't always have to be AdamW, and I want to know if different optimizers...
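A minimal sketch of what exposing the optimizer could look like, assuming a string-keyed registry resolved from config (the registry and `build_optimizer` are hypothetical names, not pytti's current code; the optimizer classes themselves are real `torch.optim` classes):

```python
import torch

# Hypothetical registry: map a config string to an optimizer class so the
# choice is no longer hard-coded to AdamW.
OPTIMIZERS = {
    "adamw": torch.optim.AdamW,
    "adam": torch.optim.Adam,
    "sgd": torch.optim.SGD,
    "rmsprop": torch.optim.RMSprop,
}

def build_optimizer(params, name="adamw", lr=0.1, **kwargs):
    try:
        cls = OPTIMIZERS[name.lower()]
    except KeyError:
        raise ValueError(f"unknown optimizer {name!r}; choose from {sorted(OPTIMIZERS)}")
    # Extra per-optimizer settings (momentum, betas, ...) pass through kwargs.
    return cls(params, lr=lr, **kwargs)

w = torch.zeros(3, requires_grad=True)
opt = build_optimizer([w], name="sgd", lr=0.05)
```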
Presumably we're already computing depth for the 3D warps, so we should get the loss for free. I suspect we are unnecessarily re-estimating depth here.
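The fix amounts to caching: estimate depth once per frame and hand the same result to both the warp and the loss. A minimal sketch of that pattern (the `DepthCache` class and the estimator are illustrative stand-ins, not pytti's actual depth code):

```python
class DepthCache:
    """Hypothetical cache: run the depth estimator at most once per frame
    and reuse the result for both the 3D warp and the depth loss."""

    def __init__(self, estimator):
        self.estimator = estimator
        self._cache = {}

    def depth_for(self, frame_id, frame):
        if frame_id not in self._cache:
            self._cache[frame_id] = self.estimator(frame)
        return self._cache[frame_id]

calls = []
def fake_estimator(frame):
    # Stand-in for an expensive depth model; records each invocation.
    calls.append(frame)
    return [d * 2 for d in frame]

cache = DepthCache(fake_estimator)
d_warp = cache.depth_for(0, [1, 2, 3])  # runs the estimator
d_loss = cache.depth_for(0, [1, 2, 3])  # served from cache, no re-estimation
```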
We had discussed at one point adding depth-map export as an option, so people could use the maps for Blender animation and such.
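One lightweight way to export depth without extra dependencies is a 16-bit binary PGM, which Blender and most image tools can read as grayscale. `write_depth_pgm` is a hypothetical sketch, not an existing pytti option; per the Netpbm format, 16-bit samples are written big-endian.

```python
import io
import struct

def write_depth_pgm(depth_rows, fileobj):
    # Hypothetical exporter: normalize a 2-D list of float depths to the
    # 0..65535 range and write a binary (P5) PGM.
    flat = [v for row in depth_rows for v in row]
    lo, hi = min(flat), max(flat)
    scale = 65535.0 / (hi - lo) if hi > lo else 0.0
    h, w = len(depth_rows), len(depth_rows[0])
    fileobj.write(f"P5 {w} {h} 65535\n".encode("ascii"))
    for row in depth_rows:
        for v in row:
            fileobj.write(struct.pack(">H", int((v - lo) * scale)))  # big-endian u16

buf = io.BytesIO()
write_depth_pgm([[0.0, 1.0], [2.0, 4.0]], buf)
```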
I think it would be helpful to add the model sizes next to the models, so that we could know how big each one is while selecting and get a better...
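The UI change could be as simple as decorating each entry with an approximate weight size. A sketch under stated assumptions: the parameter counts below are made-up placeholders (not real model sizes), and `size_label` is a hypothetical helper assuming fp32 (4 bytes per parameter) weights.

```python
# Made-up parameter counts for illustration only; real values would be
# looked up per model at release time.
PARAM_COUNTS = {
    "model-small": 38_000_000,
    "model-large": 151_000_000,
}

def size_label(name, bytes_per_param=4):
    # 4 bytes/param assumes fp32 checkpoints; fp16 would halve this.
    mb = PARAM_COUNTS[name] * bytes_per_param / 1e6
    return f"{name} (~{mb:.0f} MB)"

labels = [size_label(n) for n in PARAM_COUNTS]
```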