Extending NeRF to support temporal coherence for image sequences
Hi,
I have been looking at your work on NeRF and have some ideas about future functionality that would be very handy for light field video synthesis.
Firstly, I was wondering about the feasibility of implementing temporal coherence checks as a loss function.
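As a rough sketch of the kind of thing I have in mind (PyTorch-style; `rgb_t`, `rgb_t_prev`, and `flow_weight` are just placeholder names, not anything from your codebase), the loss could penalize differences between renderings of adjacent frames taken from the same poses:

```python
import torch
import torch.nn.functional as F

def temporal_coherence_loss(rgb_t, rgb_t_prev, flow_weight=None):
    """Penalize per-pixel change between renderings of adjacent frames.

    rgb_t, rgb_t_prev: (N, 3) ray batches rendered from the same poses at
    frame t and frame t-1. An optional weight map could downweight pixels
    where real motion is expected (e.g. derived from optical flow).
    """
    diff = F.smooth_l1_loss(rgb_t, rgb_t_prev, reduction="none").sum(-1)
    if flow_weight is not None:
        diff = diff * flow_weight
    return diff.mean()

# total_loss = photometric_loss + lambda_tc * temporal_coherence_loss(rgb_t, rgb_t_prev)
```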
Secondly, it would be useful to be able to split image sets into volume tiles, so that computation over specific regions of interest can be parallelized.
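For the tiling I imagine something along these lines (only a sketch; the bounding box values and tile counts are placeholders): split the scene AABB into a regular grid of volume tiles, and then each tile can be trained or updated independently:

```python
import numpy as np

def make_tiles(aabb_min, aabb_max, tiles_per_axis):
    """Split the scene bounding box into a regular grid of volume tiles.

    Returns an array of shape (T, 2, 3) holding the (min, max) corners of
    each tile, which rays or sample points can then be bucketed into.
    """
    aabb_min = np.asarray(aabb_min, dtype=float)
    aabb_max = np.asarray(aabb_max, dtype=float)
    edges = [np.linspace(aabb_min[d], aabb_max[d], tiles_per_axis + 1) for d in range(3)]
    tiles = []
    for i in range(tiles_per_axis):
        for j in range(tiles_per_axis):
            for k in range(tiles_per_axis):
                lo = np.array([edges[0][i], edges[1][j], edges[2][k]])
                hi = np.array([edges[0][i + 1], edges[1][j + 1], edges[2][k + 1]])
                tiles.append((lo, hi))
    return np.array(tiles)

# tiles = make_tiles([-1, -1, -1], [1, 1, 1], tiles_per_axis=4)  # 64 tiles
```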
My third idea would be to wrap these volume tiles in sparse data structure checks with Taichi, in order to tell whether a volume has changed over time and needs to be updated.
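With Taichi's sparse layouts (pointer + dense SNodes), a per-tile change check could look roughly like this (a sketch only; the field names, tile size, and threshold are made up):

```python
import taichi as ti

ti.init(arch=ti.cpu)

TILE = 16   # hypothetical tile edge length in voxels
GRID = 8    # hypothetical number of tiles per axis

# Sparse layout: a pointer SNode of tiles, each tile a dense 16^3 block.
density_prev = ti.field(dtype=ti.f32)
density_curr = ti.field(dtype=ti.f32)
tile_changed = ti.field(dtype=ti.i32, shape=(GRID, GRID, GRID))

ti.root.pointer(ti.ijk, GRID).dense(ti.ijk, TILE).place(density_prev)
ti.root.pointer(ti.ijk, GRID).dense(ti.ijk, TILE).place(density_curr)

@ti.kernel
def mark_changed_tiles(threshold: ti.f32):
    # The struct-for only visits voxels inside tiles that are actually
    # allocated for the current frame, so empty space is skipped.
    for i, j, k in density_curr:
        if ti.abs(density_curr[i, j, k] - density_prev[i, j, k]) > threshold:
            tile_changed[i // TILE, j // TILE, k // TILE] = 1
```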
I'd assume there would need to be a system in place to read and write these volume containers to and from disk efficiently, and to manage the volume frames as they change over time.
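For storage, even something simple like writing only the tiles flagged as changed for each frame might be a reasonable starting point (hypothetical on-disk layout, NumPy-compressed; unchanged tiles resolve to the last frame that wrote them):

```python
import numpy as np
from pathlib import Path

def save_changed_tiles(out_dir, frame_idx, tiles):
    """Write only the tiles that changed this frame.

    tiles: dict mapping a tile index tuple (i, j, k) to its dense voxel array.
    """
    path = Path(out_dir) / f"frame_{frame_idx:06d}.npz"
    np.savez_compressed(path, **{f"{i}_{j}_{k}": v for (i, j, k), v in tiles.items()})
    return path

def load_tile(out_dir, frame_idx, tile_idx):
    """Walk backwards through frames until the requested tile is found."""
    key = "_".join(map(str, tile_idx))
    for f in range(frame_idx, -1, -1):
        path = Path(out_dir) / f"frame_{f:06d}.npz"
        if path.exists():
            with np.load(path) as data:
                if key in data:
                    return data[key]
    return None
```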
Additionally, it would be handy to be able to define a known camera rig to optimize against in specific cases.
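By a known rig I mean a fixed set of calibrated cameras whose relative extrinsics don't change between frames, so per-frame pose optimization could be reduced to a single rig transform. Something like this (field names are made up, not from your API):

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class RigCamera:
    name: str
    intrinsics: np.ndarray     # 3x3 K matrix, fixed per camera
    rig_from_cam: np.ndarray   # 4x4 fixed transform of this camera in rig space

@dataclass
class CameraRig:
    cameras: List[RigCamera]
    world_from_rig: np.ndarray  # 4x4, the only pose that would be optimized per frame

    def world_from_cam(self, cam: RigCamera) -> np.ndarray:
        # Per-camera world poses are derived from the rig pose, never optimized directly.
        return self.world_from_rig @ cam.rig_from_cam
```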
Let me know what you think!
Ian