4K4D

How to get a 3D object from a new video?

Open adrida opened this issue 2 years ago • 5 comments

Hello, thank you for the great work. I was wondering if it is possible with the current implementation to pass 2–3 videos from a new scene as input and get a 3D animation file that could be rendered in Blender, for example?

adrida avatar Feb 06 '24 06:02 adrida

Hi @adrida, thanks for the interest. Currently, 4K4D focuses more on novel view synthesis and uses a different rendering pipeline from that of traditional triangle meshes. Thus, for creating 3D assets directly (with such a small view count), you should consider other 3D/4D neural reconstruction methods that focus on surface quality and use human priors (with an SDF field, or by directly optimizing meshes), like AniSDF or Relightable Avatar.

dendenxu avatar Feb 06 '24 08:02 dendenxu

I see, thanks a lot for your answer and for the references; I will check them out. Any idea on projects that would export 3D animated assets of large scenes with multiple people? I could provide more video angles as input if needed, and I don't necessarily need to be able to edit the 3D asset. I just want to "replay" the scene from different angles and navigate the 3D space (a bit like static NeRF approaches, where you can export the 3D reconstruction of an apartment into VR and be inside it).

adrida avatar Feb 06 '24 08:02 adrida

Any idea on projects that would export 3D animated assets of large scenes with multiple people?

Ah, for multi-person reconstruction, I recommend checking out CloseMocap and MultiNB (the first entry in the news of EasyMocap).

I could provide more video angles as input if needed, and I don't necessarily need to be able to edit the 3D asset. I just want to "replay" the scene from different angles and navigate the 3D space (a bit like static NeRF approaches, where you can export the 3D reconstruction of an apartment into VR and be inside it).

For replaying the reconstruction, there are generally two approaches:

  1. Support PCVR in the rendering algorithm itself (e.g. a connector with Unity VR, or native communication with VR devices). An example is the VR-NeRF method you mentioned. We're planning on adding this functionality to EasyVolcap, on which 4K4D is built.
  2. Export the reconstructed sequence to a renderable format and replay it (e.g. exporting a mesh sequence). This might come with significant quality loss.
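To make approach 2 concrete, here is a minimal, hypothetical sketch of exporting per-frame geometry as a Wavefront OBJ sequence, which Blender can replay (e.g. with a mesh-sequence import add-on). The `export_sequence` helper and the placeholder triangle geometry are illustrative assumptions; a real pipeline would extract per-frame vertices and faces from the reconstruction (e.g. via marching cubes on a density field).

```python
import os

def write_obj(path, vertices, faces):
    """Write a minimal Wavefront OBJ file.

    OBJ uses 1-based vertex indices, so we offset the 0-based face
    indices on write.
    """
    with open(path, "w") as f:
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")
        for a, b, c in faces:
            f.write(f"f {a + 1} {b + 1} {c + 1}\n")

def export_sequence(out_dir, frames):
    """Export one OBJ per time step.

    frames: iterable of (vertices, faces) tuples, one per frame.
    Returns the list of written file paths (frame_0000.obj, ...).
    """
    os.makedirs(out_dir, exist_ok=True)
    paths = []
    for i, (verts, faces) in enumerate(frames):
        path = os.path.join(out_dir, f"frame_{i:04d}.obj")
        write_obj(path, verts, faces)
        paths.append(path)
    return paths

# Placeholder geometry: a single triangle repeated over two frames.
tri = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
faces = [(0, 1, 2)]
export_sequence("mesh_seq", [(tri, faces), (tri, faces)])
```

The resulting `frame_XXXX.obj` files can then be imported into Blender as an animated sequence; note that this format keeps only geometry, so view-dependent appearance from the reconstruction is lost.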

dendenxu avatar Feb 06 '24 08:02 dendenxu

I see, thanks a lot for sharing those projects I will take a look.

Is there any way I could help with the VR feature you are planning to add to EasyVolcap? Your work is very inspiring, and I see a lot of great potential applications, especially in AR/MR. If we could reconstruct a dynamic scene in real time and render it through a Meta Quest 3 or an Apple Vision Pro, it would open up limitless possibilities.

I am not an expert in 3D reconstruction, but from the few papers/surveys I have been reading, I feel the state of the art is very close to achieving this.

adrida avatar Feb 06 '24 09:02 adrida

Indeed, the field is advancing fast, and I also think that future is not far away.

Is there any way I could help with the VR feature you are planning to add to EasyVolcap?

I can't think of a specific problem to solve right now, but we always welcome PRs of any kind. Feel free to contribute!

dendenxu avatar Feb 07 '24 12:02 dendenxu