GNeuVox
Pre-training
Hi, The repository doesn't have code for pre-training, only fine tuning code is released. Would you have some pointers on how to implement the pre-training pipeline with a custom dataset?
Thank you for your attention. There is a related question in this issue.
Thanks for the reference issue. One thing I wanted to clarify: a given batch of rays (all 6 patches) is from the same image, right? And then you iterate over images of different subjects randomly throughout the training process?
Also, for pre-training, how is the latent embedding Z initialized for each new subject? Is it a one-hot encoding, a linear encoding (0, 1, 2, ...), or random vectors?
Hello. As you mentioned, a given batch of rays (all 6 patches) comes from a single image, and during pre-training the batches are randomly selected from images of different objects. The latent embedding Z is a trainable parameter initialized to zero.
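For anyone implementing this with a custom dataset, the sampling and initialization described above might look like the following. This is a minimal NumPy sketch, not the actual GNeuVox code: the names (`num_subjects`, `latent_dim`, `sample_batch`) and the assumption of a fixed image count per subject are mine.

```python
import numpy as np

# Hypothetical pre-training setup: one trainable latent code Z per subject,
# zero-initialized (as stated in the reply), looked up by subject index.
num_subjects = 8        # assumed size of the custom dataset
latent_dim = 128        # assumed latent embedding size
images_per_subject = 100  # assumed; a real dataset would vary per subject

# Trainable parameter table, initialized to zero.
Z = np.zeros((num_subjects, latent_dim), dtype=np.float32)

rng = np.random.default_rng(0)

def sample_batch():
    """Pick a random subject, then a random image of that subject.
    All 6 ray patches of the batch would come from this single image;
    the subject's latent code z is fed to the network alongside the rays."""
    subject_id = int(rng.integers(num_subjects))
    image_id = int(rng.integers(images_per_subject))
    z = Z[subject_id]
    return subject_id, image_id, z

subject_id, image_id, z = sample_batch()
assert z.shape == (latent_dim,) and np.all(z == 0.0)
```

In a real pipeline, `Z` would be registered as an optimizer parameter (e.g. an `nn.Embedding` in PyTorch with its weight zeroed) so each subject's code is updated only when that subject's images are sampled.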
Thanks for the response! I have two more follow-up questions about pre-training:
- For every different object in the dataset, is a different Z vector (initialized to 0) used?
- For every different object, is a separate individual voxel grid used while pre-training? The codebase doesn't mention per-subject individual voxels, hence the question.
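The thread leaves this second question open, but to make the two alternatives concrete: if each object did keep its own voxel grid, the per-subject parameters would form a table indexed by subject id, analogous to the latent codes. A hypothetical sketch (all names, the resolution, and the feature dimension are assumptions, not the repository's actual layout):

```python
import numpy as np

# Hypothetical per-subject parameter layout, if each object kept its own
# voxel grid during pre-training (the codebase does not confirm this).
num_subjects = 8
grid_res = 32     # assumed voxel resolution per axis
feat_dim = 16     # assumed feature channels per voxel
latent_dim = 128  # assumed latent embedding size

# One trainable feature volume and one zero-initialized latent code per
# subject, both selected by subject index during sampling.
voxels = np.zeros((num_subjects, feat_dim, grid_res, grid_res, grid_res),
                  dtype=np.float32)
Z = np.zeros((num_subjects, latent_dim), dtype=np.float32)

def params_for(subject_id):
    """Fetch the per-subject parameters that would be optimized together
    whenever an image of this subject is sampled."""
    return voxels[subject_id], Z[subject_id]

v, z = params_for(3)
assert v.shape == (feat_dim, grid_res, grid_res, grid_res)
assert z.shape == (latent_dim,)
```

The alternative design, a single shared voxel grid conditioned on the per-subject code Z, would drop the leading `num_subjects` axis from `voxels`; which of the two the released fine-tuning code implies is exactly what the question above is asking.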