
Pre-training

Open ctalegaonkar opened this issue 1 year ago • 5 comments

Hi, the repository only includes the fine-tuning code; the pre-training code hasn't been released. Would you have some pointers on how to implement the pre-training pipeline with a custom dataset?

ctalegaonkar avatar Jul 14 '23 18:07 ctalegaonkar

Thank you for your attention. There is a related issue that discusses this question.

taoranyi avatar Jul 17 '23 08:07 taoranyi

Thanks for the reference issue. One thing I wanted to clarify: a given batch of rays (all 6 patches) comes from the same image, right? And then you iterate randomly over images of different subjects throughout training?

ctalegaonkar avatar Jul 17 '23 19:07 ctalegaonkar

Also, for pre-training, how is the latent embedding Z initialized for each new subject? Is it a one-hot encoding, a linear encoding (like 0, 1, 2, ...), or random vectors?

ctalegaonkar avatar Jul 20 '23 23:07 ctalegaonkar

Hello, as you mentioned, a given batch of rays (all 6 patches) comes from a single image, and during pre-training the rays are randomly selected from images of different subjects. The latent embedding Z is a trainable parameter initialized to zero.
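A minimal sketch of what this might look like in PyTorch (the variable names, dimensions, and use of `nn.Embedding` here are illustrative assumptions, not taken from the GNeuVox codebase):

```python
import torch
import torch.nn as nn

# Hypothetical setup: one trainable latent code Z per subject,
# initialized to zero as described above. Sizes are illustrative.
num_subjects, latent_dim = 100, 128

# nn.Embedding stores one row per subject; its weight is a trainable parameter.
subject_codes = nn.Embedding(num_subjects, latent_dim)
nn.init.zeros_(subject_codes.weight)  # every Z starts at zero

# During pre-training, look up the code for the subject whose image
# the current batch of rays was sampled from.
subject_id = torch.tensor([3])
z = subject_codes(subject_id)  # shape: (1, latent_dim), carries gradients
```

Gradients from the rendering loss would then update both the shared network weights and the per-subject code `z`.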

taoranyi avatar Jul 25 '23 08:07 taoranyi

Thanks for the response! I had two more follow-up questions about pre-training:

  1. For every different object in the dataset, is a different Z vector (initialized to 0) used?
  2. For every different object, is a separate individual voxel grid used during pre-training? The codebase doesn't mention per-subject individual voxels, hence the question.

ctalegaonkar avatar Jul 25 '23 22:07 ctalegaonkar