hyperreel
Training my own video dataset
I am trying to train on my own inward-facing video dataset captured with 30 cameras, but I am unable to understand the coordinate system used by the HyperReel code. My current poses are all in the OpenCV convention (x right, y down, z forward). How can I translate or use my transform matrices and intrinsics with your code?

I tried building on the immersive.py code base with the immersive_sphere.yml config, but my model does not seem to learn the volume: the novel views come out blurred and oddly colorful, which is really strange. Do you have any suggestions? Please tell me how to train the model with your code using my OpenCV-based transforms and intrinsics. Also, which of your models is most suitable for my case (inward-facing scene, video data, OpenCV coordinates)? Please respond.
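For reference, here is what I am currently doing to convert my poses. This is only a sketch under my own assumptions (the function name `opencv_to_opengl_c2w` is mine, and I am assuming the codebase expects the common NeRF/OpenGL convention of x right, y up, z backward): the conversion flips the y and z camera axes of each 4x4 camera-to-world matrix. Please correct me if HyperReel expects something different.

```python
import numpy as np

def opencv_to_opengl_c2w(c2w_cv: np.ndarray) -> np.ndarray:
    """Convert a 4x4 camera-to-world pose from the OpenCV convention
    (x right, y down, z forward) to the OpenGL/NeRF convention
    (x right, y up, z backward).

    Right-multiplying by diag(1, -1, -1, 1) negates the camera's
    y and z basis vectors (columns 1 and 2 of the rotation block)
    while leaving the camera position (translation column) unchanged.
    """
    flip = np.diag([1.0, -1.0, -1.0, 1.0])
    return c2w_cv @ flip

# Example: an OpenCV identity pose maps to a pose whose y/z axes are flipped.
pose_cv = np.eye(4)
pose_gl = opencv_to_opengl_c2w(pose_cv)
```

Is a per-camera flip like this enough, or does immersive.py also expect a global scene normalization (recentering/rescaling) on top of it?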