multinerf
I used my own dataset to train RawNeRF; the colors are wrong and the renders are blurry. What's the problem?
I used my own raw dataset for training, and the rendered images are not only blurry but also have very strange colors. Training on the data provided by the authors works fine.
The process is as follows:
(1)bash scripts/local_colmap_and_resize.sh ${DATA_DIR}
(2)nohup python -u -m train \
--gin_configs=configs/llff_raw.gin \
--gin_bindings="Config.data_dir = '${DATA_DIR}'" \
--gin_bindings="Config.checkpoint_dir = '${CHECKPOINT_DIR}'" \
--gin_bindings="Config.max_steps = 990000" \
--gin_bindings="Config.batch_size = 8192" \
--gin_bindings="Config.lr_init = 0.001" \
--logtostderr &> $CHECKPOINT_DIR/log.txt &
(3)python -m render \
--gin_configs=configs/llff_raw_test.gin \
--gin_bindings="Config.data_dir = '${DATA_DIR}'" \
--gin_bindings="Config.checkpoint_dir = '${CHECKPOINT_DIR}'" \
--gin_bindings="Config.render_path = True" \
--gin_bindings="Config.render_path_frames = 100" \
--gin_bindings="Config.render_spline_keyframes = '${DATA_DIR}/images/'" \
--gin_bindings="Config.render_dir = '${CHECKPOINT_DIR}/render/'" \
--gin_bindings="Config.render_video_fps = 2" \
--gin_bindings="Config.render_chunk_size = 1024" \
--logtostderr
I think the scripts are fine. Maybe the data is wrong, but I'm not sure which step of the data pipeline is at fault.
Is the "ColorMatrix2" parameter in the JSON file arranged in the wrong order? Or are the poses computed by COLMAP (bash scripts/local_colmap_and_resize.sh ${DATA_DIR}) wrong? What is the problem?
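To rule out the COLMAP side, one quick sanity check is whether COLMAP actually registered all of your images; unregistered or mis-registered views are a common cause of blurry NeRF renders. A minimal sketch, assuming you have exported the sparse model to text format at `${DATA_DIR}/sparse/0/images.txt` (convert with `colmap model_converter` if you only have `.bin` files):

```python
def count_registered_images(images_txt_path):
    """Count images COLMAP registered in a text-format sparse model.

    images.txt stores two non-comment lines per registered image:
    the pose line (quaternion + translation + camera id + name),
    then the 2D point observations line.
    """
    with open(images_txt_path) as f:
        lines = [l for l in f if not l.startswith('#') and l.strip()]
    return len(lines) // 2
```

If the count is smaller than the number of input images, the missing views never got poses, and the scripts will still run but the reconstruction will be degraded.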
The image I rendered is shown below: blurry, with strange colors.
The original image (converted to RGB for viewing) looks like this:
The json data corresponding to raw data is as follows:
[
  {
    "SourceFile": "1003-0001-4757501.dng",
    "BitsPerSample": 16,
    "CFAPattern2": "1 0 2 1",
    "CFARepeatPatternDim": "2 2",
    "ColorMatrix2": "1.4521484 -0.78808594 -0.14453125 -0.42089844 1.46875 -0.017578125 -0.045898438 0.25585938 0.56933594",
    "AsShotNeutral": "0.5527344 1.0 0.49414062",
    "BlackLevel": 64,
    "WhiteLevel": 1023,
    "ExposureTime": "1/100.000",
    "ISO": 3200,
    "ShutterSpeed": "1/100.000",
    "FocalLength": "5.0 mm",
    "Aperture": 1.8
  }
]
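On the ColorMatrix2 question: per the DNG specification, ColorMatrix2 is a row-major 3x3 XYZ-to-camera matrix, and raw preprocessing in the RawNeRF style normalizes the mosaic with BlackLevel/WhiteLevel and white-balances with AsShotNeutral. A minimal sketch (not the repo's actual code) of how these fields would be consumed, to check your dump against:

```python
import numpy as np

def parse_metadata(meta):
    """Parse exiftool-style JSON fields into numeric arrays.

    ColorMatrix2 is read row-major per the DNG spec; if your values were
    dumped column-major, the rendered colors will come out wrong.
    """
    xyz2cam = np.array([float(v) for v in meta["ColorMatrix2"].split()]).reshape(3, 3)
    cam2xyz = np.linalg.inv(xyz2cam)  # invert to map camera RGB back to XYZ
    shot_neutral = np.array([float(v) for v in meta["AsShotNeutral"].split()])
    return cam2xyz, shot_neutral

def normalize_raw(raw, black_level, white_level):
    """Map integer raw samples to linear [0, 1] using the black/white levels."""
    raw = raw.astype(np.float32)
    return np.clip((raw - black_level) / (white_level - black_level), 0.0, 1.0)
```

If BlackLevel/WhiteLevel are wrong for your sensor (here 64 and 1023, i.e. 10-bit data stored in 16-bit samples), the linearization is off and everything downstream looks strange.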
What is the problem with the data construction process?
@ouyangjiacs I read the code. One possible cause is that the render script generates an orbiting camera path by default. If your training data was not captured circling an object, the generated viewpoints deviate significantly from the training views, so the results are poor.
I have modified the default render poses to stay consistent with my training poses. Could the problem be that my raw data format is inconsistent with the official data format?
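To quantify how far the render path strays from the training views, one can compare camera viewing directions directly. A small hypothetical diagnostic (names are my own, not from the repo), where each direction is a camera's forward axis taken from its pose matrix:

```python
import numpy as np

def max_view_angle_deg(train_dirs, render_dirs):
    """For each render view direction, find the angle in degrees to the
    nearest training view direction, then return the worst case.

    Large values flag extrapolated viewpoints, which NeRF typically
    renders blurry regardless of data formatting.
    """
    t = train_dirs / np.linalg.norm(train_dirs, axis=-1, keepdims=True)
    r = render_dirs / np.linalg.norm(render_dirs, axis=-1, keepdims=True)
    cos = np.clip(r @ t.T, -1.0, 1.0)            # (n_render, n_train)
    nearest = np.degrees(np.arccos(cos)).min(axis=1)
    return nearest.max()
```

If this stays small (a few degrees) and the renders are still blurry with wrong colors, the problem is more likely in the raw metadata or linearization than in the camera path.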