
How to train on my own datasets without ground truth?

Open hzy995035849 opened this issue 2 years ago • 13 comments

Could you please tell me how to train on my own datasets without ground truth?

"python train.py --config my_config --dataset_dir my_dataset"

It tells me to provide "val.txt" and "my_dataset/depth". Isn't the depth optional for validation?

hzy995035849 avatar Jun 29 '22 08:06 hzy995035849

  1. "val.txt" is required; it lists the validation sequences.
  2. Use "--val_mode photo" if you don't have GT depths for validation.
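As an illustration (not the repository's own tooling), split files like these could be generated with a short script. This sketch assumes a dataset root holding one folder per video sequence and split files listing one sequence name per line; the folder names are hypothetical:

```python
import os
import tempfile

# Hypothetical dataset root containing one folder per video sequence.
root = tempfile.mkdtemp()
for seq in ["seq_00", "seq_01", "seq_02"]:
    os.makedirs(os.path.join(root, seq))

sequences = sorted(
    d for d in os.listdir(root)
    if os.path.isdir(os.path.join(root, d))
)

# Hold out the last sequence for validation; the rest go to training.
with open(os.path.join(root, "train.txt"), "w") as f:
    f.write("\n".join(sequences[:-1]) + "\n")
with open(os.path.join(root, "val.txt"), "w") as f:
    f.write("\n".join(sequences[-1:]) + "\n")

print(open(os.path.join(root, "val.txt")).read().strip())  # -> seq_02
```

With "--val_mode photo", the held-out sequence only needs images, not depth maps.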

JiawangBian avatar Jun 29 '22 09:06 JiawangBian

  1. "val.txt" is required, which indicates the validation sequences.
  2. use "--val_mode photo" if you don't have gt depths for validation

"python train.py --config my_config --dataset_dir my_dataset --val_mode photo"

and then it displays:

GPU available: True, used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
3964 samples found for training
3964 samples found for validatioin
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [6]
Validation sanity check: 0%| | 0/5 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "train.py", line 53, in <module>
    trainer.fit(system)
  File "/home/pubenv/anaconda3/envs/hzy1/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 553, in fit
    self._run(model)
  File "/home/pubenv/anaconda3/envs/hzy1/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 918, in _run
    self._dispatch()
  File "/home/pubenv/anaconda3/envs/hzy1/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 986, in _dispatch
    self.accelerator.start_training(self)
  File "/home/pubenv/anaconda3/envs/hzy1/lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py", line 92, in start_training
    self.training_type_plugin.start_training(trainer)
  File "/home/pubenv/anaconda3/envs/hzy1/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 161, in start_training
    self._results = trainer.run_stage()
  File "/home/pubenv/anaconda3/envs/hzy1/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 996, in run_stage
    return self._run_train()
  File "/home/pubenv/anaconda3/envs/hzy1/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1031, in _run_train
    self._run_sanity_check(self.lightning_module)
  File "/home/pubenv/anaconda3/envs/hzy1/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1115, in _run_sanity_check
    self._evaluation_loop.run()
  File "/home/pubenv/anaconda3/envs/hzy1/lib/python3.7/site-packages/pytorch_lightning/loops/base.py", line 111, in run
    self.advance(*args, **kwargs)
  File "/home/pubenv/anaconda3/envs/hzy1/lib/python3.7/site-packages/pytorch_lightning/loops/dataloader/evaluation_loop.py", line 111, in advance
    dataloader_iter, self.current_dataloader_idx, dl_max_batches, self.num_dataloaders
  File "/home/pubenv/anaconda3/envs/hzy1/lib/python3.7/site-packages/pytorch_lightning/loops/base.py", line 111, in run
    self.advance(*args, **kwargs)
  File "/home/pubenv/anaconda3/envs/hzy1/lib/python3.7/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py", line 110, in advance
    output = self.evaluation_step(batch, batch_idx, dataloader_idx)
  File "/home/pubenv/anaconda3/envs/hzy1/lib/python3.7/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py", line 154, in evaluation_step
    output = self.trainer.accelerator.validation_step(step_kwargs)
  File "/home/pubenv/anaconda3/envs/hzy1/lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py", line 211, in validation_step
    return self.training_type_plugin.validation_step(*step_kwargs.values())
  File "/home/pubenv/anaconda3/envs/hzy1/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 178, in validation_step
    return self.model.validation_step(*args, **kwargs)
  File "/home/hzy/Documents/sc_depth_pl-master/SC_DepthV2.py", line 218, in validation_step
    tgt_depth, ref_depths, poses, poses_inv = self(tgt_img, ref_imgs)
  File "/home/pubenv/anaconda3/envs/hzy1/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
TypeError: forward() missing 1 required positional argument: 'intrinsics'
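For context, the TypeError means the model object was called without its `intrinsics` argument, so `forward()` received one positional argument too few. A minimal stand-in class (plain Python, not the real SC_DepthV2 code) reproduces the failure mode and shows the shape of the fix:

```python
class TinyDepthSystem:
    """Stand-in for a model whose forward() requires camera intrinsics."""

    def forward(self, tgt_img, ref_imgs, intrinsics):
        # A real model would warp ref_imgs into tgt_img using intrinsics;
        # here we just echo the inputs back.
        return {"depth": "dummy", "intrinsics": intrinsics}

    # nn.Module routes model(...) calls to forward() in the same way.
    __call__ = forward


model = TinyDepthSystem()

try:
    model("tgt", ["ref"])  # mirrors the failing call: intrinsics is missing
except TypeError as e:
    print("TypeError:", e)

out = model("tgt", ["ref"], intrinsics="K")  # passing intrinsics fixes the call
print(out["intrinsics"])  # -> K
```

In other words, the validation path in photo mode needs the intrinsics from the batch threaded through to the model call, the same way the training step does.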

hzy995035849 avatar Jun 30 '22 00:06 hzy995035849

Could you please tell me how to train on my own datasets without ground truth?

"python train.py --config my_config --dataset_dir my_dataset"

It tells me to provide "val.txt" and "my_dataset/depth". Isn't the depth optional for validation?

i got the same problem, have solved it?

ZhiyiHe1997 avatar Aug 26 '22 06:08 ZhiyiHe1997

I find that the photometric loss is not good enough for validation. If you do not have ground truth, you can skip validation and simply save the last model.

JiawangBian avatar Aug 26 '22 06:08 JiawangBian

Thanks for your quick reply! Actually, I compute the depth maps with COLMAP, but since their accuracy is not very good, I do not want to use them for validation.

ZhiyiHe1997 avatar Aug 26 '22 06:08 ZhiyiHe1997

Thanks for your quick reply! Actually, I compute the depth maps with COLMAP, but since their accuracy is not very good, I do not want to use them for validation.

Yes, COLMAP depth is not accurate enough. I will address these issues soon.

JiawangBian avatar Aug 26 '22 06:08 JiawangBian

I find that the photometric loss is not good enough for validation. If you do not have ground truth, you can skip validation and simply save the last model.

There is another question I want to ask you: what do max_depth=200 and min_depth in loss_functions.py mean? Are they the true distance?

ZhiyiHe1997 avatar Aug 26 '22 06:08 ZhiyiHe1997

I find that the photometric loss is not good enough for validation. If you do not have ground truth, you can skip validation and simply save the last model.

There is another question I want to ask you: what do max_depth=200 and min_depth in loss_functions.py mean? Are they the true distance?

That is for evaluation on the DDAD dataset. It is the true distance, e.g., an 80 m maximum depth for KITTI.
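To make the role of these caps concrete, here is a sketch of how min/max depth bounds typically enter depth evaluation: predictions are clamped into the valid range and ground-truth pixels outside it are masked out. The names and thresholds are illustrative, not the repository's exact code:

```python
MIN_DEPTH, MAX_DEPTH = 0.1, 80.0  # metres; e.g. an 80 m cap for KITTI, 200 m for DDAD

def clamp_and_mask(pred, gt):
    """Clamp predictions into [MIN_DEPTH, MAX_DEPTH] and keep only
    (pred, gt) pairs whose ground-truth depth lies inside the range."""
    return [
        (min(max(p, MIN_DEPTH), MAX_DEPTH), g)
        for p, g in zip(pred, gt)
        if MIN_DEPTH < g < MAX_DEPTH
    ]

pairs = clamp_and_mask([0.01, 50.0, 500.0], [0.5, 55.0, 90.0])
print(pairs)  # -> [(0.1, 0.5), (50.0, 55.0)]  (the 90 m GT pixel is dropped)
```

Since the caps only affect which pixels are scored, raising max_depth changes the reported metrics but not the trained network itself.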

JiawangBian avatar Aug 26 '22 06:08 JiawangBian

Well, I am processing images from a UAV, so the distances are really far. I changed max_depth to 500, but the inference result is blurred. Are there any changes I should make, or training tricks, to solve this problem?

ZhiyiHe1997 avatar Aug 26 '22 06:08 ZhiyiHe1997

Well, I am processing images from a UAV, so the distances are really far. I changed max_depth to 500, but the inference result is blurred. Are there any changes I should make, or training tricks, to solve this problem?

The max_depth is only used for evaluation; it is not used in training. You need to make sure that adjacent frames have sufficient camera motion (not too large, not too small) for training.
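One common way to enlarge the baseline between adjacent training frames, sketched here purely as an illustration, is to subsample the video with a fixed stride before building the training split. The stride value is something you would tune per dataset so the motion is neither too small nor too large:

```python
def subsample(frames, stride):
    """Keep every `stride`-th frame to enlarge the baseline between
    neighbouring frames used as (target, reference) training pairs."""
    return frames[::stride]

frames = [f"frame_{i:04d}.png" for i in range(10)]
print(subsample(frames, 3))
# -> ['frame_0000.png', 'frame_0003.png', 'frame_0006.png', 'frame_0009.png']
```

For a slow-moving or high-altitude UAV, a larger stride increases parallax between neighbours; for fast motion, a smaller stride keeps the photometric warp valid.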

JiawangBian avatar Aug 26 '22 07:08 JiawangBian

Well, I am processing images from a UAV, so the distances are really far. I changed max_depth to 500, but the inference result is blurred. Are there any changes I should make, or training tricks, to solve this problem?

The max_depth is only used for evaluation; it is not used in training. You need to make sure that adjacent frames have sufficient camera motion (not too large, not too small) for training.

Thanks for your patient reply! I will adjust the intervals between frames. Looking forward to more great work from you!

ZhiyiHe1997 avatar Aug 26 '22 07:08 ZhiyiHe1997

Well, I am processing images from a UAV, so the distances are really far. I changed max_depth to 500, but the inference result is blurred. Are there any changes I should make, or training tricks, to solve this problem?

The max_depth is only used for evaluation; it is not used in training. You need to make sure that adjacent frames have sufficient camera motion (not too large, not too small) for training.

Thanks for your patient reply! I will adjust the intervals between frames. Looking forward to more great work from you!

Please see our update on training with your own data (the "Your Own Dataset" section in the README).

JiawangBian avatar Aug 28 '22 10:08 JiawangBian
