sc_depth_pl
How to train on my own datasets without ground truth?
Could you please tell me how to train on my own datasets without ground truth?
"python train.py --config my_config --dataset_dir my_dataset"
It tells me to provide "val.txt" and "my_dataset/depth". Isn't the depth optional for validation?
- "val.txt" is required, which indicates the validation sequences.
- use "--val_mode photo" if you don't have gt depths for validation
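A minimal sketch of setting up the required "val.txt" (the scene name `scene_0001` and the exact layout are placeholders; check the repository's README for the layout sc_depth_pl actually expects):

```python
import os

# Hypothetical minimal dataset layout; adjust scene names to your data.
os.makedirs("my_dataset/scene_0001", exist_ok=True)
with open("my_dataset/val.txt", "w") as f:
    f.write("scene_0001\n")  # one validation sequence name per line
```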
"python train.py --config my_config --dataset_dir my_dataset --val_mode photo"
and then it displays:
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
3964 samples found for training
3964 samples found for validation
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [6]
Validation sanity check: 0%| | 0/5 [00:00<?, ?it/s]Traceback (most recent call last):
File "train.py", line 53, in
I got the same problem. Have you solved it?
I find that photometric loss is not good enough for validation. If you do not have ground truth, you can skip validation and simply save the last model.
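If you do skip validation, one generic way to "simply save the last model" is to overwrite a single checkpoint file each epoch; this is a sketch with made-up names, not code from this repo:

```python
import os

def save_last(state_bytes, ckpt_dir):
    """Overwrite a single 'last' checkpoint each epoch, so no validation
    metric is needed to pick the best model."""
    os.makedirs(ckpt_dir, exist_ok=True)
    tmp = os.path.join(ckpt_dir, "last.ckpt.tmp")
    final = os.path.join(ckpt_dir, "last.ckpt")
    with open(tmp, "wb") as f:
        f.write(state_bytes)
    os.replace(tmp, final)  # atomic rename: never leaves a half-written file
    return final
```

With PyTorch Lightning (which this repo's logs suggest it uses), the equivalent is typically a checkpoint callback configured to keep only the last epoch.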
Thanks for your quick reply! Actually, I computed the depth maps with COLMAP, but since their accuracy is not so good, I do not want to use them for validation.
Yes, COLMAP depth is not accurate enough. I will address these issues soon.
There is another question I want to ask you: what are the meanings of max_depth=200 and min_depth in loss_functions.py? Are they the true distance?
That is for evaluation on the DDAD dataset. It is the true distance, e.g., an 80 m max distance for KITTI.
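For context, KITTI-style depth evaluation typically masks ground truth to the valid range and clamps predictions to the same range before computing metrics; a rough numpy sketch (an illustration of the convention, not the repo's actual loss_functions.py code):

```python
import numpy as np

def eval_mask_and_clamp(pred, gt, min_depth=0.1, max_depth=80.0):
    """KITTI-style evaluation prep: keep only ground-truth pixels inside
    (min_depth, max_depth), in meters, and clamp predictions to that range."""
    valid = (gt > min_depth) & (gt < max_depth)
    pred = np.clip(pred, min_depth, max_depth)
    return pred[valid], gt[valid]
```

Since the clamp only runs at evaluation time, changing max_depth never affects the trained weights themselves.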
Well, I am processing images from a UAV, so the distances are really very far. I changed max_depth to 500, but the inference result is blurred. Are there any changes I should make, or training tricks, to solve this problem?
The max_depth is only used for evaluation. It is not used in training. You need to make sure that the adjacent frames have sufficient camera motion (not too large, not too small) for training.
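One practical way to enforce "sufficient camera motion" is to subsample frames by how far the camera has moved between them; a greedy sketch assuming you have approximate camera positions (e.g. from COLMAP poses), with hypothetical names and thresholds:

```python
import math

def select_keyframes(positions, min_move=0.3):
    """Greedily keep a frame once the camera has moved at least min_move
    (meters) since the last kept frame, so adjacent training frames have
    a usable baseline. `positions` are (x, y, z) camera centers."""
    kept = [0]
    for i in range(1, len(positions)):
        if math.dist(positions[i], positions[kept[-1]]) >= min_move:
            kept.append(i)
    return kept
```

The right threshold depends on scene scale; for far-away UAV scenes a larger baseline between adjacent frames is usually needed than for indoor video.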
Thanks for your patient reply. I will adjust the intervals between adjacent frames. Looking forward to more great work from you!
Please see our update for training on your own data (the "Your Own Dataset" section in the README).