Hi, is there anyone who can help solve this problem?
Found ckpts []
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
==> image down scale: 1.0
==> image down scale: 1.0
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Validation sanity check: 0it [00:00, ?it/s]/home/hua/anaconda3/envs/mvsnerf/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py:69: UserWarning: The dataloader, val dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument (try 48 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
warnings.warn(*args, **kwargs)
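(For context, the warning above is only about loading speed, not the crash: it suggests passing a larger `num_workers` when the `DataLoader` is constructed. A minimal sketch with a stand-in dataset, not the repo's actual loader code:)

```python
# Sketch of the warning's suggestion: pass num_workers to DataLoader so
# batches are loaded in background worker processes. The TensorDataset
# here is a placeholder for the real val dataset.
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.zeros(8, 3))

# num_workers > 0 spawns worker processes; the warning suggests trying a
# value up to the machine's CPU count (48 in the log above).
loader = DataLoader(dataset, batch_size=4, num_workers=2)

for (batch,) in loader:
    print(batch.shape)
```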
Validation sanity check: 0%| | 0/1 [00:00<?, ?it/s]Traceback (most recent call last):
File "train_mvs_nerf_pl.py", line 322, in <module>
trainer.fit(system)
File "/home/hua/anaconda3/envs/mvsnerf/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 458, in fit
self._run(model)
File "/home/hua/anaconda3/envs/mvsnerf/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 756, in _run
self.dispatch()
File "/home/hua/anaconda3/envs/mvsnerf/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 797, in dispatch
self.accelerator.start_training(self)
File "/home/hua/anaconda3/envs/mvsnerf/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 96, in start_training
self.training_type_plugin.start_training(trainer)
File "/home/hua/anaconda3/envs/mvsnerf/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 144, in start_training
self._results = trainer.run_stage()
File "/home/hua/anaconda3/envs/mvsnerf/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 807, in run_stage
return self.run_train()
File "/home/hua/anaconda3/envs/mvsnerf/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 842, in run_train
self.run_sanity_check(self.lightning_module)
File "/home/hua/anaconda3/envs/mvsnerf/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1107, in run_sanity_check
self.run_evaluation()
File "/home/hua/anaconda3/envs/mvsnerf/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 949, in run_evaluation
for batch_idx, batch in enumerate(dataloader):
File "/home/hua/anaconda3/envs/mvsnerf/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
data = self._next_data()
File "/home/hua/anaconda3/envs/mvsnerf/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
return self._process_data(data)
File "/home/hua/anaconda3/envs/mvsnerf/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
data.reraise()
File "/home/hua/anaconda3/envs/mvsnerf/lib/python3.8/site-packages/torch/_utils.py", line 434, in reraise
raise exception
FileNotFoundError: Caught FileNotFoundError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/hua/anaconda3/envs/mvsnerf/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/home/hua/anaconda3/envs/mvsnerf/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/hua/anaconda3/envs/mvsnerf/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/hua/mvsnerf/data/dtu.py", line 160, in __getitem__
img = Image.open(img_filename)
File "/home/hua/anaconda3/envs/mvsnerf/lib/python3.8/site-packages/PIL/Image.py", line 3236, in open
fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: './dataset/dtu/Rectified/scan1_train/rect_011_3_r5000.png'
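(The traceback boils down to a missing image file, so it is worth checking whether the downloaded DTU data actually matches the layout the loader expects. A quick sanity check, using the path from the error message; the helper name and the root path are just placeholders to adapt to your own setup:)

```python
# Sanity-check the dataset layout the traceback points at: does the scan
# directory exist, and does it contain the image the loader asked for?
from pathlib import Path

def check_scan(scan_dir, filename):
    """Return (dir_exists, file_exists, sample_of_present_pngs)."""
    scan = Path(scan_dir)
    return (
        scan.is_dir(),
        (scan / filename).is_file(),
        sorted(p.name for p in scan.glob("*.png"))[:5] if scan.is_dir() else [],
    )

# Path taken from the error message; adjust the root to your own setup.
print(check_scan("./dataset/dtu/Rectified/scan1_train", "rect_011_3_r5000.png"))
```

If the directory exists but the listed filenames differ (e.g. a different naming pattern or a missing `scan1_train` subfolder), the download or the `--datadir` argument is the likely culprit rather than the training code.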
Has anyone encountered this situation? My system is Ubuntu 18.04 and my IDE is VS Code.
Hi, I have the same problem. Did you solve it?
MQYm · Nov 07 '23 09:11