LoFTR
Data "D2-Net preprocessed images" is unavailable
404 is reported.
i have same issue
how did you solve the issue? @trand2k @WallofWonder
still waiting for a solution
They actually updated the README: in the FAQ they recommend leaving the D2-Net dataset out.
Do you know how to train this with only MegaDepth? How should we build the symlinks in that case?
I think you can just leave the D2-Net part out and symlink only the MegaDepth dataset, but I haven't tried training it yet. By the way, which cloud GPU service will you use to train it?
Thank you for your answer. I just linked the MegaDepth data and still got an error. For the GPUs, I followed the authors' guidance (they said 4 GPUs can run it), using 4 A5000s with 24 GB each.
What error do you get? I started training a few minutes ago and haven't hit an error yet.
Hi, thanks so much for replying. Did you train on ScanNet or MegaDepth? How did you set up the training? I used MegaDepth, but since the undistorted data from D2-Net is unavailable, I downloaded the original MegaDepth SfM dataset (600+ GB) to replace it.
My error is:

```
  File "D:\soft\Anaconda3\envs\loftr2\lib\site-packages\torch\utils\data\dataloader.py", line 1225, in _process_data
    data.reraise()
  File "D:\soft\Anaconda3\envs\loftr2\lib\site-packages\torch\_utils.py", line 429, in reraise
    raise self.exc_type(msg)
AttributeError: Caught AttributeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "D:\soft\Anaconda3\envs\loftr2\lib\site-packages\torch\utils\data\_utils\worker.py", line 202, in _worker_loop
    data = fetcher.fetch(index)
  File "D:\soft\Anaconda3\envs\loftr2\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "D:\soft\Anaconda3\envs\loftr2\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "D:\soft\Anaconda3\envs\loftr2\lib\site-packages\torch\utils\data\dataset.py", line 219, in __getitem__
    return self.datasets[dataset_idx][sample_idx]
  File "D:\LoFTR-master\src\datasets\megadepth.py", line 75, in __getitem__
    image0, mask0, scale0 = read_megadepth_gray(
  File "D:\LoFTR-master\src\utils\dataset.py", line 109, in read_megadepth_gray
    w, h = image.shape[1], image.shape[0]
AttributeError: 'NoneType' object has no attribute 'shape'
```
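That traceback means the image loader (`cv2.imread` inside `read_megadepth_gray`) returned `None`, which happens when an image path under the symlinked dataset does not resolve to a real file. A minimal stdlib-only sketch for checking this before training (the helper name `find_missing_images` is hypothetical, not part of LoFTR):

```python
from pathlib import Path

def find_missing_images(root, rel_paths):
    """Return the relative paths whose files are absent under root.

    cv2.imread silently returns None for such paths, which later
    surfaces as "'NoneType' object has no attribute 'shape'".
    """
    root = Path(root)
    missing = []
    for rel in rel_paths:
        # Path.exists() follows symlinks, so a dangling symlink
        # (a common result of a wrong `ln -s` target) also counts as missing.
        if not (root / rel).exists():
            missing.append(rel)
    return missing
```

You could feed it the image paths listed in the `megadepth_indices` scene-info files; if it reports anything, the symlink layout (not the GPUs) is the problem.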
My whole process is like this:
- Environment setup (exactly as in the guide)
- Download MegaDepth v1 (200 GB) and MegaDepth SfM (600 GB)
- Build symlinks (I renamed the MegaDepth SfM dataset to "Undistorted_SfM"):

  ```
  ln -sv /path/to/megadepth/phoenix /path/to/megadepth_d2net/Undistorted_SfM /path/to/LoFTR/data/megadepth/train
  ln -sv /path/to/megadepth/phoenix /path/to/megadepth_d2net/Undistorted_SfM /path/to/LoFTR/data/megadepth/test
  ln -s /path/to/megadepth_indices/* /path/to/LoFTR/data/megadepth/index
  ```
- Run `bash scripts/reproduce_train/outdoor_ds.sh`
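The symlink steps above can also be scripted, which makes it easier to spot a wrong source path. A hedged sketch (all paths are placeholders, and `link_megadepth` is a hypothetical helper mirroring the `ln -s` commands, not part of the LoFTR repo):

```python
import os
from pathlib import Path

def link_megadepth(megadepth_root, indices_root, loftr_data):
    """Symlink phoenix/ and Undistorted_SfM/ into data/megadepth/{train,test},
    and the prebuilt index files into data/megadepth/index."""
    megadepth_root = Path(megadepth_root)
    loftr_data = Path(loftr_data)
    for split in ("train", "test"):
        split_dir = loftr_data / split
        split_dir.mkdir(parents=True, exist_ok=True)
        for name in ("phoenix", "Undistorted_SfM"):
            dst = split_dir / name
            if not dst.exists():
                # os.symlink(src, dst): dst becomes a link pointing at src
                os.symlink(megadepth_root / name, dst)
    index_dir = loftr_data / "index"
    index_dir.mkdir(parents=True, exist_ok=True)
    for f in Path(indices_root).iterdir():
        dst = index_dir / f.name
        if not dst.exists():
            os.symlink(f, dst)
```

Running it and then listing `data/megadepth/train` should show `phoenix` and `Undistorted_SfM` as links that resolve to real directories; a link shown in red by `ls` (dangling) would explain the `NoneType` error.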
Have you done anything else besides this?
By the way, may I ask which GPUs you used? I use 4 A5000s with 24 GB of memory each. Could the error be caused by the hardware?
PS: Could you please add my WhatsApp (+65 93512175) for further contact? We could discuss more details.
I really appreciate your kind reply. (No one in my lab researches the same topic 😭)
https://github.com/zju3dv/LoFTR/issues/276#issuecomment-1600921374
@WallofWonder @trand2k