Efficient-Transformer
test
Hi, your test code is written for multiple GPUs, but I'm training on a single GPU. How can I change the test code? Could you help me?
When I train the DANet model I get the results shown in the figure, and they do not reach the results reported in Table 10 of your paper.
Batch size is a very important parameter that affects accuracy. You may want to set it according to the advice in the paper.
Set the testing shell command like this: CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2
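For a single visible GPU, the same launch pattern with one process per node should work; a minimal sketch (using torch.distributed.launch and reusing the test.py arguments from the command below, which are assumptions about your setup) would be:

CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node=1 test.py --dataset vaihingen --val_batchsize 16 --models swinT --head mlphead --crop_size 512 512 --save_dir work_dir --base_dir ../../ --information num1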
I used:
CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.run --nproc_per_node=1 --master_port 29506 test.py --dataset vaihingen --val_batchsize 16 --models swinT --head mlphead --crop_size 512 512 --save_dir work_dir --base_dir ../../ --information num1
but got an error:
Traceback (most recent call last):
  File "/content/gdrive/MyDrive/Efficient-Transformer-main/test.py", line 184, in <module>
    potsdam_val = potsdam(base_dir=data_dir, train=False,
  File "/content/gdrive/MyDrive/Efficient-Transformer-main/dataset.py", line 33, in __init__
    with open(txt, "r") as f:
FileNotFoundError: [Errno 2] No such file or directory: '../../data/vaihingen/annotations/test.txt'
[2023-11-01 04:09:45,633] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 0 (pid: 21953) of binary: /usr/bin/python3
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/run.py", line 810, in <module>
    main()
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/run.py", line 806, in main
    run(args)
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/run.py", line 797, in run
    elastic_launch(
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launcher/api.py", line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launcher/api.py", line 264, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
test.py FAILED
Can you help me?
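For context, the FileNotFoundError above suggests that test.py resolves the annotation list relative to --base_dir (here ../../), so '../../data/vaihingen/annotations/test.txt' must exist relative to the directory where test.py is launched. A quick sanity check (a sketch, assuming the layout implied by the error message) is:

ls ../../data/vaihingen/annotations/test.txt

If the file lives elsewhere, pointing --base_dir at the directory that contains data/vaihingen/ should make the path resolve.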