GeoTransformer
Does the batch size work?
Hi, thanks for sharing your great work. I am testing the model on the 3DMatch dataset from PREDATOR, but I have run into a problem related to GPU memory:
RuntimeError: CUDA out of memory. Tried to allocate 404.00 MiB (GPU 0; 5.81 GiB total capacity; 4.00 GiB already allocated; 236.38 MiB free; 4.45 GiB reserved in total by PyTorch)
I changed the batch size in config.py from 1 to 2 (or any other number greater than 1), but now another error is raised:
Traceback (most recent call last):
File "trainval.py", line 62, in <module>
main()
File "trainval.py", line 58, in main
trainer.run()
File "/home/mcit/GeoTransformer/geotransformer/engine/epoch_based_trainer.py", line 180, in run
self.train_epoch()
File "/home/mcit/GeoTransformer/geotransformer/engine/epoch_based_trainer.py", line 95, in train_epoch
output_dict, result_dict = self.train_step(self.epoch, self.inner_iteration, data_dict)
File "trainval.py", line 41, in train_step
output_dict = self.model(data_dict)
File "/home/mcit/anaconda3/envs/geotransformer/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/mcit/GeoTransformer/experiments/geotransformer.3dmatch.stage4.gse.k3.max.oacl.stage2.sinkhorn/model.py", line 75, in forward
transform = data_dict['transform'].detach()
AttributeError: 'list' object has no attribute 'detach'
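For context, the traceback suggests that with a batch size greater than 1, the collate step gathers the per-sample transforms into a plain Python list instead of a single tensor, so calling `.detach()` fails. The following is a minimal sketch of that failure mode with hypothetical data; it is not GeoTransformer's actual collate function:

```python
import torch

# With batch_size == 1, a single 4x4 transform can be passed through
# as a tensor, on which .detach() is a valid method.
transform_b1 = torch.eye(4)

# With batch_size > 1, a custom collate function may instead collect
# the per-sample transforms into a Python list, which has no .detach().
transform_b2 = [torch.eye(4), torch.eye(4)]

print(hasattr(transform_b1, "detach"))  # True
print(hasattr(transform_b2, "detach"))  # False -> AttributeError in model.py

# If the model expected a batched tensor, the list would need to be
# stacked first (shape becomes [batch_size, 4, 4]):
stacked = torch.stack(transform_b2)
print(stacked.shape)
```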
From a quick search of the issue history, I see you mentioned that the code only works with a batch size of 1. Have you since extended the code to accept a batch size greater than 1? Or could you help me with the error I hit after changing the batch size?
Thanks in advance for your time and work.
No, we only support a batch size of 1. Please use DDP to increase the effective batch size, as described in the README.
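For readers hitting the same limit: with DDP, each GPU runs its own process with a batch of 1, so the effective batch size equals the number of GPUs. A hedged sketch of a launch command, assuming `trainval.py` (from the traceback above) is the entry point; the exact command and any extra flags depend on the repo's README:

```
# one process per GPU; effective batch size = 4 here
torchrun --nproc_per_node=4 trainval.py
```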