HAT
Training time on GPUs
Hi guys, thank you for sharing this great model. I have only one issue: training time. I have 8 GPUs, each with more than 20 GB of memory, and I tried to train the model on 15,000 examples (size 200 x 200) with batch size 1, but it took 1 hour! Is that normal, or do I have an issue somewhere? PyTorch > 2.0, model = HATx2.
@aleksmirosh The input size is too large. 48x48 or 64x64 patches are commonly used for training.
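For reference, here is a minimal sketch of what cropping to a smaller training patch looks like, assuming paired LR/HR tensors; the function name `random_paired_crop` and the tensor names are illustrative, not taken from the HAT repo.

```python
import random
import torch

def random_paired_crop(lr_img: torch.Tensor, hr_img: torch.Tensor,
                       lr_patch: int = 64, scale: int = 2):
    """Crop a random lr_patch x lr_patch window from the LR image and the
    aligned (lr_patch * scale) window from the HR image."""
    _, h, w = lr_img.shape
    top = random.randint(0, h - lr_patch)
    left = random.randint(0, w - lr_patch)

    lr_crop = lr_img[:, top:top + lr_patch, left:left + lr_patch]
    hr_crop = hr_img[:,
                     top * scale:(top + lr_patch) * scale,
                     left * scale:(left + lr_patch) * scale]
    return lr_crop, hr_crop

# Example: a 200x200 LR frame is cropped to 64x64 before being fed to an x2 model,
# which keeps the attention computation (and per-step time) much smaller.
lr = torch.rand(3, 200, 200)
hr = torch.rand(3, 400, 400)
lr_patch, hr_patch = random_paired_crop(lr, hr, lr_patch=64, scale=2)
print(lr_patch.shape, hr_patch.shape)  # torch.Size([3, 64, 64]) torch.Size([3, 128, 128])
```

With 64x64 inputs you can also raise the per-GPU batch size well above 1, which is how the reference training configs reach reasonable iteration times.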
thank you for responding!