HiVT
GPU memory usage keeps increasing during training.
Hi Dr. Zhou, first of all, thank you very much for your excellent work!
I am running training on an NVIDIA A4000 GPU, and the GPU memory usage increases every epoch.
I changed a few hparams: train_batch_size and val_batch_size are set to 64, parallel: true, num_workers: 6, pin_memory: false.
At epoch 0 the GPU memory usage is about 11000 MiB, but by epoch 30 it has grown to 15577 MiB.
Could anyone help me figure out this issue?
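To track this, I log the GPU memory at the end of every epoch with a small PyTorch Lightning callback. This is just my own diagnostic sketch (the GpuMemoryLogger name is mine, not part of HiVT), assuming the Lightning Trainer that the training script uses:

```python
import torch
import pytorch_lightning as pl


class GpuMemoryLogger(pl.Callback):
    """Log allocated/reserved CUDA memory at the end of every training epoch."""

    def on_train_epoch_end(self, trainer, pl_module):
        if not torch.cuda.is_available():
            return
        allocated = torch.cuda.memory_allocated() / 2**20       # MiB currently held by tensors
        reserved = torch.cuda.max_memory_reserved() / 2**20     # peak MiB reserved by the caching allocator
        print(f"epoch {trainer.current_epoch}: "
              f"allocated={allocated:.0f} MiB, max_reserved={reserved:.0f} MiB")


# Usage (hypothetical): add the callback to the Trainer alongside the existing ones.
# trainer = pl.Trainer(gpus=1, max_epochs=64, callbacks=[GpuMemoryLogger()])
```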
The hparams.yaml is shown below:

historical_steps: 20
future_steps: 30
num_modes: 6
rotate: true
node_dim: 2
edge_dim: 2
embed_dim: 64
num_heads: 8
dropout: 0.1
num_temporal_layers: 4
num_global_layers: 3
local_radius: 50
parallel: true
lr: 0.0005
weight_decay: 0.0001
T_max: 64
root: /home/com0179/AI/Prediction/HiVT/datasets
train_batch_size: 64
val_batch_size: 64
shuffle: true
num_workers: 6
pin_memory: false
persistent_workers: true
gpus: 1
max_epochs: 64
monitor: val_minFDE
save_top_k: 5
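For completeness, this is a small sketch of how I double-check the overridden values; the hparams.yaml path is only an example (it depends on where the Lightning logger saved it), and it assumes PyYAML is installed:

```python
import yaml

# Hypothetical path to the hparams file saved by the Lightning logger.
with open("lightning_logs/version_0/hparams.yaml") as f:
    hparams = yaml.safe_load(f)

# Print the settings I changed from the defaults.
for key in ("train_batch_size", "val_batch_size", "parallel", "num_workers", "pin_memory"):
    print(key, "=", hparams.get(key))
```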