squeezeDet
The memory of the runtime model?
Hi, when I retrain the squeezeDet net on the KITTI dataset, I set batch_size to 1 or 10, and I found that the GPU usage is the same in both cases. I'm confused by this.
When you say usage, do you mean memory usage or GPU compute utilisation? In the first case, the reason you don't see a difference could be that with such small batch sizes the memory required for activations is minimal compared to the space required for the model parameters. If you use bigger batch sizes like 64 or even 256, you will see a difference in GPU memory usage.
In the case of GPU compute utilisation, the main reason is that memory-transfer overhead and generally inefficient use of the cores dominate performance. If you were to use a bigger batch size like 32 or 64, you would probably start to see performance degradation.
Thank you for your reply! Whether the batch size is 1 or 10 or larger, GPU memory usage is almost 10 GB, and I'm not sure whether that figure is correct. I only changed the src/cfg/kitti_squeezeDet_config.py file.
TensorFlow allocates all available GPU memory by default, regardless of batch size, which is why you see ~10 GB in both cases. There is a TensorFlow setting to prevent this if you don't want that behaviour.
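A minimal sketch of that setting, assuming the TF 1.x API that squeezeDet is built against (the session would be created wherever the training script constructs its `tf.Session`):

```python
import tensorflow as tf

# By default TF maps nearly all GPU memory at startup.
# allow_growth makes it allocate memory on demand instead,
# so nvidia-smi reflects what the model actually uses.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True

# Alternatively, hard-cap the process to a fraction of GPU memory:
# config.gpu_options.per_process_gpu_memory_fraction = 0.4

sess = tf.Session(config=config)
```

With `allow_growth` enabled you should see the reported memory usage scale with batch size instead of jumping straight to ~10 GB.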
Hello, when I train, the GPU compute is barely used at all and only a small amount of GPU memory is occupied. Do you know what the problem is? Do I need to modify the code somewhere? Thanks a lot!!