
The memory of the runtime model?

Open chenfsjz opened this issue 8 years ago • 4 comments

Hi, when I retrain the squeezeDet net on the KITTI dataset, I set the batch_size to 1 or 10, and I found that the GPU usage is the same in both cases. I'm confused by this.

chenfsjz avatar Jun 09 '17 06:06 chenfsjz

When you say usage, do you mean memory usage or GPU computation utilisation? In the first case, the reason you don't see a difference could be that with such small batch sizes, the memory required for activations is minimal compared to the space required for the model parameters. If you use bigger batch sizes like 64 or even 256, you will see a difference in GPU memory usage.

In the case of GPU computation utilisation, the main reason is that overhead from memory operations and generally inefficient use of the cores dominates the runtime at small batch sizes; if you were to use a bigger batch size like 32 or 64, you would probably start to see a difference in per-batch performance.
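To illustrate the first point, here is a rough back-of-envelope estimate of GPU memory. The parameter and activation counts below are made-up illustrative numbers, not SqueezeDet's actual footprint; the point is only that parameter memory is fixed while activation memory scales linearly with batch size, so at batch sizes 1 and 10 the totals can look similar.

```python
# Back-of-envelope GPU memory estimate (illustrative numbers only,
# not SqueezeDet's real footprint).
BYTES_PER_FLOAT = 4

def estimate_mb(batch_size,
                num_params=2_000_000,       # assumed parameter count
                acts_per_image=5_000_000):  # assumed activations per image
    # Parameters (plus gradients and optimizer state, roughly 3x) are
    # independent of batch size; activations scale linearly with it.
    param_bytes = 3 * num_params * BYTES_PER_FLOAT
    act_bytes = batch_size * acts_per_image * BYTES_PER_FLOAT
    return (param_bytes + act_bytes) / 1024 ** 2

for bs in (1, 10, 64):
    print(f"batch {bs}: ~{estimate_mb(bs):.0f} MB")
```

With these assumed numbers the fixed parameter cost dominates at small batch sizes, which is why going from 1 to 10 barely moves the total while 64 clearly does.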

Timen avatar Jun 13 '17 11:06 Timen

Thank you for your reply! Whether the batch size is 1 or 10, the GPU memory usage is almost 10 GB, and I am not sure whether that number is correct. I only changed the src/cfg/kitti_squeezeDet_config.py file.

chenfsjz avatar Jun 14 '17 08:06 chenfsjz

TensorFlow always allocates all of your GPU memory by default, so tools like nvidia-smi report the full pre-allocation rather than what the model actually needs. There is a TensorFlow setting to prevent this if you do not want that behaviour.
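A minimal sketch of that setting, assuming TensorFlow 1.x (the API generation squeezeDet targets); with `allow_growth` enabled, TensorFlow allocates GPU memory on demand, so the reported usage reflects what the model actually consumes:

```python
# Sketch for TensorFlow 1.x: allocate GPU memory on demand instead of
# grabbing it all at session creation.
import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True
# Alternatively, cap the fraction of GPU memory this process may use:
# config.gpu_options.per_process_gpu_memory_fraction = 0.4

sess = tf.Session(config=config)
```

In squeezeDet you would pass this `config` wherever the training script constructs its `tf.Session`.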

andreapiso avatar Jul 04 '17 13:07 andreapiso

Hi, when I train, the GPU compute is barely used at all and only a small amount of GPU memory is occupied. Do you know what the problem is? Do I need to modify the code somewhere? Thanks a lot!!

auvx avatar Oct 19 '18 07:10 auvx