GPU out of memory
Hi: I only have a 16G V100, not a 32G one. How can I test on faces, and how should I change xxx.json for my 16G GPU? Thank you very much.
If you want to use a 16G V100, there are several tricks to reduce memory usage. For example, you can reduce the StyleGAN model size to 256x256, use fewer StyleGAN features and reduce the DatasetGAN input feature size as indicated in https://github.com/nv-tlabs/editGAN_release/blob/release_final/experiments/datasetgan_car.json#L6, or reduce the number of ensemble models as indicated in https://github.com/nv-tlabs/editGAN_release/blob/release_final/experiments/datasetgan_car.json#L15.
However, we didn't test these configurations.
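A minimal sketch of editing the config along these lines. The key names `dim` and `model_num` come from a later message in this thread; the values below are stand-ins, so check the linked lines of datasetgan_car.json in your checkout for the real fields and defaults:

```python
import json

# Stand-in for the contents of experiments/datasetgan_car.json
# (field names "dim" and "model_num" are assumptions from this thread).
cfg = {"dim": [512, 256, 128], "model_num": 10}

cfg["dim"] = [d // 2 for d in cfg["dim"]]  # smaller DatasetGAN input features
cfg["model_num"] = 4                       # fewer ensemble classifiers

print(json.dumps(cfg))  # write this back to the json file in practice
```

In practice you would `json.load` the real file, edit those fields, and `json.dump` it back before rerunning training or the app.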
So would this imply that the existing checkpoints are unusable? Also, can multiple GPUs be used to avoid this problem?
Even granting that this is mostly research-driven, one ought not expect people to have a $12K GPU lying around ;)
I changed dim=64, batch_size=1, and model_num=1 and ran `python run_app.py`, but it still runs out of memory, and the "tried to allocate 5.88GB" message has not changed. Should I retrain the model? Thanks.
You can use PyTorch half-precision inference, which saves a lot of GPU memory.
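A hedged sketch of what half-precision inference looks like (this is not the repo's actual code; `Linear` stands in for the StyleGAN/DatasetGAN modules). Converting weights and inputs to float16 halves their memory footprint:

```python
import torch

# A small module standing in for the real generator/classifier.
model = torch.nn.Linear(512, 512)
fp32_bytes = sum(p.numel() * p.element_size() for p in model.parameters())

model = model.half().eval()  # convert weights to float16
fp16_bytes = sum(p.numel() * p.element_size() for p in model.parameters())
print(fp32_bytes, fp16_bytes)  # float16 parameters use exactly half the bytes

# On a CUDA device, inputs must match the model's dtype:
if torch.cuda.is_available():
    model = model.cuda()
    x = torch.randn(1, 512, device="cuda").half()
    with torch.no_grad():  # no autograd buffers during inference
        out = model(x)
```

In the scripts this would mean calling `.half()` on the loaded networks and on any tensors fed to them, typically right after the checkpoints are loaded.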
Could you tell me where in the scripts you made that change?