
[Question]: Adapt GPU memory limit

Open FrancescoRusticali opened this issue 1 year ago • 4 comments

Description

Hi all, is there any way to set a specific limit on GPU memory usage (different from the TensorFlow default)?

I'm looking for something similar to this: https://www.tensorflow.org/api_docs/python/tf/config/set_logical_device_configuration

Alternatives

No response

FrancescoRusticali avatar Sep 21 '23 12:09 FrancescoRusticali

To add more detail: setting AllowGrowth to true or changing the value of PerProcessGpuMemoryFraction on GPUOptions doesn't seem to help, nor does calling the method tf.config.set_memory_growth(). All of these work fine in Python. How can GPU memory usage be managed in TensorFlow.NET?
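For reference, this is roughly how I am wiring those options up. A minimal sketch, assuming the protobuf-generated ConfigProto/GPUOptions classes that ship with TensorFlow.NET and a Session overload that accepts a config (the exact overload name may differ):

```csharp
using Tensorflow;
using static Tensorflow.Binding;

var config = new ConfigProto
{
    GpuOptions = new GPUOptions
    {
        // Cap usage at ~30% of the device's dedicated memory;
        // alternatively set AllowGrowth = true to allocate on demand.
        PerProcessGpuMemoryFraction = 0.3,
    }
};

// In graph mode the config is passed when the session is created:
using var sess = tf.Session(config: config);
```

In Python the equivalent config takes effect as expected, but here the allocation appears unchanged.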

FrancescoRusticali avatar Sep 22 '23 12:09 FrancescoRusticali

Check if this will help: https://github.com/SciSharp/TensorFlow.NET/blob/3811e4e14018ae6b606a3bc9a39776fbe1870ecb/tools/TensorFlowNET.Benchmarks/Leak/GpuLeakByCNN.cs#L20

Oceania2018 avatar Sep 24 '23 01:09 Oceania2018

Thank you for the suggestion. I'm afraid it doesn't help either. Whatever I do, the memory limit stays the same: I always see the same memory occupation (around 75% of the total dedicated GPU memory), and there seems to be no way to increase it if needed, or to reduce it when more processes need to run in parallel. I also ran the exact same code as the GpuLeakByCNN example above, but I get the same behaviour.

FrancescoRusticali avatar Sep 25 '23 14:09 FrancescoRusticali

Hi, I tried to investigate further. Even calling c_api.TFE_ContextOptionsSetConfig directly does not change the situation. I also tried passing the serialized config directly, following for example this. There's probably something I'm not understanding. How should these config options be applied?
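For completeness, this is the kind of call I attempted. A sketch only: the TFE_NewContextOptions / TF_NewStatus / TFE_ContextOptionsSetConfig names come from the TensorFlow C API, and the exact marshalling in TensorFlow.NET's c_api bindings may differ. One thing I am unsure about: TensorFlow only reads this config when the eager context is created, so if the library has already created its default context at startup, a later call like this might silently have no effect.

```csharp
using Google.Protobuf; // for ToByteArray()
using Tensorflow;

var config = new ConfigProto
{
    GpuOptions = new GPUOptions { PerProcessGpuMemoryFraction = 0.3 }
};

// Serialize the ConfigProto to the wire format the C API expects.
byte[] serialized = config.ToByteArray();

var opts = c_api.TFE_NewContextOptions();
var status = c_api.TF_NewStatus();
c_api.TFE_ContextOptionsSetConfig(opts, serialized, (ulong)serialized.Length, status);

// The options would then need to be used to create the context itself,
// instead of whatever default context the library builds on startup:
// var ctx = c_api.TFE_NewContext(opts, status);
```

Is there a supported way to make TensorFlow.NET create its eager context from user-supplied options like these?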

FrancescoRusticali avatar Sep 28 '23 12:09 FrancescoRusticali