DeepSpeech

Anyone know how to set per_process_gpu_memory_fraction?

Open austinksmith opened this issue 9 months ago • 1 comment

I want to set the following configuration option for TensorFlow. I forked this repo and I can see that GPU options are being set in transcribe.py, which is fine, but how do I compile from source if I modify these settings? I want to be able to run 2 processes concurrently; I have enough VRAM if I cap each one at 8 GB.



import tensorflow as tf

# Assume that you have 12GB of GPU memory and want to allocate ~4GB:
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)

sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
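Scaling that to my case would look something like the sketch below (just an illustration, assuming a 16 GB card for the sake of the numbers; the exact fraction depends on total VRAM). A fraction of 0.5 should cap each process at roughly 8 GB, so two processes could share the GPU:

# Rough sketch, assuming a 16 GB card: fraction 0.5 ≈ 8 GB per process,
# so two such processes can run on the same GPU concurrently.
# DeepSpeech uses TensorFlow 1.x; in TF 2.x these names live under tf.compat.v1.
import tensorflow as tf

gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.5)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))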


I was able to find this Discourse forum thread: https://discourse.mozilla.org/t/how-to-restrict-transcribe-py-from-consuming-whole-gpu-memory/75555/5

This gives the code I need, but how do I compile from source once I change the code?

austinksmith · May 16 '24