hanban
The behavior is almost the same as on a single GPU when using MirroredStrategy.
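For context, a minimal sketch of the comparison being described, assuming a hypothetical `build_model()` standing in for the actual model: the same code once on a plain single GPU and once under MirroredStrategy pinned to one device.

```
import tensorflow as tf

# Hypothetical model factory standing in for the actual model in this issue.
def build_model():
    return tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1),
    ])

# Baseline: plain single-GPU run.
model = build_model()
model.compile(optimizer="adam", loss="mse")

# Comparison: MirroredStrategy restricted to the same single GPU.
strategy = tf.distribute.MirroredStrategy(devices=["/gpu:0"])
with strategy.scope():
    mirrored_model = build_model()
    mirrored_model.compile(optimizer="adam", loss="mse")
```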
I encountered another memory leak issue without a distribute strategy.

```
import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import tensorflow as tf
import tensorflow_probability as tfp

tfb = tfp.bijectors
tfd = tfp.distributions
...
```
I tried reinstalling tf-nightly and it still didn't work. Are there any harmful ops in the code above? It occupies 16 GB of memory by the time the graph finishes building, and keeps growing at about 0.05 GB/sec...
@nikitamaia Hi, my issue does not leak GPU memory; it leaks physical (host) memory. And the TFP code I provided above is somewhat different from this case, but they lead...
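A minimal sketch of one way to confirm the host (RSS) memory growth, assuming psutil is installed; `run_one_step` is only a placeholder for the actual TFP code above, not part of the original repro.

```
import psutil

# Placeholder for one training/sampling step of the TFP code above (assumption).
def run_one_step():
    pass

proc = psutil.Process()
start_rss = proc.memory_info().rss

for step in range(1000):
    run_one_step()
    if step % 100 == 0:
        rss = proc.memory_info().rss
        print(f"step {step}: RSS {rss / 1e9:.2f} GB "
              f"(+{(rss - start_rss) / 1e9:.2f} GB since start)")
```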
I noticed that as the model (not only the one in this issue) gets more complex, increasing the number of GPUs increases the compile time (I guess?), which means more...
Hi, any progress now? ❤️