hanban

Results 36 comments of hanban

Almost the same throughput as a single GPU with MirroredStrategy. ![Screen Shot 2020-11-17 at 1 48 08 PM](https://user-images.githubusercontent.com/61307585/99351543-c4d25680-28db-11eb-863c-90cb1de685e9.png)

I encountered another memory-leak issue without a distribute strategy.

```python
import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import tensorflow as tf
import tensorflow_probability as tfp

tfb = tfp.bijectors
tfd = tfp.distributions
```
...

I tried reinstalling tf-nightly and it still didn't work. Are there any harmful ops in the code above? It occupied 16 GB of memory by the time it finished building the graph, and it keeps growing at 0.05 GB/sec...
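One way to confirm steady host-memory growth like this is to sample the process's resident set size between training steps using only the standard library. This is a minimal sketch, not part of the original report; it assumes Linux, where `ru_maxrss` is reported in kilobytes (on macOS it is in bytes).

```python
import resource
import time


def rss_mb() -> float:
    """Return the process's peak resident set size in MiB.

    Assumes Linux, where ru_maxrss is in kilobytes.
    """
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024


def watch_memory(step_fn, steps=5, interval=0.0):
    """Run step_fn repeatedly, recording peak RSS after each call.

    A roughly linear climb in the returned samples (e.g. ~0.05 GB/sec)
    points at a host-side leak rather than GPU memory pressure.
    """
    samples = []
    for _ in range(steps):
        step_fn()
        samples.append(rss_mb())
        if interval:
            time.sleep(interval)
    return samples
```

Wrapping the training step in `step_fn` and plotting the samples makes the growth rate easy to eyeball, independent of what `nvidia-smi` shows for device memory.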

@nikitamaia Hi, my issue does not leak GPU memory; it leaks host (physical) memory. And the tfp code I provided above is somewhat different from this case, but they lead...

I noticed that as the model (not only the one in this issue) gets more complex, increasing the number of GPUs also increases the compile time (I guess?), which means more...
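The suspected compile-time growth can be measured rather than guessed: for a `tf.function`, the first call is dominated by tracing/graph building, so timing the first invocation for each GPU count makes the trend concrete. Below is a hedged, framework-agnostic sketch using only the standard library; `time_first_call` is a hypothetical helper, not TensorFlow API.

```python
import time
from functools import wraps


def time_first_call(fn):
    """Decorator recording how long the first invocation of fn takes.

    For a tf.function, that first call includes tracing and graph
    compilation, so comparing this number across GPU counts isolates
    the compile-time cost.
    """
    timings = {}

    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        elapsed = time.perf_counter() - start
        # setdefault keeps only the first measurement.
        timings.setdefault("first_call_s", elapsed)
        return result

    wrapper.timings = timings
    return wrapper
```

Decorating the training step and reading `step.timings["first_call_s"]` once per strategy configuration would show whether compile time really scales with the number of replicas.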

Hi, any progress now? ❤️