Luke Miner

Results 74 comments of Luke Miner

I have this problem using the anaconda cudatoolkit. I ended up using nvidia-docker instead for my cuda/cudnn installation and now it works.

"O2" is stable for me where "O1" and native amp give me NaNs. It would be really nice if there were some way to duplicate 02 behavior using native torch.cuda.amp....

Anyone have a wheel they'd care to spare?

`tensorflow_io_gcs_filesystem` works for me given the instructions above in both python 3.8 and 3.9. I can't get bazel to build `tensorflow-io` though on the m1, which I suppose shouldn't be...

@rmccorm4 just updated. I had a look at that issue, but it appears to be about CUDA shared memory. Would it apply to system memory as well?

I haven't been able to catch the exception; it looks like it's erroring out during the call. However, when I try CUDA shared memory instead, it works fine.

The client and server are running on the same machine and in the same container.

Tensorflow is 1.12 and Keras is 2.2.4. I'm using the dataset API so maybe that's the problem.

@FB-wh No I haven't. Waiting for the next version of tensorflow.