Tomasz Michniewski
Hello, are you going to fix this according to the proposed patch? It would be a really nice enhancement. Best wishes, Tomek
Hello @karllessard, Referring to the first recommendation: are you sure we can call deviceType on DeviceSpec.newBuilder()? My IntelliJ complains about this.
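If I understand the recommendation correctly, the intended usage would be something like the sketch below. This is only my guess at the API; the deviceType/withDevice calls may not exist in the version on my classpath, which could explain the IntelliJ error:

```java
import org.tensorflow.DeviceSpec;
import org.tensorflow.Graph;
import org.tensorflow.op.Ops;

try (Graph graph = new Graph()) {
  // Build a device spec for the first GPU (assuming this builder API is
  // available; it may require a release newer than 0.2.0).
  DeviceSpec gpu0 = DeviceSpec.newBuilder()
      .deviceType(DeviceSpec.DeviceType.GPU)
      .deviceIndex(0)
      .build();

  // Ops created through this `tf` instance should then be placed on GPU:0.
  Ops tf = Ops.create(graph).withDevice(gpu0);
}
```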
Basically I am looking for the simplest solution, just to test the performance.
BTW - I am using version 0.2.0.
I already used the hardcoded string "/GPU:0".
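That is, applied through the low-level OperationBuilder, roughly along these lines (a sketch only; `a` and `b` stand for the `Output<TFloat32>` handles of the two input constants):

```java
// Sketch: pinning a single graph op to the GPU with the hardcoded device
// string; `a` and `b` are placeholder Output<TFloat32> inputs.
Output<TFloat32> sum = graph.opBuilder("Add", "add_on_gpu")
    .addInput(a)
    .addInput(b)
    .setDevice("/GPU:0")   // hardcoded device string
    .build()
    .output(0);
```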
Hello @karllessard, Well, I ran tests on an Azure Databricks NC6s_v3 machine with a Tesla V100 GPU (compute capability 7.0). With the device set to "/CPU:0" the execution time is 0.23s....
BTW - on this cluster, in Python we got the following times: CPU 1.00s, GPU 0.25s. [PythonHelloTensorFlow_py.txt](https://github.com/tensorflow/java/files/5482443/PythonHelloTensorFlow_py.txt) So in Python the GPU is 4 times faster in this exercise. But in Java...
Maybe I should also set the device on the session or the graph?
But then it would make no sense to set it at the operation level. Do you have any working example of how to perform, let's say, vector addition on the GPU? Something like the sketch below is what I have in mind.
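To pin down what I mean, here is the kind of end-to-end example I am after, written as a minimal sketch against the 0.2.0 API as I understand it. The device placement via `setDevice("/GPU:0")` is an assumption on my part, not a confirmed recipe:

```java
import org.tensorflow.Graph;
import org.tensorflow.Output;
import org.tensorflow.Session;
import org.tensorflow.Tensor;
import org.tensorflow.op.Ops;
import org.tensorflow.types.TFloat32;

public class GpuVectorAdd {
  public static void main(String[] args) {
    try (Graph graph = new Graph()) {
      Ops tf = Ops.create(graph);

      // Two constant input vectors.
      Output<TFloat32> a = tf.constant(new float[] {1f, 2f, 3f}).asOutput();
      Output<TFloat32> b = tf.constant(new float[] {4f, 5f, 6f}).asOutput();

      // Build the Add node through the low-level builder so it can be pinned
      // to the GPU with the hardcoded device string.
      Output<TFloat32> sum = graph.opBuilder("Add", "add_on_gpu")
          .addInput(a)
          .addInput(b)
          .setDevice("/GPU:0")
          .build()
          .output(0);

      try (Session session = new Session(graph);
           Tensor<TFloat32> result =
               session.runner().fetch(sum).run().get(0).expect(TFloat32.DTYPE)) {
        System.out.println(result.data().getFloat(0)); // expected: 5.0
      }
    }
  }
}
```

This of course assumes a GPU-enabled native artifact (e.g. tensorflow-core-platform-gpu) on the classpath; otherwise the op falls back to CPU or fails to place.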
Firstly, because eager mode is only for development, not production. Secondly - well - for tests I could of course use it, but do you have some working example with...