Different GPU detection between python command line and Jupyter Notebook Cell
Describe the bug TensorFlow GPU support that works from the command-line Python interpreter is not available in Jupyter notebooks started from the same conda environment.
To Reproduce Follow the steps to install TensorFlow GPU support on Windows WSL2 (Ubuntu 22.04.2 LTS distro): https://www.tensorflow.org/install/pip#windows-wsl2
The verification command python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))" returns [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
Install jupyter in the miniconda environment defined in the tensorflow install page.
Launch jupyter notebook.exe from the same command-line environment. The same Python code in a notebook cell, import tensorflow as tf; print(tf.config.list_physical_devices('GPU')), returns an empty list [].
Expected behavior The notebook cell should display the same result as the command line: [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
Desktop (please complete the following information):
- OS: WSL2 under Windows 10 latest update using Ubuntu 22.04.2 LTS distro
- Browser: Firefox
- Version: Firefox 113.0.2 (64-bit)
Additional context The jupyter command help under Ubuntu says to use jupyter notebook.exe (not jupyter notebook). The behavior is the same after installing nb_conda.
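One way to narrow this down (a diagnostic sketch, not part of the original report) is to run the same interpreter check in both the working terminal session and a notebook cell. If the printed paths differ, the notebook kernel is not running the conda environment's Python, which would explain the missing GPU:

```python
import os
import sys

# Interpreter running the current process. In the working terminal
# session this should point inside the conda env (a path like
# .../miniconda3/envs/tf/bin/python3). If a notebook cell prints a
# different path, the kernel is using another Python installation
# that lacks the CUDA-enabled TensorFlow build.
print(sys.executable)
print(sys.prefix)

# LD_LIBRARY_PATH affects CUDA library discovery under WSL2;
# compare its value between the two environments as well.
print(os.environ.get("LD_LIBRARY_PATH"))
```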
Hi @fiammante thank you for submitting this issue! Can you reproduce this issue in IPython?
It works well with IPython, and also works with the Docker TensorFlow image that includes Jupyter. The problem only occurs when launching Jupyter from WSL Ubuntu.
IPython capture below:
(tf) fiammante@DESKTOP-3AQ037Q:~$ ipython
Python 3.9.16 (main, Mar 8 2023, 14:00:05)
Type 'copyright', 'credits' or 'license' for more information
IPython 8.12.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]: import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))
2023-05-31 00:40:01.469356: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-05-31 00:40:03.416088: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2023-05-31 00:40:06.188443: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:982] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node Your kernel may have been built without NUMA support.
2023-05-31 00:40:06.721602: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:982] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node Your kernel may have been built without NUMA support.
2023-05-31 00:40:06.721733: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:982] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node Your kernel may have been built without NUMA support.
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
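Since IPython in the activated environment sees the GPU but the notebook does not, the notebook kernel may be resolving to a different Python. A common workaround (a sketch, not confirmed as the fix for this issue; the env name tf comes from the TensorFlow install guide) is to register the conda environment as an explicit Jupyter kernel and select it in the notebook UI:

```shell
# Inside the conda environment that has the GPU-enabled TensorFlow:
conda activate tf
pip install ipykernel

# Register this environment's Python as a named Jupyter kernel.
python -m ipykernel install --user --name tf --display-name "Python (tf)"

# List registered kernels to confirm the new entry, then restart
# Jupyter and choose "Python (tf)" from the kernel picker.
jupyter kernelspec list
```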
Same problem here on Linux.
@JasonWeill I would like to work on this issue
@vaibhavnohria1 Thanks for your interest! I've assigned you to this issue; please submit a pull request when your change is ready for review.