
Conv2D with XLA `jit_compile=True` fails to run

Open Co1lin opened this issue 3 years ago • 1 comment

Info

❯ python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, '\n', tf.version.VERSION)"
v1.12.1-81340-gebbacda77a9
 2.11.0-dev20220914

❯ pip list | grep tf
tf-estimator-nightly         2.11.0.dev2022082808
tf-nightly                   2.11.0.dev20220914

Code

The following code works fine without jit_compile=True. However, enabling XLA compilation by adding jit_compile=True makes it throw an error. Reproducible in a Colab notebook here.

import tensorflow as tf
from keras import layers

class MyModule(tf.Module):
    def __init__(self):
        super().__init__()
        self.conv = layers.Conv2D(2, 1, padding='valid', dtype=tf.float64, autocast=False)

    @tf.function(jit_compile=True) # without jit_compile=True works fine
    def __call__(self, i0):
        o0 = tf.floor(i0)
        o1 = self.conv(o0)
        o2 = tf.add(o1, o0)
        return o2

def simple():
    inp = {
        "i0": tf.constant(
            3.14, shape=[1,1,3,2], dtype=tf.float64
        ),
    }
    m = MyModule()

    out = m(**inp) # Error!

    print(out)
    print(out.shape)

if __name__ == "__main__":
    simple()

Log

2022-09-18 01:33:53.096156: I tensorflow/compiler/xla/service/service.cc:173] XLA service 0x55a9ace73180 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2022-09-18 01:33:53.096176: I tensorflow/compiler/xla/service/service.cc:181]   StreamExecutor device (0): NVIDIA GeForce RTX 3080 Ti, Compute Capability 8.6
2022-09-18 01:33:53.098645: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:268] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
2022-09-18 01:33:53.537659: I tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:428] Loaded cuDNN version 8100
2022-09-18 01:33:54.161249: I tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:5341] Disabling cuDNN frontend for the following convolution:
  input: {count: 1 feature_map_count: 2 spatial: 1 3  value_min: 0.000000 value_max: 0.000000 layout: BatchDepthYX}
  filter: {output_feature_map_count: 2 input_feature_map_count: 2 layout: OutputInputYX shape: 1 1 }
  {zero_padding: 0 0  pad_alignment: default filter_strides: 1 1  dilation_rates: 1 1 }
  ... because it uses an identity activation.
2022-09-18 01:33:54.749772: I tensorflow/compiler/jit/xla_compilation_cache.cc:476] Compiled cluster using XLA!  This line is logged at most once for the lifetime of the process.
Traceback (most recent call last):
  File "/home/colin/code/test_proj/scripts/tflite2.py", line 41, in simple
    out = m(**inp)
  File "/home/colin/miniconda3/envs/py39/lib/python3.9/site-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/home/colin/miniconda3/envs/py39/lib/python3.9/site-packages/tensorflow/python/eager/execute.py", line 52, in quick_execute
    tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.UnknownError: CUDNN_STATUS_NOT_SUPPORTED
in tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc(5151): 'status' [Op:__inference___call___46]

Co1lin avatar Sep 18 '22 06:09 Co1lin

@Co1lin, thanks for opening this issue. This issue is not related to Keras. Could you please post it on the Tensorflow/Tensorflow repo? Thank you!
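
For reference, a minimal sketch of the same computation using raw tf.nn.conv2d instead of a Keras layer (the kernel values below are just placeholders); if this fails the same way under jit_compile=True, that would confirm the problem lives in TensorFlow/XLA rather than Keras:

import tensorflow as tf

@tf.function(jit_compile=True)
def conv_fn(x, w):
    # float64 convolution compiled with XLA, no Keras layer involved
    return tf.nn.conv2d(x, w, strides=1, padding='VALID')

x = tf.constant(3.14, shape=[1, 1, 3, 2], dtype=tf.float64)  # same input as the original repro
w = tf.random.normal([1, 1, 2, 2], dtype=tf.float64)         # placeholder 1x1 kernel, 2 in / 2 out channels
print(conv_fn(x, w))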

tilakrayal avatar Sep 19 '22 11:09 tilakrayal

This issue has been automatically marked as stale because it has no recent activity. It will be closed if no further activity occurs. Thank you.

google-ml-butler[bot] avatar Sep 26 '22 11:09 google-ml-butler[bot]
