core dump when giving Conv3DTranspose layer zero-shape input in GPU mode.
Please go to TF Forum for help and support:
https://discuss.tensorflow.org/tag/keras
If you open a GitHub issue, here is our policy:
It must be a bug, a feature request, or a significant problem with the documentation (for small docs fixes please send a PR instead). The form below must be filled out.
Here's why we have that policy:
Keras developers respond to issues. We want to focus on work that benefits the whole community, e.g., fixing bugs and adding features. Support only helps individuals. GitHub also notifies thousands of people when issues are filed. We want them to see you communicating an interesting problem, rather than being redirected to Stack Overflow.
System information.
- Have I written custom code (as opposed to using a stock example script provided in Keras): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.8.0, 2.9.0
- Python version: 3.7
- Bazel version (if compiling from source): N/A
- GPU model and memory:
- Exact command to reproduce: please see the colab link below. https://colab.research.google.com/drive/1rI1xf_X9KSteq53bJHhTGDSFDqy2MRT7?usp=sharing
You can collect some of this information using our environment capture script:
https://github.com/tensorflow/tensorflow/tree/master/tools/tf_env_collect.sh
You can obtain the TensorFlow version with: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"
Describe the problem.
Describe the problem clearly here. Be sure to convey here why it's a bug in Keras or why the requested feature is needed.
When the Conv3DTranspose layer is given an input whose shape contains a zero in any dimension, a core dump occurs.
Describe the current behavior. The Conv3DTranspose layer leads to a core dump when any dimension of the input's shape is zero. Interestingly, I have tested the same specification with the Conv3D, Conv2D, and Conv2DTranspose layers, and they all work properly (i.e., they output meaningful results when the input contains a zero-sized dimension).
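For reference, here is a minimal sketch of the same check with Conv2DTranspose (the parameters are chosen to mirror the 3D repro below); per the behavior described above, it should return an empty output instead of crashing:

import numpy as np
import keras

# Same setup as the repro below, but with a 2D transposed convolution.
x = keras.layers.Input((0, 3, 3))                     # zero-sized height dimension
y = keras.layers.Conv2DTranspose(3, 3, strides=(1, 1), padding='same')(x)
model = keras.models.Model(x, y)

test_input = np.random.rand(10, 0, 3, 3)              # empty tensor (zero elements)
print(model.predict(test_input).shape)                # expected: (10, 0, 3, 3), no crash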
Describe the expected behavior. Conv3DTranspose should not lead to a core dump.
- Do you want to contribute a PR? (yes/no): no
- If yes, please read this page for instructions
- Briefly describe your candidate solution (if contributing):
Standalone code to reproduce the issue. Please see the code snippet below:
import keras
import numpy as np

# Build a model with a single Conv3DTranspose layer whose input shape
# contains a zero-sized dimension: (batch, 0, 2, 3, 3).
input_shape = [None, 0, 2, 3, 3]
x = keras.layers.Input(input_shape[1:])
layer = keras.layers.Conv3DTranspose(3, 3, strides=(1, 1, 1), padding='same', dtype="double")
y = layer(x)
model = keras.models.Model(x, y)
model.summary()

# Feed an empty tensor (batch size 10, but zero elements overall).
# In GPU mode this predict call crashes the process with a core dump.
input_shape[0] = 10
test_input = np.random.rand(*input_shape)
res = model.predict(test_input)
print(res)
To reproduce the bug, you need to run the code in GPU mode (or you can directly run my Colab notebook: https://colab.research.google.com/drive/1rI1xf_X9KSteq53bJHhTGDSFDqy2MRT7?usp=sharing).
Source code / logs.
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Try to provide a reproducible test case that is the bare minimum necessary to generate the problem.
@maybeLee, could you please confirm whether the issue persists only when the code is executed in GPU mode, or on CPU as well? Thank you!
@tilakrayal The issue only occurs when the code is executed in GPU mode. I tried it on CPU, and it works properly.
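In case this blocks anyone in the meantime, a possible workaround (assuming the TensorFlow backend) is to pin the prediction to the CPU, where the empty input is handled correctly:

import tensorflow as tf

# Run the prediction on CPU only; the crash has only been observed on GPU.
with tf.device('/CPU:0'):
    res = model.predict(test_input)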
@gowthamkpr, I was able to reproduce the issue on TensorFlow v2.8, v2.9, and nightly. Kindly find the gist of it here.
Hi @maybeLee, what is the use case for having a dimension of zero? If any dimension is zero, the tensor is always empty, which is not useful.
Hi @hertschuh, I triggered this issue by accident while running my program to construct a model. Indeed, in my program the empty tensor is not useful, but this issue directly causes the process to core dump.
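As a stop-gap on my side, something like the following guard (a hypothetical helper, not part of Keras) avoids handing empty tensors to predict at all, assuming the model's output shape is fully defined except for the batch dimension:

import numpy as np

def safe_predict(model, batch):
    # If any dimension is zero the tensor is empty; skip the GPU kernel entirely
    # and return an empty array shaped like the model's output for this batch.
    if batch.size == 0:
        out_shape = (batch.shape[0],) + tuple(model.output_shape[1:])
        return np.zeros(out_shape, dtype=batch.dtype)
    return model.predict(batch)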
@VictoriaGriffith,
Also curious about your use case. What is the use case for having a dimension of zero? If any dimension is zero, the tensor is always empty, which is not useful.
We are investigating the issue, I will report back here once we have an update.