Keras: ModuleNotFoundError: No module named 'tensorflow.contrib' when I use Tensorflow GPU processing

Created on 25 Nov 2019 · 8 comments · Source: keras-team/keras

System information

  • Have I written custom code: No
  • OS Platform and Distribution: Windows 10
  • TensorFlow backend: yes
  • TensorFlow version: 2.0.0
  • Keras version: 2.3.1
  • Python version: 3.7.4
  • CUDA/cuDNN version: 10.1/7.5.0.56
  • GPU model and memory: Nvidia GeForce GTX 1070 8 GB

I am getting ModuleNotFoundError: No module named 'tensorflow.contrib' when the statement from tensorflow.contrib.cudnn_rnn.python.ops import cudnn_rnn_ops is executed in keras\layers\cudnn_recurrent.py, line 425. The issue occurs only when I use GPU processing; without it there are no issues at all.

Code to reproduce the issue
The only change made to the code in order to benefit from GPU processing is importing keras.layers.CuDNNLSTM instead of keras.layers.LSTM. The CPU version of the same code works as expected.
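For illustration, a minimal sketch of the kind of change described (a hypothetical model, not the original code):

from keras.models import Sequential
from keras.layers import CuDNNLSTM  # previously: from keras.layers import LSTM

model = Sequential()
model.add(CuDNNLSTM(32, input_shape=(None, 8)))  # building this layer triggers the tensorflow.contrib import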

Stacktrace

Using TensorFlow backend.
2019-11-25 02:44:47.169716: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
2019-11-25 02:44:54.145607: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
2019-11-25 02:44:54.165862: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: 
name: GeForce GTX 1070 major: 6 minor: 1 memoryClockRate(GHz): 1.645
pciBusID: 0000:01:00.0
2019-11-25 02:44:54.166083: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
2019-11-25 02:44:54.166651: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2019-11-25 02:44:54.167105: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2019-11-25 02:44:54.169843: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: 
name: GeForce GTX 1070 major: 6 minor: 1 memoryClockRate(GHz): 1.645
pciBusID: 0000:01:00.0
2019-11-25 02:44:54.170061: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
2019-11-25 02:44:54.170602: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2019-11-25 02:44:54.804769: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-11-25 02:44:54.804977: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165]      0 
2019-11-25 02:44:54.805105: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0:   N 
2019-11-25 02:44:54.805957: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6372 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)
Traceback (most recent call last):
  File "C:\Users\talha\AppData\Local\Programs\Python\Python37\Lib\contextlib.py", line 130, in __exit__
    self.gen.throw(type, value, traceback)
  File "D:\.virtualenvs\signal-proc-keras-gpu-8KfKslfY\lib\site-packages\tensorflow_core\python\framework\func_graph.py", line 404, in inner_cm
    yield g
  File "D:\.virtualenvs\signal-proc-keras-gpu-8KfKslfY\lib\site-packages\keras\backend\tensorflow_backend.py", line 75, in symbolic_fn_wrapper
    return func(*args, **kwargs)
  File "D:\.virtualenvs\signal-proc-keras-gpu-8KfKslfY\lib\site-packages\keras\engine\base_layer.py", line 463, in __call__
    self.build(unpack_singleton(input_shapes))
  File "D:\.virtualenvs\signal-proc-keras-gpu-8KfKslfY\lib\site-packages\keras\layers\cudnn_recurrent.py", line 425, in build
    from tensorflow.contrib.cudnn_rnn.python.ops import cudnn_rnn_ops
ModuleNotFoundError: No module named 'tensorflow.contrib'

Most helpful comment

Had the same issue. I could solve it (for now) by using
tensorflow.compat.v1.keras.layers.CuDNNLSTM instead of keras.layers.CuDNNLSTM.

All 8 comments

Same issue for me. Here is code that reproduces it:

import keras as ks
inp = ks.layers.Input((10, 1))
x = ks.layers.CuDNNLSTM(1)(inp)

which throws the above error.

Had the same issue. I could solve it (for now) by using
tensorflow.compat.v1.keras.layers.CuDNNLSTM instead of keras.layers.CuDNNLSTM.
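For illustration, a sketch of that swap applied to the reproducing code above, assuming the whole model is built with tf.keras so the layers stay compatible:

import tensorflow as tf

inp = tf.keras.layers.Input((10, 1))
x = tf.compat.v1.keras.layers.CuDNNLSTM(1)(inp)  # compat.v1 layer, no dependency on tensorflow.contrib
model = tf.keras.Model(inp, x)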

Hi,

keras.layers.CuDNNLSTM did not work at all for me with Keras 2.3.1, and I did not find any way to make it work. The proposed workaround of switching to tensorflow.compat.v1.keras.layers.CuDNNLSTM did not work for me either.

I found the following way to solve this problem and to have my data trained with GPU-acceleration (Windows 10, CUDA 10.1):

  1. Use at least tensorflow 2.1.0.
  2. Change your imports from "from keras import something" to "from tensorflow.keras import something". This seems to be the preferred way, as stated in the 2.3.0 release notes [0].
  3. It appears that CuDNNLSTM has been merged into LSTM in TensorFlow 2.1.0. According to the documentation [1], the GPU-accelerated version is used automatically. Quoting from [1]:
Based on available runtime hardware and constraints, this layer will choose different implementations (cuDNN-based or pure-TensorFlow) to maximize the performance. If a GPU is available and all the arguments to the layer meet the requirement of the CuDNN kernel (see below for details), the layer will use a fast cuDNN implementation.

The requirements to use the cuDNN implementation are:

    activation == tanh
    recurrent_activation == sigmoid
    recurrent_dropout == 0
    unroll is False
    use_bias is True
    Inputs are not masked or strictly right padded.

By doing so, I can successfully train the LSTM on my GPU, as the CuDNNLSTM is automatically executed through LSTM().
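For illustration, a minimal sketch of such a model (TensorFlow >= 2.1; layer sizes and input shapes are hypothetical):

from tensorflow.keras import layers, models

# The default arguments already satisfy the cuDNN criteria quoted above:
# activation='tanh', recurrent_activation='sigmoid', recurrent_dropout=0,
# unroll=False, use_bias=True.
inp = layers.Input((None, 8))
x = layers.LSTM(32)(inp)  # runs the cuDNN kernel when a GPU is available
out = layers.Dense(1)(x)
model = models.Model(inp, out)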

Good luck to you!

[0] https://github.com/keras-team/keras/releases/tag/2.3.0

[1] https://www.tensorflow.org/api_docs/python/tf/keras/layers/LSTM

@jollyjonson's solution worked for me on keras 2.3.1 and tensorflow 2.1.0. Anyway, @jliebers's solution also works in my code and seems to be the way it should be done in keras 2.3. Thanks to both of you!

So, is there a solution for how to use CuDNNLSTM?

So, is there a solution for how to use CuDNNLSTM?

As stated by @jliebers above: use at least TensorFlow 2.1.0, change your imports from keras to tensorflow.keras, and use the plain LSTM layer, which runs the cuDNN implementation automatically when its arguments meet the requirements quoted in that comment.

Just use LSTM. If you have tried this and see no GPU usage on your machine, check that you have tensorflow-gpu installed rather than tensorflow.
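For illustration, one quick way to check whether TensorFlow actually sees a GPU (a sketch for TF 2.x, not part of the original comment):

import tensorflow as tf

# An empty list means no GPU is visible to TensorFlow.
print(tf.config.experimental.list_physical_devices('GPU'))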

Hey,

so in the meantime I have migrated all my code to tf.keras (2.1) instead of Keras (2.3.1), and I can confirm that the GPU-accelerated LSTMs are executed correctly when sticking to the parameters I quoted from the documentation above.

But please be aware that there are massive issues when executing them on Windows (see my other issues in the tensorflow repository). They work best on Linux by far.

Cheers,

Jonathan

I had the same problem using Keras 2.3.1.
When I switched, as suggested here, to:
from tensorflow.keras import ...
instead of:
from keras import ...
it worked with both the GRU and LSTM layers.
I used tensorflow-gpu 2.2.0.

Update
When using the GRU layer without dropout, everything works. When using it with dropout like so:

from tensorflow.keras import layers, models

# float_data is the training array; its last dimension is the number of features.
input_tensor = layers.Input((None, float_data.shape[-1]))
kmodel = layers.GRU(32, dropout=0.2, recurrent_dropout=0.2)(input_tensor)
output_tensor = layers.Dense(1)(kmodel)
model = models.Model(input_tensor, output_tensor)

I receive the following warning:
WARNING:tensorflow:Layer gru_4 will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
This results in very slow performance!
Does anyone know why?
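For illustration only (not an answer from the thread): the cuDNN criteria quoted earlier require recurrent_dropout == 0, which the snippet above violates. A sketch of a variant that should keep the fast kernel; plain dropout on the inputs is not listed in those criteria:

from tensorflow.keras import layers, models

input_tensor = layers.Input((None, 14))  # 14 features is a hypothetical placeholder
kmodel = layers.GRU(32, dropout=0.2)(input_tensor)  # recurrent_dropout left at its default of 0
output_tensor = layers.Dense(1)(kmodel)
model = models.Model(input_tensor, output_tensor)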
