Keras: GPU memory fraction does not work in Keras 2.0.9 but it works in 2.0.8

Created on 10 Nov 2017 · 5 Comments · Source: keras-team/keras

The following code, which limits Keras to a fraction of the GPU memory, works on Keras 2.0.8 but not on 2.0.9:

import tensorflow as tf
import keras.backend.tensorflow_backend as KTF

def get_session(gpu_fraction=0.3):
    """Assume that you have 6GB of GPU memory and want to allocate ~2GB"""
    gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=gpu_fraction)
    return tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))


KTF.set_session(get_session())

# your Keras code ...

or

from keras import backend as K
import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.3
session = tf.Session(config=config)
K.set_session(session)

# your Keras code ...
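
For completeness, here is a minimal end-to-end sketch of the second variant (assuming TensorFlow 1.3 / Keras 2.0.x; the small Dense model and random data are just placeholders for real code):

import numpy as np
import tensorflow as tf
from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense

# Cap this process at ~30% of GPU memory before any model is built
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.3
K.set_session(tf.Session(config=config))

# Toy model; the point is that it is built after set_session()
model = Sequential([Dense(10, activation='relu', input_shape=(20,)),
                    Dense(1, activation='sigmoid')])
model.compile(optimizer='adam', loss='binary_crossentropy')
model.fit(np.random.rand(64, 20), np.random.randint(0, 2, size=(64, 1)),
          epochs=1, verbose=0)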

Here is my environment:

dependencies:
- backports=1.0=py35_0
- backports.weakref=1.0rc1=py35_0
- bleach=1.5.0=py35_0
- certifi=2016.2.28=py35_0
- cudatoolkit=8.0=3
- cudnn=6.0.21=cuda8.0_0
- html5lib=0.9999999=py35_0
- libgcc=5.2.0=0
- libprotobuf=3.4.0=0
- markdown=2.6.9=py35_0
- mkl=2017.0.3=0
- numpy=1.13.1=py35_0
- openssl=1.0.2l=0
- pip=9.0.1=py35_1
- protobuf=3.4.0=py35_0
- python=3.5.4=0
- readline=6.2=2
- setuptools=36.4.0=py35_1
- six=1.10.0=py35_0
- sqlite=3.13.0=0
- tensorflow-gpu=1.3.0=0
- tensorflow-gpu-base=1.3.0=py35cuda8.0cudnn6.0_1
- tensorflow-tensorboard=0.1.5=py35_0
- tk=8.5.18=0
- werkzeug=0.12.2=py35_0
- wheel=0.29.0=py35_0
- xz=5.2.3=0
- zlib=1.2.11=0
- pip:
  - h5py==2.7.1
  - keras==2.0.8 # if this is 2.0.9, the GPU memory fraction will not work
  - pyyaml==3.12
  - scipy==1.0.0
  - tensorflow==1.3.0

All 5 comments

What do you mean, "it doesn't work"? You're not providing much info.

Same problem here. Here is what I found.

  1. Keras ignores os.environ["CUDA_VISIBLE_DEVICES"] and pre-allocates all available GPU memory on all GPUs.
  2. Even with tf_config.gpu_options.allow_growth = True, it still pre-allocates all GPU memory before anything is created or loaded.

2.0.8 was fine; this only happens in 2.0.9.
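
For reference, the setup I would expect to work (and which 2.0.9 appears to ignore) looks roughly like the following sketch; the device index "0" is only an example:

import os
# Must be set before TensorFlow/Keras is imported, otherwise it has no effect
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import tensorflow as tf
from keras import backend as K

# Grow GPU memory on demand instead of pre-allocating all of it
tf_config = tf.ConfigProto()
tf_config.gpu_options.allow_growth = True
K.set_session(tf.Session(config=tf_config))

# ... build and train the model here ...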

I believe this has been fixed on master. Try installing the GitHub version.

Yes, the GitHub master version fixed the problem. Thank you so much!

The fix will be in 2.1.0 (to be released very soon).
