Keras: Unable to get reproducible results using Keras with TF backend on GPU

Created on 8 May 2019 · 5 comments · Source: keras-team/keras

I followed all the steps in the Keras FAQ entry on obtaining reproducible results (https://keras.io/getting-started/faq/#how-can-i-obtain-reproducible-results-using-keras-during-development), but I still don't get the same results on every run.
import os
import random as rn

import numpy as np
import tensorflow as tf

seed_value = 0

# Fix the Python hash seed and seed the Python, NumPy and TensorFlow RNGs
os.environ['PYTHONHASHSEED'] = str(seed_value)
rn.seed(seed_value)
np.random.seed(seed_value)
tf.set_random_seed(seed_value)

# Limit TensorFlow to a single thread so op scheduling is deterministic
session_conf = tf.ConfigProto(intra_op_parallelism_threads=1,
                              inter_op_parallelism_threads=1)

from keras import backend as K

# Register the configured session with the Keras backend
sess = tf.Session(graph=tf.get_default_graph(), config=session_conf)
K.set_session(sess)

# The pipeline also uses PyTorch, so seed it as well
import torch
torch.manual_seed(seed_value)

What more do I have to do to get reproducible results?

I'm running the code on Google Colab GPU.

Labels: tensorflow, awaiting response, support

All 5 comments

Hi, I'm working on making TensorFlow operate deterministically on GPUs. We currently have a solution, part of which will be in the next TensorFlow release (v1.14 or v2.0). There is an additional piece to the solution which has not yet been upstreamed to the main TensorFlow release, but will likely be released in the next NVIDIA NGC TensorFlow container (v19.06).

Take a look at this video of a recent talk I gave about this at GTC, and also note all the information and links in the comments: http://bit.ly/determinism-in-deep-learning

@kalyanks0611 Is this resolved? Thanks!

Closing due to lack of recent activity. Please update the issue when new information becomes available, and we will reopen the issue. Thanks!

This is fixed when using NGC TF container version 19.06+.

Determinism - Setting the environment variable TF_CUDNN_DETERMINISM=1 forces the selection of deterministic cuDNN convolution and max-pooling algorithms. When this is enabled, the algorithm selection procedure itself is also deterministic.

Alternatively, setting TF_DETERMINISTIC_OPS=1 has the same effect and additionally makes any bias addition that is based on tf.nn.bias_add() (for example, in Keras layers) operate deterministically on GPU. If you set TF_DETERMINISTIC_OPS=1 then there is no need to also set TF_CUDNN_DETERMINISM=1.
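
For illustration, a minimal sketch of how this could be wired into the script from the question (assuming an NGC TF 19.06+ container, or an upstream TF build that honors these flags; the variable just needs to be in the process environment before the GPU ops run):

import os

# Assumption: NGC TF 19.06+ (or an upstream build with this support).
# Makes cuDNN convolution/max-pooling and tf.nn.bias_add-based bias additions
# deterministic on GPU; TF_CUDNN_DETERMINISM=1 alone would cover only the cuDNN part.
os.environ['TF_DETERMINISTIC_OPS'] = '1'

import tensorflow as tf  # import after the environment variable is set

# ... then seed Python/NumPy/TF as in the question and build/train the model as usual ...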

TF_CUDNN_DETERMINISM is also implemented in upstream TF 1.14.0, but this is unfortunately not mentioned in the release notes (I'm working on that).

I am also working on getting TF_DETERMINISTIC_OPS into upstream TensorFlow.

Following up on my previous comment. For an up-to-date status on TensorFlow deterministic operation on GPUs (and solutions), please see https://github.com/NVIDIA/tensorflow-determinism.
