Keras: Keras-Tensorflow does not produce reproducible results on GPU

Created on 20 Sep 2017 · 12 Comments · Source: keras-team/keras

Hello folks,

The reason I am asking is that there is no unified answer on how to get reproducible results with Keras-TensorFlow on GPU. So if you'd like to answer, that would help all of us.

I tried the following:

1) Import numpy. Seed it. Then import all other libraries

2) https://keras.io/getting-started/faq/#how-can-i-obtain-reproducible-results-using-keras-during-development
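
For reference, the recipe on that FAQ page boiled down to roughly the following (TF 1.x API as it stood at the time; the seed values are arbitrary):

```python
# Note: PYTHONHASHSEED must be set in the shell *before* Python starts,
# e.g.  PYTHONHASHSEED=0 python train.py
import numpy as np
import random as python_random
import tensorflow as tf

np.random.seed(42)
python_random.seed(42)
tf.set_random_seed(42)

# Force single-threaded execution so op scheduling cannot reorder
# floating-point reductions between runs.
session_conf = tf.ConfigProto(intra_op_parallelism_threads=1,
                              inter_op_parallelism_threads=1)
sess = tf.Session(graph=tf.get_default_graph(), config=session_conf)

from keras import backend as K
K.set_session(sess)
```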

What is the correct way of doing this?

Thanks in advance


Most helpful comment

If you are using a GPU and cuDNN, chances are this will not work. Some of the cuDNN algorithm implementations are non-deterministic.

All 12 comments

What is the problem you encounter? Did you seed random and tensorflow as well?
Could you share a little code that shows how to reproduce your issue?

I used the 2nd option on GPU yesterday, but it still didn't work.

I cannot share the full code, but I have shared my training script here:

https://gist.github.com/emirceyani/3a07b80d53158564e1f7c90ea4922785

The preprocessing is in pure Python, and there are custom blocks too.

Did you try setting the seed on the Dropout layer to 0?
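
For example (a minimal sketch; the layer sizes are made up, and a fixed dropout seed alone does not make GPU training deterministic):

```python
from keras.models import Sequential
from keras.layers import Dense, Dropout

# Hypothetical architecture, just to show the `seed` argument of Dropout.
model = Sequential([
    Dense(64, activation='relu', input_shape=(100,)),
    Dropout(0.5, seed=0),   # fixed seed -> the dropout mask is repeatable
    Dense(10, activation='softmax'),
])
```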

Hello, I tried the 2nd solution again and it worked on CPU. I haven't tried
it on GPU yet.


If you are using a GPU and cuDNN, chances are this will not work. Some of the cuDNN algorithm implementations are non-deterministic.

So just to clarify, if I am running keras, tensorflow and python3 on a GPU, there is currently no way to reproduce the same result?

Hello, I have the same question. I also tried the method recommended at https://keras.io/getting-started/faq/#how-can-i-obtain-reproducible-results-using-keras-during-development, but unfortunately it doesn't work. How can I reproduce the same results?

I too am looking for an answer to this question. Is there currently no way to reproduce the results when trained on GPU using Keras with tensorflow back-end?

I tried several approaches, and so far the only time I succeeded in getting reproducible results was when I forced the script to run on the CPU and explicitly selected the same CPU device using `with tf.device('/cpu:0'):`. It seems that as long as you use the GPU for training, you will not get the same results. I saw in other issue threads that someone said he managed it by updating TensorFlow to 1.5.0, but I couldn't try it.
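
A minimal sketch of that approach (standalone Keras on the TF backend; the data and layer sizes below are made up):

```python
import numpy as np
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense

# Dummy data, just to keep the sketch self-contained.
x_train = np.random.rand(256, 20).astype('float32')
y_train = np.random.randint(0, 2, size=(256, 1))

# Pinning the graph to the CPU avoids the non-deterministic GPU/cuDNN
# kernels entirely (alternatively, hide the GPU by exporting
# CUDA_VISIBLE_DEVICES="" before starting Python), at the cost of speed.
with tf.device('/cpu:0'):
    model = Sequential([
        Dense(16, activation='relu', input_shape=(20,)),
        Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy')
    model.fit(x_train, y_train, epochs=2, verbose=0)
```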

Please use the latest version of TensorFlow and test again. Feel free to reopen the issue if it still persists.
Thanks!

The results are still not reproducible with tensorflow-gpu==2.2.
They are reproducible on CPU though.

I could get reproducible results using the TensorFlow Determinism package.
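
For reference, a rough sketch of the switches involved, assuming TF 2.1 or later (older versions need the package's patch instead, and the exact mechanism varies by release):

```python
import os

# Ask TensorFlow to select deterministic GPU/cuDNN kernels (TF 2.1+).
# On TF 1.14/1.15/2.0 the tensorflow-determinism package offers an
# equivalent patch:  from tfdeterminism import patch; patch()
os.environ['TF_DETERMINISTIC_OPS'] = '1'

import random
import numpy as np
import tensorflow as tf

# Determinism still requires fixing the seeds of every RNG in play.
random.seed(42)
np.random.seed(42)
tf.random.set_seed(42)
```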
