Keras: Validation loss nan but regular loss normal

Created on 27 Dec 2015 · 2 comments · Source: keras-team/keras

I have a graph model that is essentially an LSTM -> Dense, where the Dense layer's activation is a softmax over 3 classes (one-hot targets such as [0, 1, 0]). I have trained the model with categorical_crossentropy, squared_hinge, and hinge losses, all yielding the same problem: the validation loss is nan while the training loss is normal (e.g. 0.2139). I also trained with both Adam and RMSprop, with the same result. When passing a dictionary to the model (for fit or train), I apply np.nan_to_num(...) to ensure no input is nan (although zeros may still be present). When I call .fit(...) with verbose=1 and validation_split=0.2, my loss is normal but my validation loss is almost always nan. What is the issue? This did not happen with previous versions of Keras or Theano. Here is a screenshot of my loss vs. validation loss: http://i.imgur.com/RgdqK4T.png
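For reference, here is a minimal sketch of the setup described above. The layer sizes, data shapes, and random data are assumptions for illustration; the original used Keras's Graph API, but the Sequential API shows the same LSTM -> Dropout -> Dense(softmax) structure:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense, Dropout

timesteps, features, n_classes = 50, 10, 3  # assumed dimensions

model = Sequential()
model.add(LSTM(64, input_shape=(timesteps, features)))
model.add(Dropout(0.5))  # the layer later suspected of causing the nan val_loss
model.add(Dense(n_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')

# synthetic stand-in data; inputs sanitized as in the report
X = np.nan_to_num(np.random.randn(1000, timesteps, features))
y = np.eye(n_classes)[np.random.randint(0, n_classes, 1000)]  # one-hot, e.g. [0, 1, 0]

# symptom: loss is normal, but val_loss comes out nan
model.fit(X, y, validation_split=0.2, verbose=1)
```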

Most helpful comment

I removed Dropout and it fixed it.

All 2 comments

I am also hitting this problem, but my training loss is nan as well. Have you solved it?

I removed Dropout and it fixed it.
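For reference, a sketch of what the fixed model might look like (same assumed shapes as in the sketch above; the only change is deleting the Dropout layer):

```python
from keras.models import Sequential
from keras.layers import LSTM, Dense

timesteps, features, n_classes = 50, 10, 3  # same assumed dimensions as above

model = Sequential()
model.add(LSTM(64, input_shape=(timesteps, features)))
# Dropout layer removed entirely; with it gone, val_loss reportedly stays finite
model.add(Dense(n_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
```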
