Keras: Epoch is not working properly for KerasRegressor

Created on 5 May 2017 · 4 comments · Source: keras-team/keras

I'm just getting started with Keras on TensorFlow. In the code below I have specified nb_epoch=100, but it does not run for 100 epochs; it executes only 10. Is there a bug in my code? I'm also attaching the output at the bottom.

Code:

import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
import sklearn.metrics as metrics
from utils import encode_numeric_zscore, to_xy

dataset = pd.read_csv('data/boston-housing.csv', dtype=np.float32)

for col in dataset.columns:
    if col != 'medv':
        encode_numeric_zscore(dataset, col)

features, target = to_xy(dataset, 'medv')

X_train, X_test, Y_train, Y_test = train_test_split(features, target, test_size=0.25, random_state=42)


def chart_regression(pred, y):
    # Sort by the true values so the expected curve is monotone,
    # then plot the predictions against it.
    t = pd.DataFrame({'pred': pred, 'y': y.flatten()})
    t.sort_values(by=['y'], inplace=True)
    plt.plot(t['y'].tolist(), label='expected')
    plt.plot(t['pred'].tolist(), label='prediction')
    plt.ylabel('output')
    plt.legend()
    plt.show()


from tensorflow.contrib.keras.api.keras.models import Sequential
from tensorflow.contrib.keras.api.keras.layers import Dense, Dropout


def build_nn():
    model = Sequential()
    model.add(Dense(75, input_shape=(13,), activation="relu", kernel_initializer="glorot_uniform"))
    model.add(Dense(55, activation="relu"))
    model.add(Dropout(0.005))
    model.add(Dense(35, activation="relu"))
    model.add(Dropout(0.005))
    model.add(Dense(11, activation="relu"))
    model.add(Dropout(0.005))
    model.add(Dense(1, activation='relu', kernel_initializer="normal"))
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model


seed = 7
np.random.seed(seed)

from tensorflow.contrib.keras.api.keras.wrappers.scikit_learn import KerasRegressor

regressor = KerasRegressor(build_fn=build_nn, nb_epoch=100, batch_size=1, verbose=1)

regressor.fit(X_train, Y_train)

prediction = regressor.predict(X_test, batch_size=1)

score = metrics.mean_squared_error(Y_test, prediction)

print('\n')

print("Final score (MSE): {}".format(score))

print('\n')

score = np.sqrt(metrics.mean_squared_error(prediction, Y_test))
print("Final score (RMSE): {}".format(score))

chart_regression(prediction, Y_test)
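
For reference, the encode_numeric_zscore and to_xy helpers imported at the top come from a local utils module that the issue doesn't include. Judging from how they are used, they presumably look something like this sketch (implementations assumed, not the actual module):

import numpy as np

def encode_numeric_zscore(df, name):
    # Replace a numeric column with its z-score: (x - mean) / std.
    df[name] = (df[name] - df[name].mean()) / df[name].std()

def to_xy(df, target):
    # Split a DataFrame into a float32 feature matrix and target vector,
    # the array shapes Keras expects.
    feature_cols = [c for c in df.columns if c != target]
    return (df[feature_cols].values.astype(np.float32),
            df[[target]].values.astype(np.float32))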

Output:

Epoch 1/10
2017-05-05 15:30:37.711608: W c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE instructions, but these are available on your machine and could speed up CPU computations.
[seven analogous cpu_feature_guard warnings for SSE2, SSE3, SSE4.1, SSE4.2, AVX, AVX2, and FMA]
379/379 [==============================] - 0s - loss: 230.0349
Epoch 2/10
379/379 [==============================] - 0s - loss: 24.4095
Epoch 3/10
379/379 [==============================] - 0s - loss: 18.3086
Epoch 4/10
379/379 [==============================] - 0s - loss: 16.0613
Epoch 5/10
379/379 [==============================] - 0s - loss: 18.2714
Epoch 6/10
379/379 [==============================] - 0s - loss: 13.5761
Epoch 7/10
379/379 [==============================] - 0s - loss: 13.8573
Epoch 8/10
379/379 [==============================] - 0s - loss: 13.2340
Epoch 9/10
379/379 [==============================] - 0s - loss: 14.5469
Epoch 10/10
379/379 [==============================] - 0s - loss: 13.1696
  1/127 [..............................] - ETA: 1s

Final score (MSE): 11.964310646057129


Final score (RMSE): 3.458946466445923

Most helpful comment

We should use the keyword epochs, not nb_epoch (which is deprecated).
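
Applied to the code above, the fix is a one-line change (a minimal sketch; only the keyword changes):

regressor = KerasRegressor(build_fn=build_nn, epochs=100, batch_size=1, verbose=1)
regressor.fit(X_train, Y_train)

With epochs=100 the wrapper forwards the value to fit; with nb_epoch it is silently dropped and Keras falls back to fit's default epoch count, which is why the log above shows exactly 10 epochs.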

All 4 comments

It should be epochs, not nb_epoch, I think.

The new API is epochs. Please take a look at the code of KerasRegressor
to try to identify why nb_epoch is not taken into account (it should be
backwards compatible, I think).


It gets filtered out by filter_sk_params in keras.wrappers.scikit_learn, since nb_epoch is in the kwargs. Fixing this would be ugly...
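
To illustrate the mechanism (a simplified sketch, not the actual Keras source): the wrapper keeps only the keyword arguments that appear in the target function's signature, and Sequential.fit in Keras 2 declares epochs, so nb_epoch is discarded before fit's legacy-name handling could ever see it.

import inspect

def filter_params_sketch(fn, params):
    # Keep only the kwargs that fn actually declares; everything else
    # (including the deprecated nb_epoch) is silently dropped.
    legal = inspect.signature(fn).parameters
    return {k: v for k, v in params.items() if k in legal}

# e.g. filter_params_sketch(Sequential.fit, {'nb_epoch': 100, 'batch_size': 1})
# -> {'batch_size': 1}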

We should use the keyword epochs, not nb_epoch (which is deprecated).
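
A quick way to confirm the setting takes effect: fit on a KerasRegressor returns the Keras History object (assuming the Keras 2 wrapper API), so the realized epoch count can be read back directly:

history = regressor.fit(X_train, Y_train)
print(len(history.history['loss']))  # 100 once epochs=100 is actually used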
