model.fit(val_x, val_y, batch_size=batchsize, epochs=epoch, validation_data=(val_x, val_y), shuffle=False)
Epoch 7/8
32/500 [>.............................] - ETA: 6s - loss: 3.6637 - acc: 0.7500
64/500 [==>...........................] - ETA: 5s - loss: 3.7996 - acc: 0.6406
96/500 [====>.........................] - ETA: 5s - loss: 3.8429 - acc: 0.5938
128/500 [======>.......................] - ETA: 5s - loss: 3.9020 - acc: 0.5078
160/500 [========>.....................] - ETA: 4s - loss: 3.9094 - acc: 0.5000
192/500 [==========>...................] - ETA: 4s - loss: 3.9459 - acc: 0.4688
224/500 [============>.................] - ETA: 3s - loss: 3.9460 - acc: 0.4509
256/500 [==============>...............] - ETA: 3s - loss: 3.9373 - acc: 0.4531
288/500 [================>.............] - ETA: 3s - loss: 3.9160 - acc: 0.4618
320/500 [==================>...........] - ETA: 2s - loss: 3.9127 - acc: 0.4656
352/500 [====================>.........] - ETA: 2s - loss: 3.9094 - acc: 0.4574
384/500 [======================>.......] - ETA: 1s - loss: 3.8988 - acc: 0.4661
416/500 [=======================>......] - ETA: 1s - loss: 3.9018 - acc: 0.4567
448/500 [=========================>....] - ETA: 0s - loss: 3.9054 - acc: 0.4442
480/500 [===========================>..] - ETA: 0s - loss: 3.8961 - acc: 0.4458
500/500 [==============================] - 13s 27ms/step - loss: 3.8861 - acc: 0.4580 - val_loss: 4.2624 - val_acc: 0.1000
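(For context: the call above uses val_x/val_y both as the training data and as the validation data, so the gap between the final acc of 0.458 and the val_acc of 0.10 cannot come from a train/validation data mismatch. The actual model, batchsize and epoch are not shown in the issue; the sketch below is only an assumed, self-contained stand-in with the same shape of setup, not the original code.)

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Assumed stand-ins -- the real arrays, label count and model are not in the issue.
num_classes = 64                                    # hypothetical
val_x = np.random.rand(500, 32)                     # 500 samples, as in the progress bar above
val_y = np.random.randint(0, num_classes, size=(500,))
batchsize, epoch = 32, 8

model = Sequential([
    Dense(128, activation='relu', input_shape=(32,)),
    Dense(num_classes, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['acc'])

# Same arrays for training and validation, exactly as in the call above.
model.fit(val_x, val_y, batch_size=batchsize, epochs=epoch,
          validation_data=(val_x, val_y), shuffle=False)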
model.fit(val_x, val_y, batch_size=batchsize, epochs=epoch, validation_data=(val_x, val_y), shuffle=False)
for i in range(1):
    print('epoch------------------------------------ ', i)
    for j in range(len(val_y) // batchsize):
        print('part: ', j)
        x, y = val_x[j*batchsize: (j+1)*batchsize], val_y[j*batchsize: (j+1)*batchsize]
        c = model.train_on_batch(x, y)
        c0 = model.test_on_batch(x, y)
        c1 = model.evaluate(val_x, val_y, verbose=0)
        print('train', c, c0, '///', 'test', c1)
print(model.evaluate(val_x, val_y, verbose=0))
epoch------------------------------------ 0
part: 0
train [2.9277172, 0.96875] [4.16498, 0.28125] /// test [4.177521802902222, 0.10400000005960465]
part: 1
train [3.3343606, 0.6875] [4.322627, 0.0] /// test [4.172436357498169, 0.10000000005960465]
part: 2
train [3.4079463, 0.65625] [4.1922503, 0.0625] /// test [4.1690940742492675, 0.09600000005960464]
part: 3
train [3.5834055, 0.59375] [4.164328, 0.09375] /// test [4.167951114654541, 0.09400000023841858]
part: 4
train [3.450543, 0.75] [4.1976447, 0.03125] /// test [4.164755859375, 0.09200000023841857]
part: 5
train [3.723071, 0.28125] [4.3880234, 0.0625] /// test [4.160240522384644, 0.09600000011920928]
part: 6
train [3.4545927, 0.5625] [4.2311635, 0.15625] /// test [4.1561575050354005, 0.10200000011920929]
part: 7
train [3.356852, 0.625] [4.2445297, 0.03125] /// test [4.149132749557495, 0.11000000023841858]
part: 8
train [3.20354, 0.6875] [4.137046, 0.0625] /// test [4.144430877685547, 0.10800000005960464]
part: 9
train [3.3445919, 0.65625] [4.1739435, 0.15625] /// test [4.139644777297973, 0.10600000005960465]
part: 10
train [3.3949661, 0.53125] [4.1948667, 0.0625] /// test [4.133912534713745, 0.10400000002980232]
part: 11
train [3.1753829, 0.84375] [4.176406, 0.15625] /// test [4.128459690093994, 0.10600000002980232]
part: 12
train [3.396451, 0.78125] [4.212875, 0.0] /// test [4.122593105316162, 0.10400000002980232]
part: 13
train [3.3684978, 0.53125] [4.2350006, 0.03125] /// test [4.115411529541015, 0.10800000002980233]
part: 14
train [3.1372843, 0.59375] [4.027099, 0.21875] /// test [4.108483642578125, 0.10800000002980233]
[4.108483642578125, 0.10800000002980233]
the "acc" and "val_acc" are quite different.
'acc' is the training accuracy. It's the average of the accuracy values for each batch of training data during training. Note that because the model gets better during training, the accuracy for early batches is lower than the accuracy for later batches. This causes the average accuracy (what is reported) to be lower overall.
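A tiny numeric sketch of that averaging effect (the per-batch accuracies below are invented for illustration, not taken from the run above):

import numpy as np

# Hypothetical per-batch training accuracies over one epoch, improving as the model learns.
batch_acc = np.array([0.10, 0.20, 0.35, 0.50, 0.65, 0.75, 0.80, 0.85])

# What the progress bar reports as 'acc' is the running mean over the batches seen so far,
# so the epoch-level value stays well below what the final weights would score.
running_acc = np.cumsum(batch_acc) / np.arange(1, len(batch_acc) + 1)
print(running_acc[-1])   # ~0.53, even though the final weights score ~0.85 on the last batch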
'val_acc' is the validation accuracy. It's computed on the validation data (so not the same data), and it corresponds to the state of the model at the end of training.
So "acc" and "val_acc" are expected to be different. Usually, at the beginning of training, "val_acc" may be higher than "acc" because the model gets better during training (as stated above). After a while though, your model will start overfitting and "acc" will be better than "val_acc".
But here I used the same data for training and validation, so acc and val_acc should not be this different... which is why I ran the test above, training batch by batch with train_on_batch.