The error of my problem is around or below 0.01. Since I am using mean_squared_error, the loss is on the order of 0.0001, and the progress bar no longer shows enough digits after the 2nd epoch, as in the example below:
Epoch 1/20
819007/819007 [==============================] - 123s - loss: 0.0002 - val_loss: 0.0001
Epoch 2/20
819007/819007 [==============================] - 127s - loss: 0.0001 - val_loss: 0.0001
Epoch 3/20
819007/819007 [==============================] - 126s - loss: 0.0001 - val_loss: 0.0001
Epoch 4/20
819007/819007 [==============================] - 127s - loss: 0.0001 - val_loss: 0.0001
You can, for example, create a callback with an on_epoch_end method and print the loss yourself with whatever precision you want.
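Something like the following minimal sketch should work (the class name and format string are my own, not part of Keras):

from keras.callbacks import Callback

class PreciseLossLogger(Callback):
    # Print loss and val_loss in scientific notation after every epoch.
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        print('Epoch %d: loss = %.6e, val_loss = %.6e'
              % (epoch + 1,
                 logs.get('loss', float('nan')),
                 logs.get('val_loss', float('nan'))))

Then pass it to fit, e.g. model.fit(X, y, validation_split=0.1, callbacks=[PreciseLossLogger()]).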
Yes indeed. Still, it would be good if the progress bar could automatically use scientific notation or some other way to show the extra digits in this case.
Another trivial solution is to scale your loss up by an arbitrary factor (e.g. x1000) by using a custom loss:

from keras.objectives import mean_squared_error as mse

loss = lambda y_true, y_pred: 1000 * mse(y_true, y_pred)
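For instance, wiring it into compile would look like this (the optimizer choice is just illustrative):

model.compile(optimizer='rmsprop', loss=loss)

Note that scaling the loss by a constant does not move the minimum, but it does scale the gradients, and hence the effective step size, by the same factor.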
I agree with @entron. What's the benefit of showing this useless output by default? Sure, it's easy to find a workaround for it, but I think that misses the point. I'd also go with scientific notation. Don't make common use cases harder than they need to be.
Thanks a lot for the answers and the quick fix! I am really amazed by the Keras community!