Hello,
I am using a graph model with one input and multiple outputs, and I want to access the epoch number inside a custom loss function:
import keras.backend as K

def alphabinary(alpha):
    def binary_cross(y_true, y_pred):
        return alpha * K.mean(K.binary_crossentropy(y_true, y_pred), axis=-1)
    return binary_cross
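For context, the closure is passed at compile time roughly like this (the optimizer and alpha value are just placeholders); note that a plain Python float is frozen into the graph when the model is compiled, which is why alpha can't simply be updated afterwards:

model.compile(optimizer='adam', loss=alphabinary(0.5))  # alpha fixed at 0.5 for the whole run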
My objective is to make the "alpha" parameter a function of the epoch, i.e. alpha(epoch).
I was able to do it outside the loss by recompiling, but my code is kind of ugly, so I am looking for a better way.
(PS: my graph model is doing target replication, as in Deeply-Supervised Nets.)
Thank you very much for your help, and have a nice day!
I have the same question! It would be great if someone could share the answer.
I think you need a Callback; you can try this code.

import keras.backend as K
from keras.callbacks import Callback

alpha = K.variable(1.)

class NewCallback(Callback):
    def __init__(self, alpha):
        self.alpha = alpha

    def on_epoch_end(self, epoch, logs={}):
        K.set_value(self.alpha, K.get_value(self.alpha) * epoch**0.95)

model.fit(..., callbacks=[NewCallback(alpha)], ...)
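Spelled out end to end, the pattern looks like this (a minimal sketch; AlphaScheduler, the toy model, and x_train/y_train are placeholder names, not from this thread). The key point is that the very same K.variable is captured by the loss closure and handed to the callback, and it enters the loss as a tensor, so K.set_value changes what the compiled graph reads:

import keras.backend as K
from keras.callbacks import Callback
from keras.layers import Dense
from keras.models import Sequential

alpha = K.variable(1.)  # shared variable: the loss reads it, the callback writes it

def alphabinary(alpha):
    def binary_cross(y_true, y_pred):
        # alpha enters the graph as a tensor, so later K.set_value calls are picked up
        return alpha * K.mean(K.binary_crossentropy(y_true, y_pred), axis=-1)
    return binary_cross

class AlphaScheduler(Callback):
    def __init__(self, alpha):
        self.alpha = alpha

    def on_epoch_end(self, epoch, logs=None):
        # same schedule as in the comment above
        K.set_value(self.alpha, K.get_value(self.alpha) * epoch ** 0.95)

model = Sequential([Dense(1, activation='sigmoid', input_shape=(10,))])
model.compile(optimizer='adam', loss=alphabinary(alpha))
model.fit(x_train, y_train, epochs=20, callbacks=[AlphaScheduler(alpha)])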
I had the same question, but found a working answer here:
https://stackoverflow.com/questions/42995711/keras-epoch-dependant-loss-function?rq=1
Just recompile the model whenever alpha changes. Recompiling takes some time, so it won't be ideal if alpha changes every epoch (mine only changes several times across training).
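A minimal sketch of that recompiling route (the phase schedule and x_train/y_train here are made up; alphabinary is the closure from the top of the thread). initial_epoch keeps the epoch counter running across the separate calls to fit:

# alpha only changes at a few phase boundaries, so recompile between phases
phases = [(0, 10, 1.0), (10, 30, 0.5), (30, 50, 0.1)]  # (start_epoch, end_epoch, alpha)
for start, end, alpha in phases:
    model.compile(optimizer='adam', loss=alphabinary(alpha))
    model.fit(x_train, y_train, initial_epoch=start, epochs=end)

One caveat: passing the optimizer as a string builds a fresh optimizer on every compile, so stateful optimizers like Adam lose their moment estimates at each phase boundary.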
@lovecambi Your suggestion is way more elegant than recompiling the model, and it works. Though I think it should be 0.95 ** epoch rather than epoch ** 0.95. Thank you!
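One reading of that fix, rewriting the callback from above:

class NewCallback(Callback):
    def __init__(self, alpha):
        self.alpha = alpha

    def on_epoch_end(self, epoch, logs=None):
        # 0.95 ** epoch decays alpha geometrically; epoch ** 0.95 grows instead,
        # and multiplying by it also zeroes alpha permanently at epoch 0
        K.set_value(self.alpha, 0.95 ** epoch)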
hi @lovecambi @JihongJu, I also want to use a callback to update my loss function based on the epoch. In @lovecambi's code, I'm confused about how the alpha in the callback gets connected to the alpha in the loss function. Would you mind explaining the code, if possible?
hi @nabsabraham, did you finally get the solution by @lovecambi to work? I'm also trying to implement it. The callback seems to update self.alpha correctly every epoch, but the update never reaches my custom loss function; instead, alpha keeps its initialization value forever.
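A common cause of exactly that symptom (an assumption, since the full code isn't shown here): the value of alpha, rather than the variable itself, gets captured by the closure, so it is baked into the graph at compile time:

# Broken: a plain float is frozen into the loss at compile time,
# so later K.set_value calls have no effect
model.compile(optimizer=opt, loss=alphabinary(K.get_value(alpha)))

# Working: pass the K.variable itself; the loss reads its current value each step
model.compile(optimizer=opt, loss=alphabinary(alpha))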
@CVxTz how did you pass your custom alphabinary(alpha) loss function at compilation time? Doing the following does not seem to work for me.
alpha = K.variable(1.)
model.compile(optimizer=opt, loss=alphabinary(alpha), metrics=...)
thanks!
Sorry @edufonseca, I had the same question and could not figure out how to do this in Keras. In PyTorch, this is much simpler.
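For anyone wondering why this is simpler in PyTorch: the loss is computed inside your own training loop, so the epoch is just a local variable (a sketch; model, optimizer, loader and num_epochs are placeholders):

import torch.nn.functional as F

for epoch in range(num_epochs):
    alpha = 0.95 ** epoch  # any function of the epoch works
    for x, y in loader:
        optimizer.zero_grad()
        loss = alpha * F.binary_cross_entropy(model(x), y)
        loss.backward()
        optimizer.step()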
Did anyone manage to make this work? Having a loss function which depends on the epoch would be really interesting for my research!
hi @nabsabraham, did you find any solution? I also want to use a callback to update my loss function based on the epoch.
I did not find a solution and moved to PyTorch, where this is a very simple fix.
This answer solved my problem.
https://stackoverflow.com/a/52713046/4755986
It works for me using a callback and a function closure. Check the repo below for working code.
callback:
https://github.com/edufonseca/waspaa19/blob/8c5dd31922911be861f7c5f4dcc7eec058c8ad67/main.py#L464
compile model within a class:
https://github.com/edufonseca/waspaa19/blob/8c5dd31922911be861f7c5f4dcc7eec058c8ad67/loss_time.py#L35
function closure:
https://github.com/edufonseca/waspaa19/blob/8c5dd31922911be861f7c5f4dcc7eec058c8ad67/loss_time.py#L123
hth