What is the difference between the loss_weights argument of the compile method (compile(self, optimizer, loss=None, metrics=None, loss_weights=None)) and the class_weight argument of the fit method (fit(self, x=None, y=None, batch_size=None, class_weight=None))?
The loss_weights parameter on compile is used to define how much each of your model's output losses contributes to the final loss value, i.e. it weights the per-output losses. You could have a model with 2 outputs where one is the primary output and the other is auxiliary, e.g. final loss = 1.0 * primary + 0.3 * auxiliary. The default value for each loss weight is 1.
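To make the combined-loss arithmetic concrete, here is a minimal plain-Python sketch of the weighted sum described above. The loss values and weights are illustrative, not taken from any real model:

```python
# Hypothetical per-output loss values for a 2-output model
# compiled with something like:
#   model.compile(optimizer='adam', loss=['mse', 'mse'],
#                 loss_weights=[1.0, 0.3])
primary_loss = 0.8   # loss of the primary output (made-up number)
aux_loss = 0.5       # loss of the auxiliary output (made-up number)

# With loss_weights=[1.0, 0.3], the final loss is the weighted sum:
loss_weights = [1.0, 0.3]
total_loss = loss_weights[0] * primary_loss + loss_weights[1] * aux_loss
print(total_loss)  # 1.0 * 0.8 + 0.3 * 0.5 = 0.95
```

Setting the auxiliary weight below 1 keeps its gradient signal in training while letting the primary output dominate the optimization.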
The class_weight parameter on fit is used to weight the importance of each sample, during training, based on the class it belongs to. This is typically used when you have an uneven distribution of samples per class.
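As a sketch of what that weighting amounts to, the snippet below scales illustrative per-sample losses by a class_weight dict of the same shape you would pass to fit (the labels, losses, and weights are all made up for the example):

```python
# Imbalanced binary labels: class 1 is rare, so we up-weight it,
# as you might with: model.fit(x, y, class_weight={0: 1.0, 1: 3.0})
y_true = [0, 0, 0, 1]
class_weight = {0: 1.0, 1: 3.0}

# Illustrative unweighted per-sample losses; each gets scaled by
# the weight of the class its sample belongs to.
per_sample_loss = [0.2, 0.1, 0.3, 0.4]
weighted = [class_weight[c] * l for c, l in zip(y_true, per_sample_loss)]
print(weighted)  # the rare-class sample's loss is tripled, roughly [0.2, 0.1, 0.3, 1.2]
```

Mistakes on the rare class now cost three times as much, pushing the model not to ignore it.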
How do I use loss_weights parameter to weight each of the training examples differently while calculating the loss function?
@xaram
How do I use the loss_weights parameter to weight each of the training examples differently while calculating the loss function?
You don't; you use the sample_weight argument of fit (https://www.tensorflow.org/api_docs/python/tf/keras/Model#fit), passed alongside the x and y data.
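A minimal sketch of the per-example weighting that sample_weight provides, again with made-up numbers, and ignoring Keras's exact reduction/normalization details:

```python
# Per-example weights, as you might pass with:
#   model.fit(x, y, sample_weight=np.array([1.0, 2.0, 0.5, 1.0]))
sample_weight = [1.0, 2.0, 0.5, 1.0]

# Illustrative unweighted per-sample losses; each is scaled by
# its own weight before the batch loss is reduced.
per_sample_loss = [0.2, 0.1, 0.3, 0.4]
weighted_sum = sum(w * l for w, l in zip(sample_weight, per_sample_loss))
batch_loss = weighted_sum / len(per_sample_loss)
print(batch_loss)  # (0.2 + 0.2 + 0.15 + 0.4) / 4 = 0.2375
```

Note the distinction: loss_weights scales per-output losses at compile time, class_weight scales samples by their class at fit time, and sample_weight scales each individual training example.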