Does LGB support dynamic learning rate?
Can the learning rate decrease gradually during training?
Also, I just updated LightGBM from 2.3.1 to 3.0.0. Is the old param "max_position" equivalent to "lambdarank_truncation_level"?
Does LGB support dynamic learning rate?
Yes, it does.
learning_rates (list, callable or None, optional (default=None)) – List of learning rates for each boosting round or a customized function that calculates learning_rate in terms of current number of round (e.g. yields learning rate decay).
https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.train.html#lightgbm.train
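As a minimal sketch of that API, you can pass a callable that maps the current boosting-round index to a learning rate, e.g. an exponential decay schedule (the `base_lr` and `decay` values below are illustrative assumptions, not LightGBM defaults):

```python
def lr_decay(current_round, base_lr=0.1, decay=0.99):
    """Exponential decay: returns base_lr * decay**current_round.

    LightGBM calls this once per boosting round, passing the
    round index, and uses the returned value as that round's
    learning rate.
    """
    return base_lr * decay ** current_round

# Usage (assuming lightgbm is installed and params/train_set exist):
# import lightgbm as lgb
# booster = lgb.train(params, train_set, num_boost_round=100,
#                     learning_rates=lr_decay)
```

Equivalently, you could pass a precomputed list such as `[0.1 * 0.99 ** i for i in range(100)]`, one value per boosting round.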
does the old param "max_position" equal to "lambdarank_truncation_level"?
Yes, formerly this param was named max_position. Please refer to https://github.com/microsoft/LightGBM/pull/2801/files.
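So migrating a 2.x LambdaRank config to 3.0 is just a key rename (a hypothetical params dict for illustration; the truncation value 10 is arbitrary):

```python
# LightGBM 2.x-style parameters using the old name.
old_params = {"objective": "lambdarank", "max_position": 10}

# LightGBM 3.0 renamed `max_position` to `lambdarank_truncation_level`;
# the value itself is unchanged.
new_params = dict(old_params)
new_params["lambdarank_truncation_level"] = new_params.pop("max_position")
```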
I also ran into this problem recently, thanks!