I am using LightGBM to model a Poisson-distributed variable.
For this variable I have an offset given by the logarithm of "e", where "e" is one of the predictors.
If I have understood correctly, I can set the init score to log(e) when training LightGBM.
Unfortunately, I cannot use an init score for the predictions, since the predict() method does not accept a Dataset object.
I wonder how this "init score" is implemented? And why is there no need to use it for the predictions?
Thanks in advance
Basically, in each round of training LightGBM fits a weak tree to the residuals of a dataset, and `init_score` provides the starting values those first residuals are computed against.
Practically speaking, `init_score` is used in the scenario where you need to train your model further against some dataset; it represents the output of the previous model.
For predictions, you should add the initial scores yourself.
If I am understanding correctly, the scores I obtain from the predictions of a model trained with init_score would satisfy:
init_score + predictions = final_predictions
Is that correct?
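The arithmetic above can be sketched with plain numpy, assuming the Poisson objective from the original question, where the raw scores live on the log scale (the array values are made up):

```python
import numpy as np

init_score = np.log(np.array([1.0, 2.0, 0.5]))  # log of the exposure "e"
raw = np.array([0.2, -0.1, 0.4])                # booster.predict(X, raw_score=True)

# The booster's raw output does not include init_score;
# add it back before applying the inverse link.
final_raw = init_score + raw
poisson_mean = np.exp(final_raw)
```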
Yeah, and you should use the raw_score in prediction, and do your own sigmoid/softmax transform if in binary/multi classification.
Sorry, I'd like to ask a follow-up on this question.
1. Should init_score be the previous learner's predicted class, or the corresponding class probability?
2. Is final_predictions = sigmoid(LightGBM's raw_score + the previous learner's raw_score)?
Thanks.
@JYLFamily
- init_score is the raw score, before any transformation.
- yes
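For the binary case discussed above, a sketch of the manual transform: combine the previous learner's raw score (the init_score) with the new booster's raw score, then apply the sigmoid yourself. The array values are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

prev_raw = np.array([0.5, -1.2, 0.0])  # raw score of the previous learner (init_score)
lgb_raw = np.array([0.3, 0.4, -0.7])   # booster.predict(X, raw_score=True)

# final probability = sigmoid(previous raw score + new raw score)
prob = sigmoid(prev_raw + lgb_raw)
```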
One more question: is lgb.predict(raw_score=True) == LightGBM's raw_score + the previous learner's raw_score, or is lgb.predict(raw_score=True) == LightGBM's raw_score alone?
Thanks a lot.
@JYLFamily The prediction result doesn't include the init_score, you should add it by yourself.