@dmlc/xgboost-committer, please add your items here by editing this post. Let's ensure that […]
For other contributors who don't have permission to edit the post, please comment here about what you think should be in 1.1.0.
* `setup.py` (#5271, #5280)

Proposal: this time, let us not wait too long until the next release.
@JohnZed @datametrician
Please add the survival analysis objective: Accelerated Failure Time (https://github.com/dmlc/xgboost/pull/4763).
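For context, a minimal sketch of how the AFT objective from that PR is configured. The parameter names follow the AFT tutorial in the xgboost docs, but treat this as an illustration rather than a tested recipe; the commented `set_float_info` calls for censored labels are likewise indicative.

```python
# Sketch: training parameters for the Accelerated Failure Time objective.
params = {
    "objective": "survival:aft",          # Accelerated Failure Time
    "eval_metric": "aft-nloglik",
    "aft_loss_distribution": "normal",    # also: "logistic", "extreme"
    "aft_loss_distribution_scale": 1.0,
}
# Censored labels are supplied as (lower, upper) bounds on the DMatrix, e.g.:
# dtrain.set_float_info("label_lower_bound", y_lower)
# dtrain.set_float_info("label_upper_bound", y_upper)
```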
LGTM, especially #5023. Thanks @hcho3!
@avinashbarnwal See the first item in the list.
@hcho3 would be great if we can look into https://github.com/apache/incubator-tvm/issues/4953
@tqchen I will look.
@CodingCat can you please help us resolve the last JVM issue for the release?
Reverting the status for DMatrix refactoring.
@trivialfis Is DMatrix refactor blocking 1.1.0 release?
@hcho3 I just want https://github.com/dmlc/xgboost/pull/5504 in before 1.1. That PR fixes prediction on the device DMatrix, which is part of the DMatrix refactoring. Also, weighted sketching is not yet implemented for the device DMatrix, but that is not blocking.
Also, please let me take a deeper look into https://github.com/dmlc/xgboost/issues/5285. I will continue profiling in the coming days.
Got it. I am now reviewing #5123.
Labelled https://github.com/dmlc/xgboost/issues/5529 as breaking.
@tqchen I cannot reproduce the issue in apache/incubator-tvm#4953.
> I cannot reproduce the issue in apache/incubator-tvm#4953.

@tqchen Neither can I.
@hcho3 I'm happy to make the next release.
@hcho3 All blocking bugs are closed. Can we branch out? How can I help with the release process?
@trivialfis If you could provide a summary of your contributions, that would be great. I will create a new branch.
@hcho3 Here is a list of PRs that are related to me; I omitted some trivial changes.
* `xgboost[scikit-learn]` instead of `xgboost[sklearn]`. (#5310)
* `nan`. (#5538)
* `xgb.config` to get a JSON representation of internal `bst$raw`. (#5123)
* `xgb.Booster.complete`. (#5573)
* `JVM_CHECK_CALL` macro in the JVM C++ wrapper; this avoids some segfaults when `dmlc::Error` is thrown in C++. (#5199)
* Refactor prediction cache. (#5302, #5220, #5312)
  Now XGBoost caches all DMatrix objects and releases the cache once the DMatrix expires, so users no longer have to delete the booster before deleting the DMatrix. The caching logic is also simplified.
* Run GPU prediction on Ellpack page, which is part of the DMatrix refactoring. (#5327, #5504)
* `LearnerImpl`. (#5350)
* `cuda-memcheck`. (#5441)
* `DMLC_TASK_ID` for rabit initialization, for better logging messages. (#5415)
* `nthreads` from dask worker. (#5414)
* `DaskDMatrix` for prediction; now the dask package can return a […]
* Thread-safe, in-place prediction. (#5396, #5389, #5512)
  Now users can use `inplace_predict` on Python (including dask) and C for thread-safe, lock-free prediction on both CPU and GPU inputs.
* [Breaking] The `silent` parameter is completely removed; setting it no longer has any effect.
* [Breaking] Set output margin to `True` for custom objectives. (#5564)
  Now custom objectives in both the R and Python interfaces receive untransformed (margin) prediction outputs.
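The new prediction-cache lifetime described above can be sketched in plain Python with weak references. `FakeDMatrix` and `PredictionCache` below are illustrative stand-ins, not xgboost classes; the real cache lives in C++, but the ownership idea is the same.

```python
# Sketch: cache entries keyed weakly by the DMatrix-like object, so deleting
# the data releases the cache entry automatically, and the booster no longer
# has to be deleted before the DMatrix.
import gc
import weakref

class FakeDMatrix:                 # stand-in for xgboost.DMatrix
    def __init__(self, data):
        self.data = data

class PredictionCache:
    def __init__(self):
        self._cache = weakref.WeakKeyDictionary()

    def predict(self, dmat):
        if dmat not in self._cache:
            self._cache[dmat] = [x * 2.0 for x in dmat.data]  # dummy model
        return self._cache[dmat]

    def __len__(self):
        return len(self._cache)

cache = PredictionCache()
d = FakeDMatrix([1.0, 2.0])
cache.predict(d)
assert len(cache) == 1
del d                              # the DMatrix expires ...
gc.collect()
assert len(cache) == 0             # ... and its cache entry goes with it
```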
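The thread-safety point about `inplace_predict` can be illustrated with a pure-Python stand-in. `predict_one` here is hypothetical, not xgboost code; the pattern it shows is the same, though: prediction that only reads shared model state and writes nothing shared can run from many threads without a lock.

```python
# Sketch: lock-free prediction from multiple threads over an immutable model.
import threading

WEIGHTS = [0.5, -1.0, 2.0]         # immutable "model" shared by all threads

def predict_one(row):
    # Reads shared state, writes nothing shared -> safe without a lock.
    return sum(w * x for w, x in zip(WEIGHTS, row))

def worker(rows, out, idx):
    out[idx] = [predict_one(r) for r in rows]

rows = [[1.0, 1.0, 1.0], [2.0, 0.0, 1.0]]
results = [None] * 4
threads = [threading.Thread(target=worker, args=(rows, results, i))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Every thread computes identical predictions.
assert all(r == results[0] for r in results)
```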
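A minimal sketch of what the output-margin breaking change means in practice: a custom objective now receives raw margin scores, so e.g. a logistic objective must apply the sigmoid itself before computing gradients. The function name and signature below are illustrative, not the exact xgboost callback API.

```python
# Sketch: custom logistic objective working on raw (untransformed) margins.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def logistic_obj(margins, labels):
    # Gradients/Hessians of log-loss with respect to the raw margin.
    preds = [sigmoid(m) for m in margins]
    grad = [p - y for p, y in zip(preds, labels)]
    hess = [p * (1.0 - p) for p in preds]
    return grad, hess

grad, hess = logistic_obj([0.0, 2.0], [1.0, 0.0])
```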
I'll create the new release branch after #5577.
All blockers have been addressed. I will start a new release branch.
1.1.0 is now released.