xgboost.core.XGBoostError: label must be in [0,1] for logistic regression

Created on 14 Jul 2016 · 2 comments · Source: dmlc/xgboost

When I try to train using the external memory option, I get this error. Also notable: before the label error, the following message is printed multiple times:
"Writting to dtrain.cache.col.blob in 0 MB/s, 0 MB written"

Traceback (most recent call last):
  File "cache_train.py", line 21, in <module>
    bst = xgb.train(param, dtrain, num_round, evallist)
  File "/usr/lib/python2.6/site-packages/xgboost/training.py", line 121, in train
    bst.update(dtrain, i, obj)
  File "/usr/lib/python2.6/site-packages/xgboost/core.py", line 694, in update
    _check_call(_LIB.XGBoosterUpdateOneIter(self.handle, iteration, dtrain.handle))
  File "/usr/lib/python2.6/site-packages/xgboost/core.py", line 97, in _check_call
    raise XGBoostError(_LIB.XGBGetLastError())
xgboost.core.XGBoostError: label must be in [0,1] for logistic regression


My code:

import xgboost as xgb
import time

print("Loading dtrain")
start = time.clock()
dtrain = xgb.DMatrix('train_features_123_1245.txt#dtrain.cache')
end = time.clock()
print("Loaded dtrain in " + str(end - start))
print("Loading dtest")
start = time.clock()
dtest = xgb.DMatrix('test_features_123_1245.txt#dtest.cache')
end = time.clock()
print("Loaded dtest in " + str(end - start))
param = {'bst:max_depth':2, 'bst:eta':1, 'silent':1, 'objective':'binary:logistic' }
param['nthread'] = 8
param['eval_metric'] = 'auc'
evallist  = [(dtest,'eval'), (dtrain,'train')]
num_round = 10
print("Training")
start = time.clock()
bst = xgb.train(param, dtrain, num_round, evallist)
end = time.clock()
print("Trained in " + str(end-start))
print("Dumping model")
start = time.clock()
bst.save_model('0001.model')
end = time.clock()
print("Dumped in " + str(end-start))

Any ideas?
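The error message itself points at the likely cause: the `binary:logistic` objective requires every label to lie in [0,1], so if the first column of the LibSVM-format training file contains values like 1/2 or -1/+1, training fails exactly like this. A minimal sketch for scanning a file before building the DMatrix, assuming LibSVM format with the label as the first token on each line (the helper name and path are illustrative, not part of xgboost):

```python
def find_bad_labels(path, allowed=(0.0, 1.0)):
    """Return (line_number, label) pairs whose label is not in `allowed`.

    Assumes LibSVM format: each non-empty line starts with the label.
    """
    bad = []
    with open(path) as f:
        for lineno, line in enumerate(f, start=1):
            line = line.strip()
            if not line:
                continue
            label = float(line.split(None, 1)[0])
            if label not in allowed:
                bad.append((lineno, label))
    return bad

# Hypothetical usage with the file from the question:
# print(find_bad_labels('train_features_123_1245.txt'))
```

If the scan turns up labels such as -1/+1 or 1/2, remapping them to 0/1 (or choosing an objective that accepts those labels) should resolve the error.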

All 2 comments

Old question, no answer. Closing.

I've got the same problem, but I haven't found a solution either.
