support sequence interface like lightgbm.Sequence
Is it possible to support a sequence interface (an object with __getitem__ and __len__) in DMatrix without copying data?
Is there any use case where numpy/pandas and the like are not a better alternative?
For time-series data such as stock exchange data, say we want to predict the next several days' returns: there are 100 features, and we roll the data over a 20-day window. To fit a DMatrix, we have to shift the features 20 times, so memory usage becomes 20x even though most of the data is duplicated. If I could define a custom __getitem__, it would greatly reduce memory usage.
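Roughly what I have in mind (the RollingWindowSequence class here is purely hypothetical, just to illustrate how __getitem__/__len__ could assemble each rolled row on demand instead of materializing 20 shifted copies):

import numpy as np

class RollingWindowSequence:
    """Hypothetical sequence: exposes the rolled rows while keeping only
    one (rows, n_features) array in memory, no copies."""

    def __init__(self, base, window=20):
        self.base = base          # shape: (rows, n_features)
        self.window = window

    def __len__(self):
        return len(self.base) - self.window + 1

    def __getitem__(self, i):
        # One training row = the 20-day window flattened, built on demand.
        return self.base[i:i + self.window].ravel()

base = np.random.randn(100_000, 100).astype('float32')   # ~40 MB
seq = RollingWindowSequence(base)                          # still ~40 MB
# Materializing the same rows up front costs roughly `window` times more:
# dense = np.stack([seq[i] for i in range(len(seq))])      # ~800 MB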
BTW, please do me a favor and check #9625.
Currently, we can consume data in batches by using the callback function. I took a quick look into LGB, which implements from_seq with a function named _push_rows. I assume that's similar to the callback function we use in terms of the underlying mechanism.
See https://github.com/dmlc/xgboost/blob/master/demo/guide-python/quantile_data_iterator.py
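A minimal sketch of that callback-style iterator, modeled on the linked demo (the BatchIter name and the in-memory toy batches are just placeholders):

import numpy as np
import xgboost

class BatchIter(xgboost.DataIter):
    """Yields pre-split in-memory batches; real code could load or compute each batch lazily."""

    def __init__(self, batches):
        self._batches = batches   # list of (X, y) tuples
        self._it = 0
        super().__init__()

    def reset(self):
        self._it = 0

    def next(self, input_data):
        if self._it == len(self._batches):
            return 0              # signal end of iteration
        X, y = self._batches[self._it]
        input_data(data=X, label=y)
        self._it += 1
        return 1                  # more batches to come

rng = np.random.default_rng(0)
batches = [(rng.standard_normal((1000, 10)), rng.standard_normal(1000)) for _ in range(4)]
Xy = xgboost.QuantileDMatrix(BatchIter(batches))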
BTW, please do me a favor and check https://github.com/dmlc/xgboost/issues/9625.
Sure, will look into it.
Currently, we can consume data in batches by using the callback function. I took a quick look into LGB, which implements from_seq with a function named _push_rows. I assume that's similar to the callback function we use in terms of the underlying mechanism. See https://github.com/dmlc/xgboost/blob/master/demo/guide-python/quantile_data_iterator.py
I have looked into this demo code. It looks like QuantileDMatrix consumes the whole data iterator more than once (4 times in my case). As a quantile structure, this saves a lot of memory. But for a ranking problem, how do we set the group weight if necessary? My original requirement is that the data be consumed at the training stage, not at the QuantileDMatrix.__init__ stage.
One more thing: there is a little trap one may fall into if not careful.
# run in version 1.7.6
import numpy as np
import pandas as pd
import xgboost as xgb
np.random.seed(42)
n_groups = 100
group_size = 2000
n_features = 10
n_levels = 20
rows = n_groups * group_size
features = pd.DataFrame(np.random.randn(rows, n_features).astype('float32'), columns=[f'f{i:03d}' for i in range(n_features)])
qids = pd.Series(np.arange(rows, dtype='int') // group_size)
labels = pd.Series(np.random.randn(rows).astype('float32')).groupby(qids).rank(method='first').sub(1) // (group_size // n_levels)
weights = np.arange(1, 101)
# dmatrix = xgb.DMatrix(features, label=labels, qid=qids)
qmatrix = xgb.QuantileDMatrix(features, label=labels, qid=qids)
sub_rows = 10000
# Predict on the last 10000 rows via three routes; note the QuantileDMatrix below
# is built from scratch, without referencing the training matrix.
sub_qmatrix = xgb.QuantileDMatrix(features.tail(sub_rows))
sub_dmatrix = xgb.DMatrix(features.tail(sub_rows))
params = {
    'objective': 'rank:pairwise',
    # 'objective': 'multi:softprob',
    # 'num_class': n_levels,
    'base_score': 0.5,
    # 'lambdarank_pair_method': 'mean',
    # 'lambdarank_num_pair_per_sample': 1,
    'booster': 'gbtree',
    'tree_method': 'hist',
    'verbosity': 1,
    # 'seed': 42,
    'learning_rate': 0.1,
    'max_depth': 6,
    'gamma': 1,
    'min_child_weight': 4,
    'subsample': 0.9,
    'colsample_bytree': 0.7,
    'nthread': 20,
    'reg_lambda': 1,
    'reg_alpha': 1,
    'eval_metric': ['ndcg@100', 'ndcg@500', 'ndcg@1000'],
}
booster = xgb.train(params, qmatrix, 100, verbose_eval=10, evals=[(qmatrix, 'train')])
preds_d = booster.predict(sub_dmatrix)            # plain DMatrix built from the subset
preds_q = booster.predict(sub_qmatrix)            # standalone QuantileDMatrix built from the subset
preds_o = booster.predict(qmatrix)[-sub_rows:]    # training QuantileDMatrix, same rows
assert np.allclose(preds_d, preds_q)  # False: raises AssertionError
assert np.allclose(preds_o, preds_q)  # False: raises AssertionError
assert np.allclose(preds_o, preds_d)  # True: these two agree
The script above raises an AssertionError. So if one trains a booster with a QuantileDMatrix and then predicts with a QuantileDMatrix that does not originate from the training one, wrong predictions may occur, since the histogram split points change, I guess.
There's a ref parameter for QuantileDMatrix, which is used to take the histogram cuts from the training data.
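For the script above, that would be something like this (a one-line sketch; ref tells the new QuantileDMatrix to reuse the training matrix's quantile cuts):

# Reuse the cut points from the training matrix so predictions match.
sub_qmatrix = xgb.QuantileDMatrix(features.tail(sub_rows), ref=qmatrix)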
consumes the whole data iterator more than once
Yes, that's expected; we need to iterate over the batches to gather the needed statistics.
But for a ranking problem, how do we set the group weight if necessary?
Pass the weight in the next method, as in the example.
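Roughly like this, extending the iterator sketch above (assuming your XGBoost version accepts qid and weight through the callback, the same fields DMatrix.set_info takes):

def next(self, input_data):
    if self._it == len(self._batches):
        return 0
    X, y, qid, w = self._batches[self._it]     # for ranking, XGBoost expects one weight per query group
    input_data(data=X, label=y, qid=qid, weight=w)
    self._it += 1
    return 1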
My original requirement is that the data be consumed at the training stage, not at the QuantileDMatrix.__init__ stage.
I don't think that's possible for any GBM implementation at the moment, assuming you want to generate the batches (rolling window) on demand. XGBoost has external memory support; please find it in the documentation, which might help reduce memory usage.
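If it helps, the gist of the external memory route looks like this (a sketch following the external memory tutorial; the cache path and toy batches are illustrative):

import os
import numpy as np
import xgboost as xgb

class CachedIter(xgb.DataIter):
    """Same shape as the earlier iterator, but batches get paged to disk."""

    def __init__(self, batches):
        self._batches = batches
        self._it = 0
        # cache_prefix enables external memory: batches are cached on disk
        super().__init__(cache_prefix=os.path.join(".", "cache"))

    def reset(self):
        self._it = 0

    def next(self, input_data):
        if self._it == len(self._batches):
            return 0
        X, y = self._batches[self._it]
        input_data(data=X, label=y)
        self._it += 1
        return 1

rng = np.random.default_rng(0)
batches = [(rng.standard_normal((1000, 10)), rng.standard_normal(1000)) for _ in range(4)]
Xy = xgb.DMatrix(CachedIter(batches))   # external-memory DMatrix, trainable as usual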
Feel free to reopen if there are further questions