relbench
Predict column task
This PR adds a general Predict Column task, with which we can predict any column in any table, provided that table has a time column by which the data is split into train/val/test. The work was done at Stanford University in collaboration with @rishabh-ranjan and Prof. Jure Leskovec.
The task is constructed by specifying a `predict_column_task_config` dictionary, which needs the following keys:
- "entity_table": The name of the entity table.
- "entity_col": The name of the entity (id) column. Can be None if the entity table has no id column.
- "time_col": The name of the time column by which the data is split into training and validation set.
- "target_col": The name of the target column to be predicted.
The target table is then constructed by removing the target column from the dataset at initialization time, while saving a copy to the database under `db.table_dict[entity_table].removed_cols`. Then, in the `make_table` function, the saved target column is joined to the id and time columns of the entity table.
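To make the flow above concrete, here is a minimal sketch on toy data (simplified stand-in helpers, not relbench's actual implementation): the target column is dropped from the entity table, cached under a `removed_cols` mapping, and later joined back to the id and time columns by a `make_table`-style function.

```python
import pandas as pd

# Toy "results" table with an id column, a time column, and the target column.
results = pd.DataFrame({
    "resultId": [1, 2, 3],
    "date": pd.to_datetime(["2020-01-05", "2020-02-09", "2020-03-15"]),
    "position": [4, 1, 7],
})

# At dataset initialization: remove the target column but keep a copy
# (mimicking db.table_dict[entity_table].removed_cols).
removed_cols = {"position": results["position"].copy()}
entity_df = results.drop(columns=["position"])

# In make_table: join the saved target column to the id and time columns
# of the entity table.
def make_table(entity_df, removed_cols, entity_col, time_col, target_col):
    table = entity_df[[entity_col, time_col]].copy()
    table[target_col] = removed_cols[target_col]
    return table

target_table = make_table(entity_df, removed_cols, "resultId", "date", "position")
print(target_table.columns.tolist())  # ['resultId', 'date', 'position']
```

In the real task, the train/val/test splits are then obtained by filtering this table on the time column.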
When specifying the task, we pass the `predict_column_task_config` to the `PredictColumnTask` class (code copied from `examples/gnn_predict_column.py`):
```python
predict_column_task_config = {
    "task_type": TaskType[args.task_type],
    "entity_table": args.entity_table,
    "entity_col": args.entity_col if args.entity_col else None,
    "time_col": args.time_col,
    "target_col": args.target_col,
}
dataset: Dataset = get_dataset(args.dataset, download=True)
dataset.target_col = args.target_col
dataset.entity_table = args.entity_table
task = PredictColumnTask(dataset=dataset, **predict_column_task_config)
```
Examples
Three files are added in the examples directory: `baseline_predict_column.py`, `gnn_predict_column.py`, and `lightgbm_predict_column.py`.
F1: predict the position of the driver at the end of the race
The config for this task is the default in the example scripts.
Config:
```python
predict_column_task_config = {
    "task_type": "REGRESSION",
    "entity_table": "results",
    "entity_col": "resultId",
    "time_col": "date",
    "target_col": "position",
}
```
Results:
BASELINE global mean:
Test: {'r2': -0.17577459931473594, 'mae': 4.8224954508192335, 'rmse': 5.808236989872365}
LIGHTGBM:
Test: {'r2': 0.44705600084929853, 'mae': 2.768755484772649, 'rmse': 3.9831151885538802}
GNN:
Test metrics: {'r2': 0.792852150915373, 'mae': 1.8071745650974003, 'rmse': 2.437937151125567}
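For reference, the "global mean" baseline simply predicts the training-set mean for every test row and scores it with the same metrics reported above. A minimal sketch on toy numbers (not the actual F1 dataset); note that r2 can go negative when the train mean is a poor fit for the test split, as in the baseline result above:

```python
import math

train_y = [1.0, 2.0, 3.0, 4.0]
test_y = [2.0, 5.0]

# Global-mean baseline: a single constant prediction for all test rows.
pred = sum(train_y) / len(train_y)  # 2.5

mae = sum(abs(y - pred) for y in test_y) / len(test_y)
rmse = math.sqrt(sum((y - pred) ** 2 for y in test_y) / len(test_y))

# r2 compares the baseline's squared error against that of the
# test-set mean; a constant train-mean predictor can score below 0.
mean_test = sum(test_y) / len(test_y)
ss_res = sum((y - pred) ** 2 for y in test_y)
ss_tot = sum((y - mean_test) ** 2 for y in test_y)
r2 = 1 - ss_res / ss_tot
```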
Thanks for the PR @martinjurkovic ! Can you please address some comments:
- [ ] remove `entity_col` and `time_col` because they are redundant. `entity_col` can be obtained as `entity_table.pkey_col` and `time_col` can be obtained as `entity_table.time_col`.
- [ ] instead of making a new experiments folder, can you add your scripts to `examples/pred_col`? Also add a few lines of description to `examples/pred_col/README.md` including example commands to run and example outputs.
- [ ] We also need multiple column support. Can you add that along with up-to-date documentation?
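The reviewer's first point, sketched on a toy stand-in class (the attribute names `pkey_col` and `time_col` follow the comment above; the `Table` class here is a simplified placeholder, not relbench's actual one): the two config keys can be derived from the table's own metadata instead of being passed explicitly.

```python
from dataclasses import dataclass

@dataclass
class Table:  # simplified stand-in for a relbench table
    pkey_col: str
    time_col: str

table_dict = {"results": Table(pkey_col="resultId", time_col="date")}

entity_table = "results"
# Instead of separate "entity_col" / "time_col" config keys:
entity_col = table_dict[entity_table].pkey_col
time_col = table_dict[entity_table].time_col
```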
Hi @rishabh-ranjan, thanks for the comments!
- I will update the time and entity columns.
- As for the `experiments` directory, I meant `examples`. I will, however, create a new subdirectory inside it as you suggested.
- For the multiple column support, I recommend we first merge single-column support and then add the multi-column functionality on top of that, since there needs to be some additional discussion on how to implement it (how the loss and metrics will be calculated for multiple columns, especially since one column can be numerical and another categorical). I also believe the GNN has to be adapted so that it can predict multiple columns.
@rishabh-ranjan Hey Rishabh, hope you're doing well! I have fixed a small bug, and the task, as is, is ready to be merged.
@ValterH has QA'd the task, as he is using it for his work. The task should also be merged into main so that he can build on top of it and open his PRs.
Thanks @martinjurkovic ! I am in touch with @ValterH regarding this, and plan to review and merge soon.
Hi @martinjurkovic , I am closing this PR. I have merged @ValterH 's PR (#305) which includes your work. You are listed as a coauthor on this PR (in fact, the majority of commits are yours), so your contribution is tracked by GitHub. Thanks for working on this!