
Arguments `lr_decay` and `lr_decay_steps` are not being used in MLP Model

igor17400 opened this issue 2 years ago • 3 comments

🐛 Bug Description

The MLP implementation (`DNNModelPytorch`) initializes several variables, such as `loss`, `lr`, `lr_decay`, `lr_decay_steps`, and `optimizer`. However, while `lr_decay` and `lr_decay_steps` are indeed initialized, they are never used at any point in the code.

To Reproduce

Steps to reproduce the behavior:

  1. In the file workflow_config_mlp_Alpha360, set the variables `lr_decay` and `lr_decay_steps` to unexpected values such as `abc` and `def`, respectively, as in the example below:
model:
        class: DNNModelPytorch
        module_path: qlib.contrib.model.pytorch_nn
        kwargs:
            loss: mse
            lr: 0.002
            lr_decay: abc
            lr_decay_steps: def
            optimizer: adam
            max_steps: 8000
            batch_size: 4096
            GPU: 0
            pt_model_kwargs:
              input_dim: 360
  2. Execute `qrun examples/benchmarks/MLP/workflow_config_mlp_Alpha360_br.yaml`.
  3. The run completes successfully, which is awkward behavior since `lr_decay` and `lr_decay_steps` should be numbers. NOTE: the problem is not that these variables are not type-checked before initialization, but that they are never referenced anywhere in the code.

Expected Behavior

If the variables `lr_decay` and `lr_decay_steps` were actually used and the user passed invalid values, the run should throw an error.
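For context, had the two parameters been used, their expected effect is a stepwise learning-rate schedule. A minimal sketch (the helper name is hypothetical, not qlib code):

```python
def decayed_lr(base_lr: float, lr_decay: float, lr_decay_steps: int, step: int) -> float:
    """Learning rate after `step` steps: multiplied by `lr_decay`
    once every `lr_decay_steps` steps (StepLR-style decay)."""
    return base_lr * lr_decay ** (step // lr_decay_steps)

print(decayed_lr(0.002, 0.96, 100, 0))    # base LR, no decay yet
print(decayed_lr(0.002, 0.96, 100, 250))  # decayed twice: 0.002 * 0.96 ** 2
```

With this kind of use, non-numeric values such as `abc` would raise a `TypeError` as soon as the schedule is evaluated, which is exactly the failure this issue expects.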

Screenshot

No need.

Environment

Note: users can run `cd scripts && python collect_info.py all` under the project directory to get system information and paste it here directly.

Linux
x86_64
Linux-5.13.0-39-generic-x86_64-with-glibc2.17
#44~20.04.1-Ubuntu SMP Thu Mar 24 16:43:35 UTC 2022

Python version: 3.8.13 (default, Mar 28 2022, 11:38:47)  [GCC 7.5.0]

Qlib version: 0.8.4.99
numpy==1.22.3
pandas==1.4.2
scipy==1.8.0
requests==2.27.1
sacred==0.8.2
python-socketio==5.5.2
redis==4.2.2
python-redis-lock==3.7.0
schedule==1.1.0
cvxpy==1.2.0
hyperopt==0.1.2
fire==0.4.0
statsmodels==0.13.2
xlrd==2.0.1
plotly==5.7.0
matplotlib==3.5.1
tables==3.7.0
pyyaml==6.0
mlflow==1.24.0
tqdm==4.64.0
loguru==0.6.0
lightgbm==3.3.2
tornado==6.1
joblib==1.1.0
ruamel.yaml==0.17.21

Additional Notes

No need.

igor17400 avatar Apr 15 '22 11:04 igor17400

Thanks for reporting the bug. Contributions are welcome to enable DNNModelPytorch to leverage these parameters.

you-n-g avatar May 06 '22 09:05 you-n-g

@you-n-g I can fix this error. But do you know what `lr_decay` and `lr_decay_steps` mean?

igor17400 avatar May 10 '22 20:05 igor17400

@igor17400 Sorry for the late response.

Please refer to the PyTorch docs: https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.StepLR.html#torch.optim.lr_scheduler.StepLR. `lr_decay` corresponds to `gamma` and `lr_decay_steps` corresponds to `step_size`.
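Mapping the issue's parameters onto `StepLR` as described above, in plain PyTorch (a minimal sketch, not qlib's actual training loop):

```python
import torch
from torch.optim.lr_scheduler import StepLR

lr = 0.002
lr_decay = 0.96        # StepLR's gamma: multiplicative decay factor
lr_decay_steps = 100   # StepLR's step_size: decay every 100 scheduler steps

param = torch.nn.Parameter(torch.zeros(1))
optimizer = torch.optim.Adam([param], lr=lr)
scheduler = StepLR(optimizer, step_size=lr_decay_steps, gamma=lr_decay)

for _ in range(200):
    optimizer.step()
    scheduler.step()

# After 200 scheduler steps the LR has been multiplied by gamma twice
print(optimizer.param_groups[0]["lr"])  # 0.002 * 0.96 ** 2
```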

Because we support a more flexible way to pass a scheduler (via a callable function), I think we can just delete the two parameters. https://github.com/microsoft/qlib/blob/main/qlib/contrib/model/pytorch_nn.py#L175
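The callable style mentioned here can be illustrated as follows (the factory name `scheduler_fn` is an assumption for illustration, not necessarily qlib's actual parameter name; see the linked source):

```python
import torch
from torch.optim.lr_scheduler import StepLR

# Hypothetical factory: the caller supplies a function that builds any
# scheduler from the optimizer, replacing fixed lr_decay / lr_decay_steps.
def scheduler_fn(optimizer):
    return StepLR(optimizer, step_size=100, gamma=0.96)

param = torch.nn.Parameter(torch.zeros(1))
optimizer = torch.optim.Adam([param], lr=0.002)
scheduler = scheduler_fn(optimizer)
print(type(scheduler).__name__)  # StepLR
```

This keeps the model code scheduler-agnostic: users who want `StepLR` can still get it, and others can pass any scheduler PyTorch provides.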

you-n-g avatar Jun 06 '22 11:06 you-n-g