adding new layers and supporting multiple pre-configured networks in net config

Open dlacombejr opened this issue 6 years ago • 4 comments

Major changes include:

  1. Adding support for additional layers such as Batch Normalization and ReLU
  2. Support for multiple pre-configured networks in net_config.json. For example, net_config.json can now look like the following:
{
  "networks": [
    {
      "copies": 1,
      "description": "default network with online and slow train",
      "config": {
        "layers":
        [
          {"filter_shape": [1, 2], "filter_number": 3, "type": "ConvLayer"},
          {"filter_number":10, "type": "EIIE_Dense", "regularizer": "L2", "weight_decay": 5e-9},
          {"type": "EIIE_Output_WithW","regularizer": "L2", "weight_decay": 5e-8}
        ],
        "training":{
          "steps":40000,
          "learning_rate":0.00028,
          "batch_size":109,
          "buffer_biased":5e-5,
          "snap_shot":false,
          "fast_train":false,
          "training_method":"Adam",
          "loss_function":"loss_function6"
        },

        "input":{
          "window_size":31,
          "coin_number":11,
          "global_period":1800,
          "feature_number":3,
          "test_portion":0.08,
          "online":true,
          "start_date":"2018/01/01",
          "end_date":"2018/02/18",
          "volume_average_days":30
        },

        "trading":{
          "trading_consumption":0.0025,
          "rolling_training_steps":85,
          "learning_rate":0.00028,
          "buffer_biased":5e-5
        }
      }
    },
    {
      <additional networks...>
    }
  ]
}

The original net_config.json is still supported.
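
For illustration, a minimal Python sketch of how a loader might support both formats. The key names ("networks", "copies", "config") follow the example above; the actual PR code may differ:

import json

def load_networks(path="net_config.json"):
    # Return one config dict per network, from either file format.
    with open(path) as f:
        raw = json.load(f)
    if "networks" in raw:
        # New format: expand each entry by its "copies" count
        # (shared references are fine for read-only use).
        configs = []
        for net in raw["networks"]:
            configs.extend([net["config"]] * net.get("copies", 1))
        return configs
    # Original format: the file itself is a single network config.
    return [raw]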

dlacombejr avatar Feb 19 '18 20:02 dlacombejr

@dlacombejr Hi! Thanks for the PR. I don't quite understand why training, input, and trading are included in the networks of the new config. I think only layers needs to change when testing which network architecture is better.

dexhunter avatar Feb 20 '18 03:02 dexhunter

@DexHunter I guess my intention was really to allow changes to any aspect of the configuration, not just the network architecture. This approach leads to lengthy configuration files, but it gives full access to any configuration change (undesired configurations can be commented out, because I load them using commentjson, which I've added to requirements.txt). I suppose the configuration could be set up to iterate over lists of training, input, trading, and layers dictionaries as in a grid search, but that could blow up pretty quickly, and you may only want certain combinations. Even though the current approach is more error-prone (e.g., I might want to change only layers between two configs, but training could accidentally differ), I think it is still preferable because it gives full control. If it makes more sense, we can rename the networks key to configs or something.
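
As a quick sketch of that commenting-out workflow (commentjson accepts // and # comments that the standard json module rejects; the file contents below are illustrative only):

import commentjson

raw = """
{
  "networks": [
    {"copies": 1, "description": "baseline", "config": {}}
    // The comma and the disabled variant live inside the comment,
    // so the remaining JSON stays valid:
    // , {"copies": 1, "description": "disabled variant", "config": {}}
  ]
}
"""
config = commentjson.loads(raw)
print(len(config["networks"]))  # 1 -- only the uncommented network loads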

dlacombejr avatar Feb 20 '18 18:02 dlacombejr

Thanks for your contribution again!

Adding support for additional layers such as Batch Normalization and ReLU

Yes, this makes sense.

Support for multiple pre-configured networks in net_config.json. For example, net_config.json can now look like the following:

This implementation conflicts with our automatic hyper-parameter optimization architecture, which is not currently open-sourced but may be released in the future. My suggestion is to configure the search space in a separate file, or perhaps inside generate.py, as a temporary grid-search solution.
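
A sketch of what that temporary solution might look like: the search space lives in a separate structure and a small helper (e.g. in generate.py) expands it. The dotted keys and section names are illustrative, mirroring the config above; none of this is PGPortfolio code:

import itertools
import json

# Hypothetical search space; each dotted key overrides one field of a
# base config shaped like the example above.
SEARCH_SPACE = {
    "training.learning_rate": [0.00028, 0.001],
    "training.batch_size": [109, 128],
    "input.window_size": [31, 50],
}

def grid_points(space):
    # Yield one {dotted_key: value} dict per point in the full grid.
    keys = list(space)
    for values in itertools.product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))

def apply_overrides(base_config, overrides):
    # Deep-copy the base config and set each overridden field.
    config = json.loads(json.dumps(base_config))
    for dotted, value in overrides.items():
        section, key = dotted.split(".", 1)
        config[section][key] = value
    return config

Note that the grid grows multiplicatively (2 x 2 x 2 = 8 configs here), which is exactly the blow-up concern raised above.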

And it would be nice if you could push to the new "dev" branch instead of the master branch.

ZhengyaoJiang avatar Feb 23 '18 10:02 ZhengyaoJiang

@ZhengyaoJiang, will you be releasing a hyperparameter optimiser? I've seen Bayesian optimizers work better than grid search or random search for this task.
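
For context, Bayesian optimization fits a surrogate model to past trials and proposes the next hyper-parameters where improvement looks most likely. A minimal sketch using scikit-optimize, where the objective is a toy stand-in for actually training and scoring a network:

from skopt import gp_minimize
from skopt.space import Integer, Real

# Hypothetical search space over two of the training parameters above.
space = [
    Real(1e-5, 1e-2, prior="log-uniform", name="learning_rate"),
    Integer(16, 256, name="batch_size"),
]

def train_and_evaluate(params):
    learning_rate, batch_size = params
    # Placeholder objective: substitute real training here and return a
    # loss to minimize (e.g. negative final portfolio value).
    return (learning_rate - 28e-5) ** 2 + ((batch_size - 109) / 256) ** 2

result = gp_minimize(train_and_evaluate, space, n_calls=25, random_state=0)
print(result.x, result.fun)  # best [learning_rate, batch_size] and loss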

sam-moreton avatar May 16 '18 21:05 sam-moreton