DI-engine
Training gym-hybrid using HPPO
Hi, I am new to DI-engine and trying to train the gym-hybrid ("thomashirtz") agent using HPPO. Could you please explain what the config, entry, and envs folders under DI-engine/dizoo/gym_hybrid/ are for?
- envs: the wrappers that transform the original gym env into DI-engine's format
- config: the configuration file, a nested dict containing the Algorithm + Env settings, plus a default main function to run this config (see the sketch after this list)
- entry: additional main functions for running the corresponding configs
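
To make the config folder more concrete, here is a minimal sketch of what such a nested config dict usually looks like in DI-engine. The key names and values below are illustrative assumptions for the gym-hybrid Moving-v0 task, not the exact shipped HPPO file:

```python
# Illustrative sketch of a DI-engine config (values are placeholders, not the
# official gym_hybrid HPPO settings).
from easydict import EasyDict

main_config = EasyDict(dict(
    exp_name='gym_hybrid_hppo_seed0',
    env=dict(                       # Env part of the nested dict
        env_id='Moving-v0',
        collector_env_num=8,
        evaluator_env_num=5,
        n_evaluator_episode=5,
        stop_value=2,
    ),
    policy=dict(                    # Algorithm part of the nested dict
        cuda=True,
        action_space='hybrid',      # HPPO = PPO with a hybrid action space
        model=dict(
            obs_shape=10,
            action_shape=dict(action_type_shape=3, action_args_shape=2),
            action_space='hybrid',
        ),
        learn=dict(epoch_per_collect=10, batch_size=320, learning_rate=3e-4),
        collect=dict(n_sample=3200),
    ),
))

# Tells DI-engine which registered env / env_manager / policy classes to build.
create_config = EasyDict(dict(
    env=dict(type='gym_hybrid', import_names=['dizoo.gym_hybrid.envs.gym_hybrid_env']),
    env_manager=dict(type='base'),
    policy=dict(type='ppo'),
))
```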
We suggest that you implement your own main function for your needs; you can imitate the following two files:
- `dizoo/gym_hybrid/entry/gym_hybrid_ddpg_main.py` (for the env side)
- `ding/entry/serial_entry_onpolicy.py` (for the HPPO on-policy algorithm)
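
As a concrete starting point, here is a minimal sketch of such a main function, assuming an HPPO config file under dizoo/gym_hybrid/config/ that exposes main_config and create_config (adjust the module and variable names to your local file). Instead of re-implementing the training loop, it reuses DI-engine's generic on-policy pipeline:

```python
# Minimal sketch of a custom HPPO entry for gym-hybrid; the imported config
# module and variable names are assumptions, adjust them to your own file.
from ding.entry import serial_pipeline_onpolicy
from dizoo.gym_hybrid.config.gym_hybrid_hppo_config import main_config, create_config

if __name__ == '__main__':
    # The pipeline builds env, policy, collector, learner and evaluator from the
    # config pair and runs on-policy training until the stop criterion is met.
    serial_pipeline_onpolicy([main_config, create_config], seed=0)
```

If you need to customize the loop itself (for example, logging or evaluation behavior), copy the body of `serial_entry_onpolicy.py` into your own entry file and edit it there, in the same way `gym_hybrid_ddpg_main.py` does for DDPG.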
Could you provide an HPPO main function file? I am new to DI-engine and H-PPO. Thank you very much.