Dhyey Thumar
Hi @Pimool, check the ml-agents version (the environment given in this repo was built for release_1). Also, it seems your training exited with a critical error, as suggested by the SIGABRT...
Then most probably the environment is exiting with an error; it's possible that the Linux executable is not supported on GPU. If I remember correctly, on Colab this env works...
Hi @Pimool, try this command: `!mlagents-learn config.yaml --run-id=$run_id --env=$env_name --no-graphics`. I guess on your server it's trying to render the environment.
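(For context, in a Colab/Jupyter cell the `$run_id` and `$env_name` references are substituted from Python variables, so a complete cell might look like the sketch below — the values are placeholders, not from this repo.)

```python
# Hypothetical values for illustration; point env_name at your own Linux build.
run_id = "test_run_01"
env_name = "./envs/CustomEnv"

# --no-graphics skips rendering, which a headless server cannot do.
!mlagents-learn config.yaml --run-id=$run_id --env=$env_name --no-graphics
```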
Hi @M4cs, I have added a new widget (YouTube Stats Card). Check out PR #11.
Hi @BazilaAfridi, so in this case you can convert the .pb file (TensorFlow frozen graph) to the HDF5 format used by Keras. And regarding the project files, it's almost been...
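(Rough sketch of the first step, assuming a TF 2.x install: load the frozen graph, then map its weights onto an equivalent Keras model before saving it as .h5. The path below is a placeholder, not a file from this project.)

```python
import tensorflow as tf

def load_frozen_graph(pb_path="frozen_model.pb"):
    # Read the serialized GraphDef from the .pb file.
    with tf.io.gfile.GFile(pb_path, "rb") as f:
        graph_def = tf.compat.v1.GraphDef()
        graph_def.ParseFromString(f.read())
    # Import it into a fresh graph so its tensors can be inspected.
    with tf.Graph().as_default() as graph:
        tf.compat.v1.import_graph_def(graph_def, name="")
    return graph

graph = load_frozen_graph()
# Inspect the operations to find the weight tensors you need to transfer
# into a Keras model before calling model.save("model.h5").
print([op.name for op in graph.get_operations()][:10])
```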
Can you tell me which TensorFlow version you are using? I think this is happening because the calculated losses are not updating the Actor network. I am also facing the same issue...
I haven't applied it, because I didn't find any concrete solution with GradientTape with respect to the implementation of the PPO algorithm. So currently I am using ML-Agents to train the...
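(For anyone hitting the same thing, here is a minimal sketch of a PPO-clip actor update under `tf.GradientTape`, assuming TF 2.x and a discrete-action actor; `actor`, `states`, `actions`, `advantages`, and `old_log_probs` are placeholders, not code from this repo. If `tape.gradient` returns `None`s here, the loss is disconnected from the actor's variables, which matches the symptom above.)

```python
import tensorflow as tf

def ppo_actor_update(actor, optimizer, states, actions, advantages,
                     old_log_probs, clip_ratio=0.2):
    with tf.GradientTape() as tape:
        # The actor is assumed to output logits for a categorical policy.
        logits = actor(states, training=True)
        log_probs = tf.nn.log_softmax(logits)
        new_log_probs = tf.reduce_sum(
            tf.one_hot(actions, logits.shape[-1]) * log_probs, axis=-1)

        # Probability ratio between the new and old policies.
        ratio = tf.exp(new_log_probs - old_log_probs)
        clipped = tf.clip_by_value(ratio, 1 - clip_ratio, 1 + clip_ratio)
        # Negative sign because we minimize; PPO maximizes the surrogate.
        loss = -tf.reduce_mean(
            tf.minimum(ratio * advantages, clipped * advantages))

    grads = tape.gradient(loss, actor.trainable_variables)
    optimizer.apply_gradients(zip(grads, actor.trainable_variables))
    return loss
```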
Another required enhancement is to handle multiple agents in a single environment and to train parallel environments.
Great to hear that it's working with gym environments. Can you tell me more about the Unity environment? For example, is it multi-agent, or are you trying to train multiple...
Currently, this repo doesn't support multi-agent environments, so I think this might be an issue. I will create a TODO section in the README mentioning all the enhancements required for...
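(As a rough sketch of what multi-agent support would have to handle, here is how several agents surface in one environment through the release_1 low-level API; the file path is a placeholder and this isn't code from the repo.)

```python
from mlagents_envs.environment import UnityEnvironment

# Hypothetical build path; replace with your own Linux executable.
env = UnityEnvironment(file_name="./envs/CustomEnv", no_graphics=True)
env.reset()

behavior_name = env.get_behavior_names()[0]
decision_steps, terminal_steps = env.get_steps(behavior_name)

# Each agent that requests a decision gets its own row of observations,
# so a multi-agent-aware trainer has to batch actions per agent id.
print("agents requesting a decision:", list(decision_steps.agent_id))
env.close()
```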