Isaac Kargar
Hi, I'm trying to use another scenario from scenario_runner with carla_ad_demo. I put the scenario file `LaneChangeSimple.xosc` in the config folder next to the `FollowLeadingVehicle.xosc` file and changed both...
Hello, Is it possible to send string data or key-value (JSON-like) data from the board to the Helium network? I see the `uint8_t appData[LORAWAN_APP_DATA_MAX_SIZE];` buffer in the library and I tried...
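The library's `appData` buffer is raw bytes, so string or JSON data can in principle be sent by serializing it to UTF-8 and copying the bytes in, as long as the result fits the payload size limit. A minimal Python sketch of that packing idea (the size constant below is an illustrative stand-in for `LORAWAN_APP_DATA_MAX_SIZE`, not taken from the library):

```python
import json

# Illustrative stand-in for LORAWAN_APP_DATA_MAX_SIZE; real limits depend
# on the region plan and data rate (e.g. 51 bytes at DR0 in US915).
LORAWAN_APP_DATA_MAX_SIZE = 51

def pack_payload(fields: dict) -> bytes:
    """Serialize key-value data to compact UTF-8 JSON bytes for a LoRaWAN uplink."""
    payload = json.dumps(fields, separators=(",", ":")).encode("utf-8")
    if len(payload) > LORAWAN_APP_DATA_MAX_SIZE:
        raise ValueError(
            f"payload is {len(payload)} bytes, limit is {LORAWAN_APP_DATA_MAX_SIZE}"
        )
    return payload

data = pack_payload({"t": 21.5, "h": 40})
print(data)  # b'{"t":21.5,"h":40}'
```

On the device side the same bytes would be `memcpy`'d into `appData` and the length field set accordingly; JSON is convenient but verbose, so compact binary encodings are often preferred when the payload budget is tight.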
Hello, Is it possible to use this library with a custom board based on an ESP32 chip? Thank you
**Is your feature request related to a problem? Please describe.** Model-based offline RL algorithms that can handle image inputs are necessary for some environments. **Describe the solution you'd...
### Describe the Issue I want to run a trained TensorFlow model on a 32-bit ARM Cortex-M3. Is that possible? If so, how can I do it? ### Steps to Reproduce...
Hi, Thank you for releasing the code. I have some questions about the 'done' condition in the cooperative navigation environment. I don't see any done function for the env. I...
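In many MPE-style cooperative navigation implementations the environment itself never emits a terminal signal; episodes are simply cut off after a fixed horizon by the training loop. A minimal sketch of that time-limit convention (all names here are illustrative, not taken from the repo in question):

```python
class TimeLimitWrapper:
    """Ends an episode after max_steps, for envs that never set done themselves."""

    def __init__(self, env, max_steps=25):
        self.env = env
        self.max_steps = max_steps
        self.steps = 0

    def reset(self):
        self.steps = 0
        return self.env.reset()

    def step(self, actions):
        obs, rewards = self.env.step(actions)
        self.steps += 1
        done = self.steps >= self.max_steps  # horizon-based termination only
        return obs, rewards, done


class _DummyEnv:
    """Trivial stand-in env for demonstration."""

    def reset(self):
        return [0.0]

    def step(self, actions):
        return [0.0], [0.0]


env = TimeLimitWrapper(_DummyEnv(), max_steps=3)
env.reset()
done, n = False, 0
while not done:
    _, _, done = env.step([0])
    n += 1
print(n)  # 3
```

If the environment really has no success-based termination, the 'done' a learner sees is purely this time limit, which matters for bootstrapping: a horizon cutoff should usually not be treated as a true terminal state when computing value targets.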
Hi, Is there any subset of the data, or a smaller version? The full dataset is more than 40 GB.
Hi, Is it possible to run the simulator faster than real time for training the RL agent? Running in real time is too slow.
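CARLA can be decoupled from wall-clock time by enabling synchronous mode with a fixed simulation timestep, so each tick advances simulation time as fast as the machine allows rather than in real time. A configuration sketch using the standard CARLA Python API (the host, port, and 0.05 s timestep are example values):

```python
import carla

client = carla.Client("localhost", 2000)  # example host/port
client.set_timeout(10.0)
world = client.get_world()

settings = world.get_settings()
settings.synchronous_mode = True      # server waits for a tick from the client
settings.fixed_delta_seconds = 0.05   # fixed 50 ms simulation step
world.apply_settings(settings)

# Each tick advances simulation time by 0.05 s, as fast as hardware permits.
for _ in range(1000):
    world.tick()
```

Synchronous mode also keeps sensor data and agent actions aligned per step, which is usually what an RL training loop needs anyway.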
- Add DQN for non-frame state spaces - Add loading and evaluation code for the trained model
I'm trying to use PPO_LSTM and R2D1 with a multi-agent environment. I was checking the other related [issue](https://github.com/astooke/rlpyt/issues/14), but it seems that one was more about DDPG than recurrent models...