HARK
Consider explicit representation of Model state/action spaces
@llorracc has recommended looking into bellman, a toolkit for model-based reinforcement learning (MBRL), as inspiration for HARK.
What is bellman? It is an implementation of model-based reinforcement learning algorithms that fits well with OpenAI's Gym library, a widely used framework for testing RL algorithms.
How might we start using these tools in HARK?
Macro problems are indeed special cases of Markov decision process (MDP) problems, though the solution methods used in HARK are not currently based on RL.
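To make the "macro problems are MDPs" claim concrete, here is a hedged sketch, not HARK's actual API: a toy consumption-saving problem written in MDP terms (state = market resources, action = consumption share, reward = period utility, transition = budget equation), with one Bellman backup on a discretized grid. All names, grids, and parameter values are illustrative.

```python
import numpy as np

def crra_utility(c, rho=2.0):
    """CRRA period utility -- the MDP's reward function (illustrative)."""
    return c ** (1 - rho) / (1 - rho)

def transition(m, c, R=1.03, y=1.0):
    """Budget equation -- the MDP's transition: next-period resources."""
    return R * (m - c) + y

beta = 0.96                              # discount factor (illustrative)
m_grid = np.linspace(0.1, 10.0, 50)      # discretized state space
shares = np.linspace(0.05, 0.95, 19)     # discretized action space

# Terminal value: consume all resources in the last period.
v_terminal = crra_utility(m_grid)

# One Bellman backup: maximize reward + discounted continuation value.
v = np.empty_like(m_grid)
for i, m in enumerate(m_grid):
    candidates = []
    for s in shares:
        c = s * m
        m_next = transition(m, c)
        v_cont = np.interp(m_next, m_grid, v_terminal)
        candidates.append(crra_utility(c) + beta * v_cont)
    v[i] = max(candidates)
```

An RL algorithm would estimate the same value function by sampling the transition and reward instead of looping over a grid, which is why a shared, solver-agnostic problem representation is plausible.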
We might ask: does OpenAI's Gym contain any tools that could help us represent problems in a way that is agnostic to the solution method?
One promising tool is Gym's implementation of Space, the superclass for the domains of state and control variables. It supports both discrete and continuous spaces.
https://github.com/openai/gym/tree/master/gym/spaces
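To sketch what this could look like in HARK, here is a minimal stand-in (not Gym's actual code) that mirrors the `gym.spaces` interface of `contains` and `sample`. The model declaration at the bottom is purely hypothetical: it shows a consumption-saving model describing its state and control domains without committing to any particular solver.

```python
import random

class Space:
    """Base class: the domain of a state or control variable."""
    def contains(self, x):
        raise NotImplementedError
    def sample(self):
        raise NotImplementedError

class Discrete(Space):
    """Finite set {0, 1, ..., n-1}, e.g. an employment indicator."""
    def __init__(self, n):
        self.n = n
    def contains(self, x):
        return isinstance(x, int) and 0 <= x < self.n
    def sample(self):
        return random.randrange(self.n)

class Box(Space):
    """Closed interval [low, high], e.g. resources or consumption."""
    def __init__(self, low, high):
        self.low, self.high = low, high
    def contains(self, x):
        return self.low <= x <= self.high
    def sample(self):
        return random.uniform(self.low, self.high)

# Hypothetical declaration a HARK model might carry, purely descriptive:
model_spaces = {
    "employed": Discrete(2),   # discrete state
    "m": Box(0.0, 100.0),      # continuous state: market resources
    "c": Box(0.0, 100.0),      # continuous control: consumption
}
```

Because both HARK's grid-based solvers and Gym-style RL agents only need to query membership and sample points, a declaration like this could serve either kind of solution method.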