Gridworld-with-Q-Learning-Reinforcement-Learning-

Jupyter notebook containing a solution to Sutton and Barto's gridworld problem with both a random agent and a Q-learning agent.

Gridworld Reinforcement Learning (Q-Learning)

In this exercise, you will implement the interaction of a reinforcement learning agent with its environment. We will use the gridworld environment from the second lecture. You will find a description of the environment below, along with two pieces of relevant material from the lectures: the agent-environment interface and the Q-learning algorithm.

  1. Create an agent that interacts with this environment by choosing actions randomly.

  2. Create an agent that uses Q-learning. You can use initial Q values of 0, a stochasticity parameter $\epsilon = 0.05$ for the $\epsilon$-greedy policy, and a learning rate $\alpha = 0.1$, but feel free to experiment with other settings of these three parameters. A minimal sketch of such an agent is given after this list.

  3. Plot the mean total reward obtained by each of the two agents across episodes. This is called a learning curve. Run enough episodes for the Q-learning agent to converge to a near-optimal policy.
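
Below is a minimal sketch of the Q-learning agent described in step 2, using a tabular NumPy Q-table. The class and method names (`QLearningAgent`, `choose_action`, `update`) are illustrative and not taken from the notebook; state indexing and the training loop are left to the implementation.

```python
import numpy as np

class QLearningAgent:
    """Tabular Q-learning with an epsilon-greedy behaviour policy."""

    def __init__(self, n_states, n_actions, epsilon=0.05, alpha=0.1, gamma=1.0):
        self.q = np.zeros((n_states, n_actions))  # initial Q values of 0
        self.epsilon = epsilon                    # exploration probability
        self.alpha = alpha                        # learning rate
        self.gamma = gamma                        # no discounting (gamma = 1)

    def choose_action(self, state):
        # Explore with probability epsilon, otherwise act greedily.
        if np.random.rand() < self.epsilon:
            return np.random.randint(self.q.shape[1])
        return int(np.argmax(self.q[state]))

    def update(self, state, action, reward, next_state, terminal):
        # Q-learning target: reward plus the greedy value of the next state.
        target = reward if terminal else reward + self.gamma * np.max(self.q[next_state])
        self.q[state, action] += self.alpha * (target - self.q[state, action])
```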

The environment: Navigation in a gridworld

The agent has four possible actions in each state (grid square): west, north, south, and east. The actions are unreliable. They move the agent in the intended direction with probability 0.8, and with probability 0.2 they move the agent in a randomly chosen other direction. If the direction of movement is blocked, the agent remains in the same grid square. The initial state of the agent is one of the five grid squares at the bottom, selected randomly. The grid squares with the gold and the bomb are terminal states. If the agent finds itself in one of these squares, the episode ends, and a new episode begins with the agent at the initial state.
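
As an illustration of these transition dynamics, here is a sketch of a single position update. Only the five-column bottom row is stated above, so the grid dimensions and the helper name `step_position` are assumptions made for the example.

```python
import numpy as np

ACTIONS = {"west": (0, -1), "north": (-1, 0), "south": (1, 0), "east": (0, 1)}
N_ROWS, N_COLS = 5, 5  # assumed grid size; only the 5-wide bottom row is given above

def step_position(row, col, action):
    # With probability 0.8 move in the intended direction; otherwise
    # move in one of the other three directions, chosen at random.
    if np.random.rand() < 0.8:
        d_row, d_col = ACTIONS[action]
    else:
        other = [a for a in ACTIONS if a != action]
        d_row, d_col = ACTIONS[np.random.choice(other)]
    new_row, new_col = row + d_row, col + d_col
    # A blocked move leaves the agent in the same grid square.
    if not (0 <= new_row < N_ROWS and 0 <= new_col < N_COLS):
        return row, col
    return new_row, new_col
```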

You will use a reinforcement learning algorithm to compute the best policy for finding the gold with as few steps as possible while avoiding the bomb. For this, we will use the following reward function: -1 for each navigation action, an additional +10 for finding the gold, and an additional -10 for hitting the bomb. For example, the immediate reward for transitioning into the square with the gold is -1 + 10 = +9. Do not use discounting (that is, set $\gamma = 1$).
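
The reward function described above can be written as a small helper. The `GOLD` and `BOMB` coordinates below are placeholders, since the actual positions come from the lecture's grid layout, and the function name is hypothetical.

```python
GOLD = (0, 2)  # placeholder coordinates; the real positions come from the lecture's grid
BOMB = (0, 3)  # placeholder coordinates

def reward_and_done(next_state):
    reward = -1                    # every navigation action costs -1
    if next_state == GOLD:
        return reward + 10, True   # -1 + 10 = +9 for reaching the gold
    if next_state == BOMB:
        return reward - 10, True   # -1 - 10 = -11 for hitting the bomb
    return reward, False
```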