DQN mnist & mountain car performance

Open pluebcke opened this issue 4 years ago • 16 comments

Hi,

While working on a PyTorch DQN agent for BSuite experiments, I noticed quite poor results on the mnist and mountain_car experiments. I see that a similar question was addressed here, but the thread was closed.

To further investigate, I created a new conda environment, downloaded and installed a fresh copy of BSuite, and ran the DQN agent from the baselines. The only settings I changed were "bsuite_id" to "SWEEP" and the save path.
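
For reference, setting bsuite_id to 'SWEEP' makes run.py loop over every environment in the sweep. A minimal sketch of the equivalent loop (the save path is a placeholder; note that run.py builds its agent from command-line flags rather than from dqn.default_agent()):

```python
import bsuite
from bsuite import sweep
from bsuite.baselines import experiment
from bsuite.baselines.tf import dqn

for bsuite_id in sweep.SWEEP:
  # Load the environment and record results as CSV under the save path.
  env = bsuite.load_and_record(bsuite_id, save_path='/tmp/bsuite', overwrite=True)
  # Stand-in for run.py's flag-configured agent; default_agent() is the stock
  # configuration (this distinction comes up later in the thread).
  agent = dqn.default_agent(env.observation_spec(), env.action_spec())
  experiment.run(agent, env, num_episodes=env.bsuite_num_episodes)
```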

When you compare the results from both agents with the bar plot on page 16 of the BSuite manuscript, you can see that both agents perform worse on mnist and mountain_car, and better on catch:

[bar plot of bsuite scores for both agents]
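
The bar plot itself was produced with bsuite's analysis utilities, roughly as follows (a sketch, assuming CSV logging to a placeholder path):

```python
from bsuite.logging import csv_load
from bsuite.experiments import summary_analysis

# Load all CSV results from the sweep, compute bsuite scores, and plot them.
df, sweep_vars = csv_load.load_bsuite({'dqn': '/tmp/bsuite'})
bsuite_score = summary_analysis.bsuite_score(df, sweep_vars)
summary_analysis.bsuite_bar_plot(bsuite_score, sweep_vars)
```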

Were there any changes on the environments that I missed? The DQN agent from the manuscript did use the default parameters from the baseline directory, correct?

Thanks, Peter

pluebcke avatar Mar 25 '20 19:03 pluebcke

Hi Peter!

Thanks for raising this... I think we might have seen some slippage in agent performance.

I'm not sure if this has come from updates to:

  • TensorFlow 1 -> 2
  • Different parameter settings
  • Changes to random seed

My suspicion is that some small details in the TF1 -> TF2 migration changed some scores (the agents aren't exactly the same). We will look into this and then re-run the baselines with updated numbers.

Many thanks, Ian

iosband avatar Apr 01 '20 14:04 iosband

Hello again!

I have just run the agents checked in at HEAD and I did not see your observed scoring...

[bar plot of agent scores at HEAD]

We may need to add some more continuous testing, but the mnist scores you report in particular seem "off" for the DQN implementation.

Can you confirm this is still an issue for you? Is this poor performance for your implementation of DQN, or the baseline implementation we provide?

iosband avatar Aug 10 '20 15:08 iosband

I have a similar observation concerning MountainCar, though related to the actor-critic algorithm. To me it seems like there is a major difference between the results reported in the paper (close to 1) and the ones in this thread (close to 0). I have also tried running actor_critic_rnn on mountain_car, and it does not seem to learn with the default hyperparameters.
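
For reference, a minimal sketch of the failing run (the paths and environment id are examples):

```python
import bsuite
from bsuite.baselines import experiment
from bsuite.baselines.tf import actor_critic_rnn

# Actor-critic RNN with default hyperparameters on a single mountain_car instance.
env = bsuite.load_and_record('mountain_car/0', save_path='/tmp/bsuite', overwrite=True)
agent = actor_critic_rnn.default_agent(env.observation_spec(), env.action_spec())
experiment.run(agent, env, num_episodes=env.bsuite_num_episodes)
```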

mklissa avatar Aug 17 '20 18:08 mklissa

Yes @mklissa - I see that difference above.

There have been several moving pieces:

  • Changes to the bsuite environment (at release, MountainCar had gravity pointing in the wrong direction)
  • Changes to agent (migrate from TF1 -> TF2/JAX)

However, I think the best approach is to go from what is at HEAD and start a new issue to update paper/reference colabs to incorporate this bug fix.

iosband avatar Aug 20 '20 09:08 iosband

Dear Ian,

Thanks for looking into this! Back in March, I observed poor performance on mnist with both the baseline implementation and my own implementation of DQN.

Given that mnist seems to work perfectly fine for you, I assume there must be some problem on my side. I will set up a system from scratch and run the baseline implementation of dqn again. It might take a while though until I find time to do that.

Best regards,

Peter

pluebcke avatar Aug 30 '20 18:08 pluebcke

Hi, I finally found some time to look into the issue. On my laptop, the performance of the baselines TensorFlow DQN agent is still quite bad (a score somewhere around 0.25, as in the bar plots above).

I used a fresh install of Pop!_OS 20.04 (an Ubuntu-based distribution) and then performed as few steps as possible to run the agent:

  1. Installed Anaconda
  2. Created a conda environment with Python 3.7
  3. Followed the install steps from the bsuite GitHub page:
  • pip install --upgrade pip setuptools
  • pip install bsuite
  • pip install bsuite[baselines]
  4. Opened bsuite/bsuite/baselines/tf/dqn/run.py, changed the bsuite_id to 'SWEEP' and set the 'verbose' flag to False
  5. Ran python run.py

Hope this helps.

Best regards, Peter

pluebcke avatar Sep 29 '20 12:09 pluebcke

Ah... OK, I think in order to get the claimed performance you need to run dqn.default_agent().

I can see that this is a bit confusing, but we wanted to expose the flags as an easy way for people to tinker! If you instead go to baselines/tf/run.py, you should be able to get the same behaviour... is that right?
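
Something like this (a sketch; 'mnist/0' and the save path are just examples):

```python
import bsuite
from bsuite.baselines import experiment
from bsuite.baselines.tf import dqn

env = bsuite.load_and_record('mnist/0', save_path='/tmp/bsuite', overwrite=True)
# default_agent() is the configuration behind the claimed performance;
# the flag-built agent in dqn/run.py may differ from it.
agent = dqn.default_agent(env.observation_spec(), env.action_spec())
experiment.run(agent, env, num_episodes=env.bsuite_num_episodes)
```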

BTW... do you think we should instead remove the flag options and avoid this kind of confusion?

iosband avatar Sep 29 '20 12:09 iosband

I would keep the flag options, but maybe make their defaults match the values used by the default agent.

As you suggested, I replaced the flag-configured agent with dqn.default_agent() in run.py and ran the experiments again. Unfortunately, there was no improvement on the mnist experiments.

@mklissa you said you observed something similar on MountainCar. Did the mnist experiments work for you? If I'm the only one experiencing this problem, then there might just be some issue on my side.

Best regards, Peter

pluebcke avatar Oct 10 '20 12:10 pluebcke

Hey there, I'm writing to report that I'm experiencing the same problem as @pluebcke on MNIST. I couldn't replicate the good MNIST results reported in the paper. I also saw poor performance (a score of 0.2-0.26 at most) using PPO and DQN agents from an external library (stable-baselines), and tried different hyperparameters, numbers of layers/neurons, and activation functions, with no effect. I also checked the MNIST env implementation offered here, and it seemed OK to me.

Today I created a new virtual env with the latest BSuite version and the baselines, ran the 20 seeds twice, and the baseline DQN agent also scored 0.23. The same happens with the noise and scale variants.
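
For completeness, the ids I ran can be listed from bsuite's sweep module; if I read it correctly, the per-experiment tuples include the noise and scale variants:

```python
from bsuite import sweep

# Each experiment exposes a tuple of bsuite_ids, one per seed/setting.
print(sweep.MNIST)
print(sweep.MNIST_NOISE)
print(sweep.MNIST_SCALE)
```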

jbarsce avatar Nov 18 '20 21:11 jbarsce

Hi @jbarsce - I'm not sure I understand the question.

So, are you saying that: (a) the TF DQN checked into bsuite.baselines is not solving the MNIST bandit task for you? (b) another agent is unable to solve the MNIST task?

We have some tools for testing this internally within Google/DeepMind... and based on that I'm confident that the bsuite/baselines/jax/dqn and bsuite/baselines/tf/dqn do reproduce the performance.

However... we clearly need to work out a way to share these tests/reproducibility/installation instructions so that this confusion does not arise.

iosband avatar Nov 19 '20 10:11 iosband

[plot of nightly-run scores for the TF baselines]

For reference, here is a record of the nightly runs for the TF baselines. You can see that some of the experiments are a little noisy, but the TF DQN consistently reproduces the MNIST results above.

iosband avatar Nov 19 '20 12:11 iosband

Hi Ian, thanks for the quick reply! Yes, I ran the BSuite experiments with another DQN agent and noticed that, while the other envs performed similarly to the accompanying paper, MNIST was the only one that underperformed.

As this external agent had several variations, I tried to replicate the results with the DQN agent from this repo, trying both tf and jax and isolating them in a new virtual environment. In case they are of any help, these are the steps I followed (taken from the jax repo and from here):

  1. Created a conda environment with python==3.6
  2. pip install --upgrade pip setuptools
  3. pip install bsuite[baselines]

For TensorFlow 2.1, I ran the experiments with:

  python bsuite/bsuite/baselines/tf/dqn/run.py --bsuite_id=MNIST

For JAX:

  1. pip install git+https://github.com/deepmind/dm-haiku
  2. pip install --upgrade jax jaxlib
  3. pip install git+git://github.com/deepmind/optax.git
  4. pip install git+git://github.com/deepmind/rlax.git
  5. python bsuite/bsuite/baselines/jax/dqn/run.py --bsuite_id=MNIST

Environment: Ubuntu 18.04 bionic

Please let me know if you need any other information. Finally, thanks for this great repository!

Juan

jbarsce avatar Nov 19 '20 20:11 jbarsce

Just a wild guess: maybe something went wrong with the download of the input MNIST dataset for Juan and me?
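
If it helps, here is a quick sanity check of the downloaded data. I'm not certain which loader bsuite uses internally, so this sketch assumes the standard Keras MNIST loader:

```python
import numpy as np
from tensorflow.keras.datasets import mnist

# Re-download (or read the cached copy of) MNIST and eyeball the arrays.
(train_x, train_y), _ = mnist.load_data()
print(train_x.shape, train_x.dtype)  # expected: (60000, 28, 28) uint8
print(train_x.min(), train_x.max())  # expected: 0 255
print(np.unique(train_y))            # expected: digits 0..9
```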

pluebcke avatar Nov 20 '20 11:11 pluebcke

Yes, interesting... something is getting lost between the version that is checked into google3 and the settings you are running.

@yotam @aslanides and I will have a look into this...

Going to keep this open for now and try to reproduce this...

iosband avatar Nov 20 '20 15:11 iosband

Repro in ~10 lines (excluding imports): https://colab.research.google.com/drive/1XtTv-p2bXfvMBT_77cWjWRHPXIvimWlO?usp=sharing

braham-snyder avatar Aug 14 '22 13:08 braham-snyder