
Results 2 comments of janetwise

What is the recommended technical approach for extending Dreamer to multiple agents? Would the approach of wrapping PPO for multiple agents apply in a similar way?

Maybe try increasing batch_size during training. I used 128, as suggested in another issue, and got 5%+ higher accuracy. You could try 32, 64... (the default is 16) based on...
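A minimal sketch of the suggested sweep: starting from the default of 16 and doubling up to the 128 that worked here. The function name `candidate_batch_sizes` is purely illustrative, not from any specific codebase; in practice the upper limit is bounded by GPU memory.

```python
# Sketch: enumerate batch sizes to try, doubling from the default (16)
# up to a chosen limit (128 gave the reported accuracy gain).
# `candidate_batch_sizes` is a hypothetical helper, not a real API.
DEFAULT_BATCH_SIZE = 16


def candidate_batch_sizes(default=DEFAULT_BATCH_SIZE, limit=128):
    """Return batch sizes to sweep, doubling from `default` up to `limit`."""
    sizes = []
    size = default
    while size <= limit:
        sizes.append(size)
        size *= 2
    return sizes


print(candidate_batch_sizes())  # [16, 32, 64, 128]
```

Each candidate would then be passed to the training run's batch_size setting and compared on held-out accuracy.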