Dohyeong Kim
@ndurumo254 Yes, the code location looks correct now. However, you need to change the argument name to match the [documentation of the actual function](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.linalg.matrix_power.html). The purpose of the...
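For reference, a minimal sketch of calling the JAX function directly, using the argument names from the linked documentation (`a` for the matrix and `n` for the integer exponent):

```python
import jax.numpy as jnp

# jax.numpy.linalg.matrix_power takes the matrix as `a` and the exponent as `n`.
a = jnp.array([[2.0, 0.0],
               [0.0, 3.0]])
result = jnp.linalg.matrix_power(a, 3)  # equivalent to a @ a @ a
print(result)
```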
@ndurumo254 Hello. Actually, you do not need to worry about that error. You should run pytest for the test_jax_numpy_matrix_power function. How are you running the test now? See the...
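If it helps, a hedged example of selecting that test by name with pytest's `-k` filter; the exact test file path depends on the repository layout, so it is not spelled out here:

```python
# Equivalent to running `pytest -k test_jax_numpy_matrix_power -v` from the shell;
# -k selects only the tests whose names match the given expression.
import pytest

pytest.main(["-k", "test_jax_numpy_matrix_power", "-v"])
```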
@skbly7 Thank you very much, skbly7. However, my submission stops again around the first stage. I also checked all of my code using './utility/docker_evaluation_locally.sh'. As you know,...
@holger-m I am trying to train the explore_goal_locations_small map of DMLab using the default parameters of IMPALA. I can see the maximum reward reach around 200. However, it then collapses to low...
@holger-m I found that actor.py has some problems with the Atari and DMLab environments. It currently only works well with the Gfootball environment. I can train Pong-v0 using a custom actor.py file like the one below...
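The attached file is truncated above. Purely as an illustration, a minimal random-action loop for Pong-v0 with the classic Gym API; this is a hypothetical sketch, not the actual custom actor.py, which wires the environment into the IMPALA actor instead:

```python
import gym

# Hypothetical sketch: drive Pong-v0 with the classic Gym step/reset API.
env = gym.make("Pong-v0")
obs = env.reset()
done = False
total_reward = 0.0
while not done:
    action = env.action_space.sample()          # placeholder policy
    obs, reward, done, info = env.step(action)  # old 4-tuple Gym API
    total_reward += reward
print("episode reward:", total_reward)
```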
@tkoeppe Thank you for the response. I can make a random maze from text. I am wondering how I can add components for a CTF game, such as info_player_intermission, team_ctf_blueflag,...
@wezardlza I have the same issue. You need to reshape the old_logvars value after the 'old_means, old_logvars = self.policy(observes)' line. You can do that by adding the line below: `old_logvars =...
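The line above is cut off; the idea is roughly the following (a sketch only, assuming old_logvars comes back as a flat NumPy array and needs an explicit leading batch dimension so it lines up with old_means):

```python
import numpy as np

# Sketch only: stand-in for the log-variance vector returned by self.policy(observes).
old_logvars = np.zeros(4)
old_logvars = np.reshape(old_logvars, (1, -1))  # add a leading batch dimension
print(old_logvars.shape)                        # (1, 4)
```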
@Charlulote Same question here. I cannot find the patch partition part in the code, even though that part is in the paper. I am not sure the Style Transfer can be...
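For what it's worth, a patch partition is usually just a reshape. Below is a hedged NumPy sketch of the common formulation; the patch size and tensor layout are assumptions, not taken from this repository:

```python
import numpy as np

def patch_partition(x, patch_size=4):
    """Split an (H, W, C) image into non-overlapping flattened patches."""
    h, w, c = x.shape
    x = x.reshape(h // patch_size, patch_size, w // patch_size, patch_size, c)
    x = x.transpose(0, 2, 1, 3, 4)                     # (H/P, W/P, P, P, C)
    return x.reshape(-1, patch_size * patch_size * c)  # (num_patches, P*P*C)

patches = patch_partition(np.zeros((224, 224, 3)))
print(patches.shape)  # (3136, 48)
```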
@DLPerf Hello, sorry for the late response. Thank you for sharing the good idea about tf.function. Actually, I have to train the model on the CPU because of a constant memory leak. I...
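For context, the tf.function suggestion would look roughly like the sketch below; the model, optimizer, and loss here are placeholders, not the actual training code:

```python
import tensorflow as tf

# Placeholder model/optimizer/loss for illustration only.
model = tf.keras.Sequential([tf.keras.layers.Dense(2)])
optimizer = tf.keras.optimizers.Adam(1e-3)
loss_fn = tf.keras.losses.MeanSquaredError()

@tf.function  # compiles the step into a graph instead of running it eagerly each call
def train_step(x, y):
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

train_step(tf.zeros((8, 3)), tf.zeros((8, 2)))
```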
@DLPerf Anyway, were you able to run the Dota2 environment successfully? I have only tested it in my own workspace, so I am not sure whether the environment also works well in another workspace...