Youtube-Code-Repository

Repository for most of the code from my YouTube channel

48 Youtube-Code-Repository issues, sorted by most recently updated

Traceback (most recent call last): File "main.py", line 42, in agent.learn(observation, action, reward, observation_, done) TypeError: learn() takes 5 positional arguments but 6 were given. Suggested change: def learn(self, state, reward,...
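
The traceback means the call site passes five values (plus self) while the method signature accepts only four besides self. A minimal sketch of a signature that matches the call, with illustrative parameter names rather than the repo's actual code:

```python
# Illustrative sketch only: the signature below accepts all five values
# that main.py passes, which resolves the arity mismatch in the traceback.
class Agent:
    def learn(self, state, action, reward, state_, done):
        # the agent's update rule would run here
        pass

agent = Agent()
observation, action, reward, observation_, done = [0.0], 1, 1.0, [0.1], False
agent.learn(observation, action, reward, observation_, done)  # matches the call in main.py
```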

Hello @philtabor, when you attempt to use experience replay in an actor-critic setting, it looks to me like only the critic part is trained (gradients propagated), but the actor part...
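
For reference, a generic PyTorch sketch (not the repo's code) of how a single combined loss sends gradients into both networks when learning from a replayed transition; all names, shapes, and values here are illustrative:

```python
# Illustrative sketch: combining actor and critic losses before backward()
# so that gradients reach both networks, not only the critic.
import torch
import torch.nn as nn

actor = nn.Linear(4, 2)    # toy policy head (logits over two actions)
critic = nn.Linear(4, 1)   # toy state-value head
params = list(actor.parameters()) + list(critic.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)

# a single (replayed) transition with toy values
state = torch.randn(1, 4)
state_ = torch.randn(1, 4)
action = torch.tensor([0])
reward, done, gamma = 1.0, False, 0.99

value = critic(state)
value_ = critic(state_).detach()
delta = reward + gamma * value_ * (1 - int(done)) - value

log_probs = torch.log_softmax(actor(state), dim=-1)
actor_loss = -log_probs[0, action] * delta.detach()  # delta used as a weight only
critic_loss = delta.pow(2)

optimizer.zero_grad()
(actor_loss.sum() + critic_loss.sum()).backward()  # gradients flow to actor and critic
optimizer.step()
```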

Hi, I was trying to save the model (lunar lander YouTube tutorial) but I'm not able to. I tried adding agent.save_model() in main_tf2_dqn_lunar_lander.py, but then it gives an error as...
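
One common workaround, sketched here under the assumption that the agent keeps its Keras network in an attribute such as q_eval (an assumption, not necessarily the repo's layout), is to save and load only the weights:

```python
# Hypothetical sketch: persisting a Keras DQN by its weights; saving only
# the weights sidesteps errors that full model.save() can raise.
from tensorflow import keras

q_eval = keras.Sequential([
    keras.layers.Dense(256, activation='relu', input_shape=(8,)),  # 8 = LunarLander observation size
    keras.layers.Dense(256, activation='relu'),
    keras.layers.Dense(4, activation=None),                        # 4 discrete actions
])
q_eval.compile(optimizer='adam', loss='mse')

q_eval.save_weights('dqn_lunar_lander.weights.h5')   # e.g. inside agent.save_model()
q_eval.load_weights('dqn_lunar_lander.weights.h5')   # e.g. inside agent.load_model()
```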

Why is the parameter input_dim passed if it is not used inside the function? Where is the input shape layer?
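
If a parameter like input_dims is not used inside the network-building function, Keras simply infers the input shape the first time the model sees data. A small sketch (illustrative names, not the repo's exact code) showing both behaviours:

```python
# Illustrative sketch: without an explicit input_shape, Keras infers the
# input size on the first call; build() makes it explicit up front.
import numpy as np
from tensorflow import keras

def build_dqn(n_actions, input_dims, fc1_dims=256, fc2_dims=256):
    model = keras.Sequential([
        keras.layers.Dense(fc1_dims, activation='relu'),   # no input_shape given here
        keras.layers.Dense(fc2_dims, activation='relu'),
        keras.layers.Dense(n_actions, activation=None),
    ])
    # using input_dims is optional, but it fixes the input shape immediately
    # and lets model.summary() work before any data is seen:
    model.build(input_shape=(None, input_dims))
    return model

model = build_dqn(n_actions=4, input_dims=8)
model.summary()
q_values = model(np.random.rand(1, 8).astype('float32'))
```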

Due to the custom_loss, an "expected array" error occurs when we load the trained model. Can anyone help as soon as possible?
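
A model saved with a custom loss generally has to be loaded with that loss registered, or with compilation skipped. A sketch under the assumption that the loss is a plain function named custom_loss; the file path is hypothetical:

```python
# Hypothetical sketch: register the custom loss when loading, or skip
# compilation entirely if the model is only needed for inference.
from tensorflow import keras
from tensorflow.keras import backend as K

def custom_loss(y_true, y_pred):
    # illustrative stand-in for the actual custom loss
    return K.mean(K.square(y_true - y_pred))

model = keras.models.load_model('trained_model.h5',              # hypothetical path
                                custom_objects={'custom_loss': custom_loss})

# alternative: load without compiling, for prediction only
model = keras.models.load_model('trained_model.h5', compile=False)
```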

First of all, thank you for all the knowledge you have shared; if it were not for your videos I could not have understood Q-learning. https://github.com/philtabor/Youtube-Code-Repository/blob/3fd7b0248e3e81a75d889a80ed2bf7f710334b12/ReinforcementLearning/DeepQLearning/dueling_dqn_keras.py#L120 I tested with my...

Hi Phil, huge fan of your work. I have two questions regarding the policy gradients TensorFlow code for SpaceInvaders: 1. In reinforce_cnn_tf.py, in the choose_action function, there is a line: probabilities...
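
For context, a generic sketch of how a REINFORCE-style choose_action typically samples from the network's softmax output; names and values are illustrative, not the file's exact code:

```python
# Illustrative sketch: sample an action from a softmax probability vector.
import numpy as np

def choose_action(probabilities, action_space):
    # probabilities: 1-D array from the policy network's softmax output, sums to 1
    return np.random.choice(action_space, p=probabilities)

probs = np.array([0.1, 0.2, 0.3, 0.4])
action = choose_action(probs, action_space=[0, 1, 2, 3])
print(action)
```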

Hi, I tried your code as an example (reinforce_keras.py) on "pong-v0" and the model is not learning; I think something is wrong in the code.
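
One possible explanation, assuming reinforce_keras.py feeds observations straight into a small dense network (an assumption about that file): raw Pong frames are 210x160x3 images, which usually need preprocessing (and ideally a convolutional network) before a policy-gradient agent can learn. A common preprocessing sketch:

```python
# Illustrative Pong frame preprocessing: crop, downsample, erase the
# background, and binarize, producing an 80*80 = 6400-dimensional vector.
import numpy as np

def preprocess(frame):
    # frame: 210x160x3 uint8 Atari observation
    frame = frame[35:195]                 # crop the playing field
    frame = frame[::2, ::2, 0].copy()     # downsample by 2, keep one channel
    frame[frame == 144] = 0               # erase background (type 1)
    frame[frame == 109] = 0               # erase background (type 2)
    frame[frame != 0] = 1                 # paddles and ball become 1
    return frame.astype(np.float32).ravel()
```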

The example crashes when drawing the plot; there are not enough parameters.
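
As a point of comparison, a self-contained learning-curve plotting helper; the four-argument signature (x values, scores, epsilon history, output filename) is an assumption, not necessarily the repo's utility:

```python
# Hypothetical plotting helper: running-average score on one axis and
# epsilon on the other; the four-argument signature is assumed.
import numpy as np
import matplotlib.pyplot as plt

def plot_learning_curve(x, scores, epsilons, filename):
    fig, ax1 = plt.subplots()
    ax1.plot(x, epsilons, color='C0')
    ax1.set_xlabel('training steps')
    ax1.set_ylabel('epsilon', color='C0')

    ax2 = ax1.twinx()
    running_avg = [np.mean(scores[max(0, i - 100):i + 1]) for i in range(len(scores))]
    ax2.plot(x, running_avg, color='C1')
    ax2.set_ylabel('score (100-episode average)', color='C1')

    plt.savefig(filename)

n = 500
plot_learning_curve(list(range(n)),
                    np.random.randn(n).cumsum(),
                    np.linspace(1.0, 0.01, n),
                    'lunar_lander.png')
```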