Youtube-Code-Repository
Repository for most of the code from my YouTube channel
RuntimeError: API has changed, `state_steps` argument must contain a list of singleton tensors
In the following line, the code can break if `self.max_action` is large enough that `action` can take a high value, making the argument of the logarithm negative....
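A common numerical guard for this kind of failure, sketched here under the assumption that the log term has the SAC-style form `log(1 - (action / max_action)**2)`; the function name and epsilon value are illustrative, not taken from the repository:

```python
import numpy as np

def tanh_log_prob_term(action, max_action, eps=1e-6):
    """Log-probability correction term for a squashed action.

    Without eps, 1 - (action / max_action)**2 can reach zero (or dip
    slightly negative through floating-point error), making the log
    undefined. Clipping plus eps keeps the argument strictly positive.
    """
    squashed = np.clip(action / max_action, -1.0, 1.0)
    return np.log(1.0 - squashed ** 2 + eps)

# An action at the boundary no longer produces -inf or nan:
print(np.isfinite(tanh_log_prob_term(np.array([2.0]), max_action=2.0)))
```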
Fixes discount to "start over" if an episode finishes
DQN
```
File "/home/../aichess/main.py", line 13, in <module>
    agent = Agent(
File "/home/../aichess/engines/dqn.py", line 114, in __init__
    self.q_eval = DeepQNetwork(self.lr, self.n_actions,
TypeError: DeepQNetwork.__init__() got multiple values for argument 'input_dims'
```
In the "main_torch_dqn_lunar_lander_2020.py" file, the line `self.state_memory[index] = state` raises "ValueError: setting an array element with a sequence. The requested array would exceed the maximum number of dimension of 1" When...
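This error typically means `state` is not a flat array: under newer Gym/Gymnasium versions, `env.reset()` returns an `(observation, info)` tuple, and assigning that tuple into a 1-D slot of `state_memory` fails. A minimal sketch of the failure and the usual fix (the reset result below is a stand-in; no real environment is created):

```python
import numpy as np

state_memory = np.zeros((10, 8), dtype=np.float32)  # replay-buffer slots

# Stand-in for what newer Gym versions hand back from env.reset():
reset_result = (np.zeros(8, dtype=np.float32), {})  # (observation, info)

# Storing the raw tuple raises the ValueError described in the issue:
try:
    state_memory[0] = reset_result
except ValueError as e:
    print("fails:", e)

# Fix: unpack the observation before storing it in the buffer.
observation, info = reset_result
state_memory[0] = observation
```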
If you try to change the `n_actions` parameter, the model will fail when it tries to learn:
```
164/164 [==============================] - 0s 998us/step
164/164 [==============================] - 0s 887us/step
[[[nan...
```
Correction of a small mistake in row 54:

before: `self.device = T.device('cuda:0' if T.cuda.is_available() else 'cuda:1')`

after: `self.device = T.device('cuda:0' if T.cuda.is_available() else 'cpu')`
On the first `env.reset()` call, a tuple of the observation array and an empty dict is returned; this empty dict breaks the rest of the code. Is that a new...
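This matches the Gym 0.26+/Gymnasium API change, where `env.reset()` returns an `(observation, info)` tuple rather than the bare observation. A version-tolerant unpacking helper, sketched with pure-Python stand-in environments (the class names and helper are illustrative, not from the repository):

```python
def reset_compat(env):
    """Unpack env.reset() for both old (obs) and new ((obs, info)) Gym APIs."""
    result = env.reset()
    if isinstance(result, tuple) and len(result) == 2 and isinstance(result[1], dict):
        observation, _info = result  # new API: discard the info dict
        return observation
    return result  # old API: reset() returned the observation directly

# Stand-in environments (illustrative, not real Gym):
class NewStyleEnv:
    def reset(self):
        return [0.0, 0.0], {}

class OldStyleEnv:
    def reset(self):
        return [0.0, 0.0]

print(reset_compat(NewStyleEnv()))  # [0.0, 0.0]
print(reset_compat(OldStyleEnv()))  # [0.0, 0.0]
```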