
This repository accompanies the book "Grokking Deep Learning".

Results: 51 Grokking-Deep-Learning issues

In "Chapter 5: Gradient Descent Learning with Multiple Inputs" the variable `weight_deltas` is missing, as reported in issues #25 and #3; it has now been added.

In the **Homomorphically encrypted federated learning** section, the provided code is as follows:

```
1. model = Embedding(vocab_size=len(vocab), dim=1)
2. model.weight.data *= 0
3.
4. # note that in production...
```

`bs` is not defined and should probably be `batch_size`:

```python
def train(model, input_data, target_data, batch_size=500, iterations=5):
    criterion = MSELoss()
    optim = SGD(parameters=model.get_parameters(), alpha=0.01)
    n_batches = int(len(input_data) / batch_size)
    for iter...
```

1. The `bs` variable should probably be `batch_size`. 2. The `copy` package is used before `import copy`.
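The two fixes above can be sketched together in a minimal, self-contained loop. This is not the book's exact `train()` (no `MSELoss`/`SGD` here); it only shows the renamed `batch_size` parameter used consistently where `bs` appeared, and `import copy` placed before first use:

```python
import copy  # imported before use, per fix 2


def train(model_params, input_data, target_data, batch_size=500, iterations=5):
    # was: n_batches = int(len(input_data) / bs) -- `bs` undefined
    n_batches = len(input_data) // batch_size
    snapshots = []
    for _ in range(iterations):
        for b in range(n_batches):
            batch = input_data[b * batch_size:(b + 1) * batch_size]
            # snapshot parameters; requires `copy` already imported
            snapshots.append(copy.deepcopy(model_params))
    return n_batches


print(train([0.1, 0.2], list(range(1000)), list(range(1000))))  # → 2
```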

With an 8x8 image and a 3x3 kernel, we should get a 6x6 output, since 8 - 3 + 1 = 6. But when using `for row_start in range(layer_0.shape[1]-kernel_rows)`, the last kernel position in each row is discarded. What do...
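The off-by-one the issue describes can be demonstrated directly: `range(8 - 3)` yields only 5 start positions, while the valid count is 8 - 3 + 1 = 6. A minimal sketch:

```python
import numpy as np

layer_0 = np.zeros((1, 8, 8))  # one 8x8 image
kernel_rows = 3

# Buggy loop bound from the issue: skips the last valid kernel placement.
buggy_starts = list(range(layer_0.shape[1] - kernel_rows))
# Corrected bound: includes all 8 - 3 + 1 = 6 placements.
fixed_starts = list(range(layer_0.shape[1] - kernel_rows + 1))

print(len(buggy_starts), len(fixed_starts))  # → 5 6
```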

In Chapter 8, the value of alpha in the dropout example is 0.005. In the batch gradient descent example, the text says that alpha is 20 times larger than before....

…pynb: add scope for the `min_loss` variable so it can be updated from the previous cell. Currently it raises an UnboundLocalError.

Error in "Gradient Descent Learning with Multiple Inputs": `weight_deltas = ele_mul(delta, input)` is not used correctly. The code uses:

```python
for i in range(len(weights)):
    weight_deltas = ele_mul(delta, input)
    weights[i] -= alpha * weight_deltas[i]
```
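A hedged sketch of the fix, using the chapter's example numbers (weights `[0.1, 0.2, -0.1]`, inputs `[8.5, 0.65, 1.2]`, target `1`, alpha `0.01` are assumptions here): compute `weight_deltas` once with `ele_mul` before the update loop, instead of recomputing it (or omitting it) inside the loop:

```python
def ele_mul(scalar, vector):
    # scale every element of `vector` by `scalar`
    return [scalar * v for v in vector]


def gradient_descent_step(weights, inputs, target, alpha=0.01):
    pred = sum(w * x for w, x in zip(weights, inputs))
    delta = pred - target
    weight_deltas = ele_mul(delta, inputs)  # the reportedly missing line
    for i in range(len(weights)):
        weights[i] -= alpha * weight_deltas[i]
    return weights


w = gradient_descent_step([0.1, 0.2, -0.1], [8.5, 0.65, 1.2], target=1)
print(round(w[0], 4))  # → 0.1119
```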

The first example of the embedding layer has no newlines because it is rendered as raw text. (P.S. the book is awesome.)

`weight_deltas` are calculated in this way:

```
[[input[0] * delta[0], input[0] * delta[1], input[0] * delta[2]],
 [input[1] * delta[0], input[1] * delta[1], input[1] * delta[2]],
 [input[2] * delta[0], input[2]...
```
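The pattern shown above is an outer product: for multiple inputs and multiple outputs, `weight_deltas[i][j] = input[i] * delta[j]`. A minimal sketch with made-up numbers:

```python
def outer_product(input_vec, delta_vec):
    # weight_deltas[i][j] = input[i] * delta[j]
    return [[x * d for d in delta_vec] for x in input_vec]


print(outer_product([1, 2, 3], [10, 20, 30]))
# → [[10, 20, 30], [20, 40, 60], [30, 60, 90]]
```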