deep-learning-with-python-notebooks

Jupyter notebooks for the code samples of the book "Deep Learning with Python"

126 issues, sorted by most recently updated

Around the middle of page 33, the text says " ... an array of 60000 matrices of 28 x **8** integers.". I believe that should be 28 x **28**...

Following along with the example in [2.1](https://jjallaire.github.io/deep-learning-with-r-notebooks/notebooks/2.1-a-first-look-at-a-neural-network.nb.html), when I run ```r network <- keras_model_sequential() %>% layer_dense(units = 512, activation = "relu", input_shape = c(28 * 28)) %>% layer_dense(units = 10, activation =...

Here the kl_loss formulation uses -5e-4 * K.mean(…). However, according to the original paper, it should be -0.5 * K.sum(…). Here are two implementations: 1. from the Keras website: https://keras.io/examples/variational_autoencoder/ 2. from the PyTorch examples:...
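
For reference, here is a minimal sketch of the KL term in the -0.5 * K.sum(…) form; the function name and the z_mean / z_log_var tensor names are assumptions modeled on the book's VAE encoder outputs, not a quote from either implementation:

```python
from keras import backend as K

def kl_loss(z_mean, z_log_var):
    # KL divergence between N(z_mean, exp(z_log_var)) and a standard normal,
    # summed over the latent dimensions (the -0.5 * sum form from the VAE paper).
    return -0.5 * K.sum(
        1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
```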

On this page, it states the following about not using max pooling: > It isn’t conducive to learning a spatial hierarchy of features. The 3 × 3 windows in the third layer...
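
As a rough illustration of the point being quoted, here is a sketch of the two kinds of architecture being compared; the filter counts and the 28 × 28 input shape are assumptions modeled on the book's MNIST convnet:

```python
from keras import layers, models

# Without max pooling: a 3 x 3 window in the third conv layer still only covers
# a 7 x 7 patch of the original image, so high-level features stay local and no
# spatial hierarchy of features is learned.
model_no_pool = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.Conv2D(64, (3, 3), activation='relu'),
])

# With max pooling between conv layers, windows in later layers summarize
# progressively larger regions of the input.
model_with_pool = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
])
```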

The code in the book:

```python
import string

samples = ['The cat sat on the mat.', 'The dog ate my homework.']
characters = string.printable
token_index = dict(zip(range(1, len(characters) + 1), characters))  # the line in question
```
...
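
One possible fix (a sketch of my own, not official errata) is to swap the zip arguments so the mapping goes character → index, which is what the later lookup expects; the rest of the listing below is reproduced from memory and may differ slightly from the book:

```python
import string
import numpy as np

samples = ['The cat sat on the mat.', 'The dog ate my homework.']
characters = string.printable
# Map each character to an index, instead of index -> character as in the book.
token_index = dict(zip(characters, range(1, len(characters) + 1)))

max_length = 50
results = np.zeros((len(samples), max_length, max(token_index.values()) + 1))
for i, sample in enumerate(samples):
    for j, character in enumerate(sample[:max_length]):
        index = token_index.get(character)
        results[i, j, index] = 1.
```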

```python
mean = train_data.mean(axis=0)
train_data -= mean
std = train_data.std(axis=0)
train_data /= std
test_data -= mean
test_data /= std
```

Modify to:

```python
mean = train_data.mean(axis=0)
std = train_data.std(axis=0)
train_data -= mean
train_data /= std
```
...

The same plots are used in 5.3.1 (feature extraction) and 5.3.2 (fine-tuning). I checked every point and found them essentially identical. I read the book and it told...

In the 8.4 notebook, there is a missing multiplication in the sampling function. The code in listing 8-24 of the book is correct, but the notebook's version is not.
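
For reference, a sketch of the reparameterization-trick sampling function being discussed, with the `* epsilon` multiplication in place; latent_dim and the exact scaling of z_log_var are my recollection of the listing rather than a verbatim copy:

```python
from keras import backend as K

latent_dim = 2  # assumed, as in the MNIST VAE example

def sampling(args):
    z_mean, z_log_var = args
    epsilon = K.random_normal(shape=(K.shape(z_mean)[0], latent_dim),
                              mean=0., stddev=1.)
    # The "* epsilon" factor is the multiplication reported missing in the notebook.
    return z_mean + K.exp(z_log_var) * epsilon
```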

This one-hot encoding method only records whether a certain word appears in the text, not how often it occurs. Is that right?
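
A small sketch of the behavior in question, assuming the multi-hot vectorization helper from the book's IMDB example (the function name and dimension are taken from that example): an index that occurs several times still only produces a 1 in the resulting vector.

```python
import numpy as np

def vectorize_sequences(sequences, dimension=10000):
    # Multi-hot encoding: mark which word indices appear, ignoring their counts.
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.
    return results

encoded = vectorize_sequences([[3, 3, 3, 5]])
print(encoded[0, 3], encoded[0, 5])  # 1.0 1.0 -- frequency information is lost
```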