data-science-from-scratch
code for Data Science From Scratch book
Updated to correspond fully to the exercises in the book: (1) deleted user[10], because no data is provided for that user and they are not listed in the users list...
If the value `p = 1` (100%) is chosen, `sorted(x)[p_index]` is out of bounds because `p_index = p * len(x) = 1 * len(x) = len(x)` and the list `x`...
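For reference, a minimal sketch of the off-by-one case and one possible guard, assuming this refers to the `quantile` function from the statistics chapter (the clamping is only an illustration, not the repo's actual fix):

```
def quantile(x, p):
    """returns the p-th percentile value in x"""
    p_index = int(p * len(x))
    # for p = 1.0, p_index == len(x), one past the last valid index,
    # so clamp it to the last element
    p_index = min(p_index, len(x) - 1)
    return sorted(x)[p_index]

print(quantile([1, 2, 3, 4, 5], 1.0))  # 5 instead of an IndexError
```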
The corrected `most_common_interests_with(user)` method definition is exactly the same as in the book (page 8). The same result as the proposed correction would be given by a method that takes a `user_id = user["id"]` argument instead...
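For context, a self-contained sketch of the Chapter 1 function this refers to; the tiny `interests` data below is made up purely to make it runnable, and the defaultdict lookups mirror the book's `interests_by_user_id` / `user_ids_by_interest` structures:

```
from collections import Counter, defaultdict

# tiny made-up data, just so the sketch runs (not the book's data)
interests = [(0, "Python"), (0, "statistics"), (1, "Python"), (2, "statistics")]

user_ids_by_interest = defaultdict(list)
interests_by_user_id = defaultdict(list)
for user_id, interest in interests:
    user_ids_by_interest[interest].append(user_id)
    interests_by_user_id[user_id].append(interest)

def most_common_interests_with(user):
    # count other users weighted by how many interests they share with `user`
    return Counter(
        interested_user_id
        for interest in interests_by_user_id[user["id"]]
        for interested_user_id in user_ids_by_interest[interest]
        if interested_user_id != user["id"]
    )

print(most_common_interests_with({"id": 0}))  # e.g. Counter({1: 1, 2: 1})
```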
While running the code below, following the example in Chapter 19 (Clustering):

```
random.seed(0)  # so you get the same results as me
clusterer = KMeans(3)
clusterer.train(inputs)
print clusterer.means
```
...
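For anyone trying to reproduce this under Python 3, here is a minimal, self-contained k-means sketch in the spirit of the Chapter 19 class; the point data and the explicit list materialization are assumptions for illustration, not the book's exact code:

```
import random

def squared_distance(v, w):
    return sum((v_i - w_i) ** 2 for v_i, w_i in zip(v, w))

def vector_mean(vectors):
    n = len(vectors)
    return [sum(coords) / n for coords in zip(*vectors)]

class KMeans:
    def __init__(self, k):
        self.k = k
        self.means = None

    def classify(self, point):
        # index of the closest current mean
        return min(range(self.k),
                   key=lambda i: squared_distance(point, self.means[i]))

    def train(self, inputs):
        self.means = random.sample(inputs, self.k)
        assignments = None
        while True:
            # materialize as a list so the comparison below works in Python 3
            new_assignments = [self.classify(p) for p in inputs]
            if assignments == new_assignments:
                return
            assignments = new_assignments
            for i in range(self.k):
                points = [p for p, a in zip(inputs, assignments) if a == i]
                if points:
                    self.means[i] = vector_mean(points)

if __name__ == "__main__":
    random.seed(0)                 # so you get the same results each run
    # made-up 2-d points, not the book's data set
    inputs = [[-14, -5], [13, 13], [20, 23], [-19, -11], [-9, -16],
              [21, 27], [-49, 15], [26, 13], [-46, 5], [-34, -1]]
    clusterer = KMeans(3)
    clusterer.train(inputs)
    print(clusterer.means)         # print() is a function under Python 3
```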
A wee nonfunctional problem: you reuse the same four params for `precision()`, `recall()` and `accuracy()` in [machine_learning.py](https://github.com/joelgrus/data-science-from-scratch/blob/master/code-python3/machine_learning.py). Should be:

```
def precision(tp, fp):
    return tp / (tp + fp)
```
...
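Spelled out for all three functions, this is a sketch of what the issue seems to be suggesting; the argument order `tp, fp, fn, tn` follows the book's convention, and `accuracy()` genuinely needs all four counts:

```
def accuracy(tp, fp, fn, tn):
    # fraction of correct predictions over all predictions
    return (tp + tn) / (tp + fp + fn + tn)

def precision(tp, fp):
    # of the items predicted positive, the fraction that really are positive
    return tp / (tp + fp)

def recall(tp, fn):
    # of the items that really are positive, the fraction predicted positive
    return tp / (tp + fn)

print(precision(70, 4930), recall(70, 13930))  # made-up counts, just to show the calls
```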
I'm new to neural networks. In your example of backpropagation, you update the weights pointing from the hidden layer to the output layer immediately after calculating the output deltas. After...
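A sketch of the ordering this question is getting at: compute the hidden-layer deltas from the still-unmodified hidden-to-output weights, and only then apply both weight updates. Helper names and the two-layer network layout are assumptions in the spirit of the chapter, not the book's verbatim code:

```
import math

def sigmoid(t):
    return 1 / (1 + math.exp(-t))

def dot(v, w):
    return sum(v_i * w_i for v_i, w_i in zip(v, w))

def neuron_output(weights, inputs):
    return sigmoid(dot(weights, inputs))

def feed_forward(network, input_vector):
    """network is [hidden_layer, output_layer]; each layer is a list of weight vectors"""
    outputs = []
    for layer in network:
        input_with_bias = input_vector + [1]           # constant bias input
        output = [neuron_output(neuron, input_with_bias) for neuron in layer]
        outputs.append(output)
        input_vector = output                          # this layer feeds the next
    return outputs

def backpropagate(network, input_vector, targets):
    hidden_outputs, outputs = feed_forward(network, input_vector)

    # output deltas: sigmoid derivative times prediction error
    output_deltas = [output * (1 - output) * (output - target)
                     for output, target in zip(outputs, targets)]

    # hidden deltas computed BEFORE any weights change,
    # so they are based on the old hidden-to-output weights
    hidden_deltas = [hidden_output * (1 - hidden_output) *
                     dot(output_deltas, [n[i] for n in network[-1]])
                     for i, hidden_output in enumerate(hidden_outputs)]

    # now update hidden-to-output weights ...
    for i, output_neuron in enumerate(network[-1]):
        for j, hidden_output in enumerate(hidden_outputs + [1]):
            output_neuron[j] -= output_deltas[i] * hidden_output

    # ... and input-to-hidden weights
    for i, hidden_neuron in enumerate(network[0]):
        for j, inp in enumerate(input_vector + [1]):
            hidden_neuron[j] -= hidden_deltas[i] * inp

# tiny made-up network: 2 inputs, 2 hidden neurons, 1 output neuron (plus biases)
network = [[[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]],
           [[0.7, 0.8, 0.9]]]
backpropagate(network, [1, 0], [1])
```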
I'm not 100% sure I had entered all of my code correctly by this point, but I got an overflow error when I tried to fit the logistic model example...
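One common source of an `OverflowError` in that chapter is `math.exp` blowing up inside the logistic function for large-magnitude inputs; a numerically safer variant is sketched below (this is an assumption about the likely cause, not a confirmed diagnosis of this report):

```
import math

def logistic(x):
    # evaluate the sigmoid without calling exp() on a large positive number
    if x >= 0:
        return 1 / (1 + math.exp(-x))
    z = math.exp(x)          # x < 0, so this stays small
    return z / (1 + z)

print(logistic(1000), logistic(-1000))  # 1.0 0.0, with no OverflowError
```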