
Bug in build_makemore_mlp.ipynb on colab

Open MithrilMan opened this issue 1 year ago • 0 comments

First of all, thanks from a 45-year-old full stack dev who never stops learning and really enjoyed your videos!!

I know I'm picky, but as a dev I feel the need to point out a small bug in the build_makemore_mlp.ipynb Colab notebook.

You moved the initialization

lri = []
lossi = []
stepi = []

up into its own cell so you don't lose the history of your traces, good! But... in the next cell, stepi.append(i) uses the raw i from the for loop, so the step count restarts from 0. This means that if you run the training cell multiple times, the chart overwrites itself over and over. It's very visible if you run, for example, just 20 iterations (instead of 200,000) multiple times.

The solution is very simple: just add last_step = stepi[-1] if stepi else 0 before the for loop, and change stepi.append(i) to stepi.append(i + last_step).

Here's the full revised cell to ease the update :)

last_step = stepi[-1] if stepi else 0
for i in range(20):
  
  # minibatch construct
  ix = torch.randint(0, Xtr.shape[0], (32,))
  
  # forward pass
  emb = C[Xtr[ix]] # (32, 3, 10)
  h = torch.tanh(emb.view(-1, 30) @ W1 + b1) # (32, 100)
  logits = h @ W2 + b2 # (32, 27)
  loss = F.cross_entropy(logits, Ytr[ix])
  #print(loss.item())
  
  # backward pass
  for p in parameters:
    p.grad = None
  loss.backward()
  
  # update
  #lr = lrs[i]
  lr = 0.1 if i < 100000 else 0.01
  for p in parameters:
    p.data += -lr * p.grad

  # track stats
  #lri.append(lre[i])
  stepi.append(i + last_step)
  lossi.append(loss.log10().item())

#print(loss.item())
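For anyone who wants to see the offset logic in isolation, here is a minimal sketch with no torch dependency (run_training and fake_losses are hypothetical names just for this demo). Note that adding 1 to the offset avoids repeating the previous run's final step index:

```python
stepi = []
lossi = []

def run_training(n_iters, fake_losses):
    # Offset new steps by the last recorded step so repeated runs
    # extend the x-axis instead of restarting from 0.
    # The +1 avoids duplicating the previous run's final step index.
    last_step = stepi[-1] + 1 if stepi else 0
    for i in range(n_iters):
        stepi.append(i + last_step)
        lossi.append(fake_losses[i])

run_training(3, [2.5, 2.1, 1.9])
run_training(3, [1.8, 1.7, 1.6])
print(stepi)  # [0, 1, 2, 3, 4, 5]
```

Plotting stepi against lossi after several runs now gives one continuous curve rather than overlapping restarts.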

I have to say I'm not used to Colab; I don't know if I can open a pull request on it. I didn't find the notebook in this repo, so I couldn't fix it myself. If there is an easier way, let me know, in case I find something else to fix.

Thank you again for your amazing content!

MithrilMan avatar Dec 15 '24 20:12 MithrilMan