Different ways of calculating y_pred in lesson 02 (PyTorch classification) vs. the extras exercises. Why?
In the make_blobs notebook, test_pred is calculated like this:

```python
test_logits = model_4(X_blob_test)
test_pred = torch.softmax(test_logits, dim=1).argmax(dim=1)
```

but in the extras exercises you have to use this code instead:

```python
test_logits = model_0(X_test).squeeze()
test_pred = torch.round(torch.sigmoid(test_logits))
```
I tried to use the first approach for the extras, but after using this code to get past the errors:

```python
y_pred_probs_test = torch.sigmoid(y_logits_test)
y_pred_test2 = torch.softmax(y_logits_test.unsqueeze(1), dim=1).argmax(dim=1)
```

I get a test accuracy of 0.5 while my loss is near 0.
I can't understand when I should use torch.round and when I should use .argmax(dim=1) to convert test_logits to test_pred.
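To make the comparison concrete, here is a minimal, self-contained sketch of the two patterns I'm comparing (the random tensors are placeholders standing in for the real model outputs `model_0(X_test).squeeze()` and `model_4(X_blob_test)`):

```python
import torch

torch.manual_seed(42)

# --- Binary case: one logit per sample (as in the extras exercises) ---
binary_logits = torch.randn(5)               # shape: [5]
binary_probs = torch.sigmoid(binary_logits)  # each value in (0, 1)
binary_preds = torch.round(binary_probs)     # threshold at 0.5 -> 0.0 or 1.0

# --- Multi-class case: one logit per class (as in make_blobs, 4 classes) ---
multi_logits = torch.randn(5, 4)                  # shape: [5, 4]
multi_probs = torch.softmax(multi_logits, dim=1)  # each row sums to 1
multi_preds = multi_probs.argmax(dim=1)           # class index in 0..3

print(binary_preds.shape, multi_preds.shape)
```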
Hi @jenabesaman ,
It will depend on what kind of data you're passing to torch.softmax or torch.sigmoid.
Do you have a fuller example of your code so I can help? It would be great if you shared a complete example for each of your problems.
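One thing worth checking in the meantime, sketched with a random placeholder tensor standing in for your single-logit binary output: applying softmax over a dimension of size 1 normalizes each one-element row to exactly 1.0, so argmax over that dimension returns 0 for every sample. Constant predictions on a balanced binary test set would explain an accuracy of 0.5.

```python
import torch

torch.manual_seed(0)

# Single logit per sample, as in the binary extras exercise
logits = torch.randn(8)  # shape: [8]

# logits.unsqueeze(1) has shape [8, 1]; softmax over dim=1 turns each
# one-element row into exactly 1.0, so argmax(dim=1) is always 0.
bad_preds = torch.softmax(logits.unsqueeze(1), dim=1).argmax(dim=1)
print(bad_preds)  # all zeros: the model "predicts" class 0 for everything

# For a single logit, threshold the sigmoid probability instead
good_preds = torch.round(torch.sigmoid(logits))
```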