intel-extension-for-pytorch

Training on XPU actually increases training time and sharply reduces accuracy.

Open SoldierWz opened this issue 11 months ago • 9 comments

Describe the issue

When I tried training my classification model on the graphics card, I made the following changes to my code:

```python
device = 'xpu'
X_tensor = torch.tensor(X_processed, dtype=torch.float).to(device)
y_tensor = torch.tensor(y, dtype=torch.long).to(device)
model = FCN(X_train.shape[1], len(np.unique(y))).to(device)
model, optimizer = ipex.optimize(model, optimizer=optimizer)
features, labels = features.to(device), labels.to(device)
X_test = X_test.to(device)
```

This is a prediction and classification task on a small dataset; the sample size is only a few hundred. I know a task this small is not well suited to a GPU, but I wanted to try it anyway. I am using an A770 graphics card, and the processor is a 12400. I successfully installed all the prerequisites by following the tutorial.

Training on the CPU took 24 seconds with an accuracy of 0.94. But when I moved all the data to the XPU, the same training took 1 minute and 40 seconds, and the accuracy was only 0.34.

This is not an important issue, but I still want to report this unusual behavior.
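For context, here is a minimal, self-contained sketch of the setup described above. The `FCN` architecture, the synthetic data standing in for `X_processed` / `y`, and the choice of loss and optimizer are all assumptions, since the report only shows fragments; only the device placement and the `ipex.optimize` call follow the snippet in the report.

```python
# Minimal sketch (assumptions noted below), not the reporter's exact script.
import numpy as np
import torch
import torch.nn as nn
import intel_extension_for_pytorch as ipex  # registers the 'xpu' device

device = 'xpu'

# Assumption: synthetic stand-in for the few-hundred-sample dataset.
X_processed = np.random.randn(300, 20).astype(np.float32)
y = np.random.randint(0, 3, size=300)

X_tensor = torch.tensor(X_processed, dtype=torch.float).to(device)
y_tensor = torch.tensor(y, dtype=torch.long).to(device)

# Assumption: a simple fully connected classifier matching the
# FCN(in_features, num_classes) constructor call in the report.
class FCN(nn.Module):
    def __init__(self, in_features, num_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.net(x)

model = FCN(X_tensor.shape[1], len(np.unique(y))).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# ipex.optimize is applied after the model is on the target device
# and before the training loop, as in the report.
model, optimizer = ipex.optimize(model, optimizer=optimizer)

for epoch in range(100):
    optimizer.zero_grad()
    logits = model(X_tensor)
    loss = criterion(logits, y_tensor)
    loss.backward()
    optimizer.step()
```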

SoldierWz · Mar 23 '24 08:03