PaDiM-Anomaly-Detection-Localization-master
It would also be good to save conv_inv (the precomputed inverse covariance).
main.py
Original code:

```python
for i in range(H * W):
    # cov[:, :, i] = LedoitWolf().fit(embedding_vectors[:, :, i].numpy()).covariance_
    cov[:, :, i] = np.cov(embedding_vectors[:, :, i].numpy(), rowvar=False) + 0.01 * I
# save learned distribution
train_outputs = [mean, cov]
```
Change to:

```python
for i in range(H * W):
    # cov[:, :, i] = LedoitWolf().fit(embedding_vectors[:, :, i].numpy()).covariance_
    cov[:, :, i] = np.cov(embedding_vectors[:, :, i].numpy(), rowvar=False) + 0.01 * I
# save learned distribution
conv_inv = np.linalg.inv(cov.T).T
train_outputs = [mean, cov, conv_inv]
```
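The `np.linalg.inv(cov.T).T` trick works because `np.linalg.inv` batch-inverts over leading axes: transposing `cov` (shaped `C x C x H*W` in the snippet above) moves the matrix axes last, and transposing back restores the original layout. A minimal sketch with assumed small shapes, checking the batched inverse against a per-slice loop:

```python
import numpy as np

# Assumed shapes for illustration: cov is C x C x N, as in the snippet above.
rng = np.random.default_rng(0)
C, N = 4, 6
A = rng.standard_normal((C, C, N))
# Make each C x C slice symmetric positive definite (covariance-like, with
# the same 0.01 ridge used in the original code).
cov = np.einsum('ikn,jkn->ijn', A, A) + 0.01 * np.eye(C)[:, :, None]

# Batched inverse: put the matrix axes last, invert all slices at once,
# then transpose back to C x C x N.
conv_inv = np.linalg.inv(cov.T).T

# Same result as inverting each slice separately.
for i in range(N):
    assert np.allclose(conv_inv[:, :, i], np.linalg.inv(cov[:, :, i]))
```

One batched call replaces `H * W` separate `np.linalg.inv` calls at test time, which is where the speedup comes from.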
How to use it:
```python
dist_list = []
for i in range(H * W):
    mean = train_outputs[0][:, i]
    # conv_inv = np.linalg.inv(train_outputs[1][:, :, i])
    dist = [mahalanobis(sample[:, i], mean, train_outputs[2][:, :, i]) for sample in embedding_vectors]
    dist_list.append(dist)
```
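Note that scipy's `mahalanobis` already takes the *inverse* covariance as its third argument, so passing the saved `conv_inv` slice is a drop-in replacement for inverting on the fly. A small self-contained check (the shapes and data here are made up for illustration):

```python
import numpy as np
from scipy.spatial.distance import mahalanobis

# Hypothetical single pixel position with C-dimensional embeddings.
rng = np.random.default_rng(1)
C = 4
samples = rng.standard_normal((50, C))              # 50 training embedding vectors
mean = samples.mean(axis=0)
cov = np.cov(samples, rowvar=False) + 0.01 * np.eye(C)
conv_inv = np.linalg.inv(cov)                       # precomputed at train time

x = rng.standard_normal(C)                          # a test embedding vector
# mahalanobis(u, v, VI) expects VI = inverse covariance, so the precomputed
# conv_inv gives the same distance as inverting cov inside the test loop.
d_precomputed = mahalanobis(x, mean, conv_inv)
d_on_the_fly = mahalanobis(x, mean, np.linalg.inv(cov))
assert np.isclose(d_precomputed, d_on_the_fly)
```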
In my opinion, you do not need to save cov at all; saving only the inverse is enough:

```python
# save only
train_outputs = [mean, conv_inv]
```