adversarial-autoencoder
A question about cluster heads cost
Hello! I have recently been studying your adversarial-autoencoder code. I don't understand how you define the starting labels and ending labels that appear in aae_dim_reduction.py. Could you tell me why you define them that way?
Hi.
A naive implementation that computes the Euclidean distance between every two cluster heads would be as follows.
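Here is a minimal sketch of that naive version (not the author's exact code; `cluster_head`, `num_clusters`, and `ndim_z` are assumed names for the cluster head coordinates in the latent space):

```python
import numpy as np

# Assumed setup: 10 cluster heads living in a 2D latent space.
num_clusters = 10
ndim_z = 2
cluster_head = np.random.uniform(-1, 1, (num_clusters, ndim_z)).astype(np.float32)

# Naive version: distance between every ordered pair of heads,
# including i == j and both (i, j) and (j, i).
distances = []
for i in range(num_clusters):
    for j in range(num_clusters):
        distances.append(np.linalg.norm(cluster_head[i] - cluster_head[j]))
```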
But there are duplicate values: the distance from head i to head j is the same as the distance from head j to head i, and the distance from a head to itself is always zero.
The required values are only the distances for the unique pairs (i < j), and starting_labels and ending_labels are defined so as to enumerate exactly those pairs.
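A sketch of one way to build them; the names starting_labels and ending_labels follow aae_dim_reduction.py, but this construction is my reconstruction, not necessarily the author's exact code:

```python
# Enumerate each unordered pair of cluster heads exactly once (i < j).
starting_labels, ending_labels = [], []
for i in range(num_clusters - 1):
    for j in range(i + 1, num_clusters):
        starting_labels.append(i)
        ending_labels.append(j)

# With these index arrays, all unique pairwise distances come out of
# a single vectorized operation instead of a double loop.
starting_vectors = cluster_head[starting_labels]  # shape (num_pairs, ndim_z)
ending_vectors = cluster_head[ending_labels]      # shape (num_pairs, ndim_z)
distances = np.linalg.norm(starting_vectors - ending_vectors, axis=1)
```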
:smile:
I tried the code, but I find that the cluster head loss doesn't help the classification accuracy. And in the last picture you posted, the same digits seem to be separated into different clusters. I guess the accuracy is not good, so I think maybe there is something wrong?
I think that the cluster head loss does not contribute to improving classification accuracy.
It is used to increase the distance between clusters.
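For reference, the cost described in the AAE paper only pushes heads apart while they are closer than a threshold η; a minimal sketch on top of the distances above (the value of eta here is an assumption, not taken from the repository):

```python
# Hinge-style penalty: pairs of cluster heads closer than eta are
# linearly penalized; pairs farther apart contribute zero cost.
eta = 1.0  # assumed threshold, not from the repository
cluster_head_cost = np.sum(np.maximum(0.0, eta - distances))
```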
In unsupervised learning, accuracy is not good.
I'm not sure if it is a bug.
But according to the paper, the classification error reaches 4.2% and 6.08% with 1000 and 100 labels respectively. My test result is 10% with 1000 labels, just like the picture below, and with 100 labels it is just a mess. That does not seem good!
A fixed rotation transform matrix has also been tried; the result is not ideal either.
But the classification error is not as good as in the paper. Why is that?
I think it is due to differences in implementation. I don't know how the authors implemented it, so it is difficult to reproduce the results perfectly.