adversarial-autoencoder
Semi-supervised dimensionality reduction visualization with z dimension > 2
Quoting the paper: "Once the network is trained, in order to visualize the 10D learnt representation, we use a linear transformation to map the 10D representation to a 2D space such that the cluster heads are mapped to the points that are uniformly placed on a 2D circle." With W_c = 10I we are simply adding the softmax y output to the style parameters. I am not sure how to implement this linear transformation. Could you provide a hint on how to implement it?
Hi. I added some code. https://github.com/musyoku/adversarial-autoencoder/tree/master/run/semi-supervised/dim_reduction_and_projection
In this case `ndim_z` and `ndim_y` are the same.
Cluster head

> we are simply adding the softmax y output to the style parameters. W_c = 10I

so I implemented it as follows:

```python
def encode_yz_representation(self, y, z):
    return 10 * y + z
```
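To see concretely what this does (a minimal NumPy sketch, not the repository code; the standalone function signature here is illustrative): with W_c = 10I, a one-hot `y` simply shifts `z` by 10 along the axis of its class, which is what pushes the ten clusters apart in the shared 10D space.

```python
import numpy as np

def encode_yz_representation(y, z, scale=10.0):
    # With W_c = scale * I, the linear combination reduces to scale * y + z
    return scale * y + z

y = np.eye(10, dtype=np.float32)[3]          # one-hot softmax output for class 3
z = np.random.randn(10).astype(np.float32)   # 10D style code
yz = encode_yz_representation(y, z)
# axis 3 is shifted by 10; all other axes keep their style values
```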
Additional loss
Step.1

Generate the 2D cluster heads:

```python
rad = math.radians(360 / model.ndim_y)
radius = 5
mapped_cluster_head_2d_target = np.zeros((model.ndim_y, 2), dtype=np.float32)
for n in range(model.ndim_y):
    x = math.cos(rad * n) * radius
    y = math.sin(rad * n) * radius
    mapped_cluster_head_2d_target[n] = (x, y)
```
Step.2

Optimize the linear transformation:

```python
identity = np.identity(model.ndim_y, dtype=np.float32)
mapped_head = model.linear_transformation(identity)
loss_linear_transformation = F.mean_squared_error(mapped_cluster_head_2d_target, mapped_head)
model.cleargrads()
loss_linear_transformation.backward()
optimizer_linear_transformation.update()
```
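The `model.linear_transformation` link itself is not shown here; presumably it is a bias-free 10→2 linear layer (e.g. `L.Linear(model.ndim_y, 2, nobias=True)` in Chainer, an assumption on my part). Note that because Step.2 feeds the identity matrix through the layer, the MSE loss has a closed-form minimizer: the layer's weight converges to exactly the transpose of the target matrix. A self-contained NumPy sketch of the converged transformation (an illustration, not the repository code):

```python
import math
import numpy as np

ndim_y = 10
radius = 5
rad = math.radians(360 / ndim_y)

# 2D cluster-head targets on a circle, as in Step.1
targets = np.array(
    [(math.cos(rad * n) * radius, math.sin(rad * n) * radius) for n in range(ndim_y)],
    dtype=np.float32,
)

# A bias-free linear layer applied to the identity returns its weight matrix
# transposed, so minimizing the MSE in Step.2 drives the weight to targets.T.
W = targets.T                                 # shape (2, ndim_y)

# Project a 10D representation (10*y + z) down to 2D for plotting
y = np.eye(ndim_y, dtype=np.float32)[3]       # one-hot class 3
z = 0.1 * np.ones(ndim_y, dtype=np.float32)   # small style offset
point_2d = (10 * y + z) @ W.T
```

Each column of `W` is the 2D head of one class, so `10 * y + z` is mapped close to ten times the corresponding cluster head (the style contributions from the symmetric circle nearly cancel here).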
Result
(Training has not completed yet)
That was quick. I did something similar in my TensorFlow implementation, and I am using your code as a benchmark. Thanks a lot.
@zeroXzero, is your TensorFlow implementation publicly available?