
What does it mean that the weight of an operation is a negative value?

Open · XinleWu opened this issue 4 years ago · 4 comments

Hi, I have read your paper at NeurIPS 2020, and I like it. I just have a question about the weights of operations. They usually contain negative values during training; what does it mean when the weight of an operation is negative? Also, the non-zero values in the weight matrix are not close to 1, so is there still a gap between the sub-graph and the final genotype?

XinleWu · Jan 19 '21 15:01

Hi, thanks for your attention. I think the "weight of operations" in your sense refers to the z values. We use the magnitude of a z value to represent the importance of its corresponding connection, regardless of its sign. The feature output of each connection can also contain negative values, so restricting the weights to be positive would not make sense. Our goal is to optimize the choice of two connections to combine at each intermediate node. The weight values (not close to 1) do contribute to the optimized model, but as scalar coefficients they can easily be learned again during retraining. Besides, in our one-stage ISTA-NAS, which needs no retraining, the weights are coupled into the optimized model, which eliminates the gap you are concerned about.
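As a rough sketch of this selection rule (hypothetical names and values, not the repository's actual code), picking the two incoming connections per intermediate node by the magnitude of z might look like:

```python
import numpy as np

# Hypothetical relaxed connection weights for one intermediate node.
# Signs and magnitudes differ; only |z| matters for the selection.
z = np.array([-0.62, 0.05, 0.48, -0.11])

# Keep the two connections with the largest magnitude, regardless of sign.
top2 = np.argsort(np.abs(z))[-2:]
print(sorted(top2.tolist()))  # -> [0, 2]: the -0.62 and 0.48 connections win
```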

iboing · Jan 24 '21 04:01

Thanks for your reply. Actually, the "weight of operations" I meant refers to b^T A(S) - z(S)^T E(S,S), whose values can be negative and not close to 1. And I'm confused about why we can evaluate the model searched by one-stage ISTA-NAS directly, because evaluation changes the above weight matrix into a 'multi-hot' matrix. Besides, I can't reproduce the CIFAR-10 result following your guidance. I used the following command:

nohup python3 -u ./tools/train_search_single.py --cutout --auxiliary > ista.log 2>&1 &

and the accuracy is 96.56%. Thanks again.
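For context, the expression above comes from the paper's sparse-coding formulation, where E is presumably the Gram matrix A^T A restricted to the support S. A generic ISTA iteration for a LASSO-type objective uses the same ingredients; the following is a minimal sketch with hypothetical names, not the authors' implementation of Eq. 11:

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of the L1 norm, applied elementwise.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista_step(z, A, b, lam, step):
    # One ISTA iteration for min_z 0.5 * ||A z - b||^2 + lam * ||z||_1.
    # The gradient involves A^T b and the Gram matrix A^T A, the same
    # quantities that appear in the expression discussed above.
    grad = A.T @ (A @ z - b)
    return soft_threshold(z - step * grad, step * lam)
```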

XinleWu · Jan 24 '21 11:01

This is my training log: ista.log. I'm trying other random seeds.

XinleWu · Jan 24 '21 14:01

Hi, what you refer to is equal to the z values (Eq. 11). Here we relax z from the binary space to the real space. We use the magnitude of z to indicate the importance of connections, regardless of whether it is close to 1 or not. The weight value itself is actually absorbed into the parameter learning of its corresponding connection during retraining. The search result can be quite diverse, so I suggest using the evaluation code to reproduce the performance of our architectures. The following files are our evaluation logs: One stage Imgnet.log, One stage Imgnet resume.log, One stage cifar.log.
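The "absorbed into parameter learning" point holds for any linear operation: scaling a connection's output by a scalar z is equivalent to folding z into that connection's weights, so a retrained connection can recover the same function even when z is negative or far from 1. A tiny illustrative check (hypothetical shapes and values):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))   # weights of some linear connection
x = rng.standard_normal(8)        # its input features
z = -0.37                         # a learned scalar, negative and far from 1

# Scaling the output by z equals scaling the weights by z, so retraining
# can absorb z (including its sign) into the connection's parameters.
np.testing.assert_allclose(z * (W @ x), (z * W) @ x)
```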

iboing · Feb 09 '21 14:02