
Source of validation accuracy in zero-cost case

Open jr2021 opened this issue 2 years ago • 2 comments

In the zero-cost branch, the optimizers Npenas and Bananas query the validation accuracy of architectures from the zero-cost benchmark as follows:

model.accuracy = self.zc_api[str(model.arch_hash)]['val_accuracy']

The question is whether this supports the case where a user wants to use the ZeroCost predictor on a dataset or search space that the zero-cost benchmark does not cover.

If this is a case that we want to support, one option would be to introduce a parameter use_zc_api and use it as follows:

if self.use_zc_api:
    # Fast path: look up the precomputed accuracy in the zero-cost benchmark
    model.accuracy = self.zc_api[str(model.arch_hash)]['val_accuracy']
else:
    # Fallback: query the dataset API directly, for datasets/search spaces
    # not covered by the zero-cost benchmark
    model.accuracy = model.arch.query(
        self.performance_metric, self.dataset, dataset_api=self.dataset_api
    )
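To make the intent concrete, here is a minimal, self-contained sketch of the proposed fallback. The class, function, and dictionary below are illustrative stand-ins, not NASLib's actual interfaces:

```python
# Illustrative sketch of a use_zc_api-style fallback.
# DummyArch, get_accuracy, and the zc_api dict are hypothetical stand-ins.

class DummyArch:
    def __init__(self, arch_hash, true_acc):
        self.arch_hash = arch_hash
        self._true_acc = true_acc

    def query(self, metric, dataset, dataset_api=None):
        # Stand-in for a direct (potentially expensive) benchmark query.
        return self._true_acc


def get_accuracy(arch, use_zc_api, zc_api, metric="val_accuracy", dataset="cifar10"):
    """Return validation accuracy from the zero-cost benchmark when
    available, otherwise fall back to querying the architecture."""
    key = str(arch.arch_hash)
    if use_zc_api and key in zc_api:
        return zc_api[key]["val_accuracy"]
    return arch.query(metric, dataset)


zc_api = {"123": {"val_accuracy": 91.5}}
in_benchmark = DummyArch(arch_hash=123, true_acc=91.5)
not_in_benchmark = DummyArch(arch_hash=456, true_acc=88.0)

print(get_accuracy(in_benchmark, True, zc_api))      # benchmark lookup
print(get_accuracy(not_in_benchmark, True, zc_api))  # direct query fallback
```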

jr2021 avatar Sep 17 '22 13:09 jr2021

The code was written this way for the Zero-Cost NAS paper, where we consumed only search spaces for which the values were available in the zc_api. It would make more sense to give users the option to choose whether or not to query the zc_api, as you suggest.

Neonkraft avatar Sep 21 '22 10:09 Neonkraft

Got it. Another sub-issue that came up is when to call query_zc_scores. The question is whether this function should only be called under the following condition:

if self.zc and len(self.train_data) <= self.max_zerocost:
    ...

Or is there a case where zero-cost scores can still be computed after the self.max_zerocost budget has been exceeded? We assume this parameter refers to the maximum number of zero-cost evaluations, so presumably the answer is no. What do you think?
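A toy simulation of that reading of the budget, assuming max_zerocost caps the size of the training set for which zero-cost scores are computed (function and variable names here are made up for illustration):

```python
# Hypothetical sketch: treat max_zerocost as a hard budget, so zero-cost
# scoring stops once the training set grows past it.

def run_search(num_iterations, max_zerocost):
    zc_calls = 0
    train_data = []
    for _ in range(num_iterations):
        if len(train_data) <= max_zerocost:
            zc_calls += 1  # stand-in for query_zc_scores(...)
        train_data.append(object())  # one newly evaluated model per step
    return zc_calls

# With a budget of 3, scoring happens only while len(train_data) is 0..3:
print(run_search(10, 3))  # -> 4
```

Under this interpretation the condition is checked once per iteration and simply stops firing after the budget is exhausted, which matches the guess that the answer is "no."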

jr2021 avatar Sep 21 '22 15:09 jr2021