
Error when running the cifar10_macro_final code

Open weidong8405347 opened this issue 6 years ago • 3 comments

When an operation index in the fixed arc is larger than 3, training fails with:

    File "src/cifar10/main.py", line 361, in <module>
      tf.app.run()
    File "/data1/winterhuang/huangweidong/tools/anaconda2/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 126, in run
      _sys.exit(main(argv))
    File "src/cifar10/main.py", line 357, in main
      train()
    File "src/cifar10/main.py", line 225, in train
      ops = get_ops(images, labels)
    File "src/cifar10/main.py", line 190, in get_ops
      child_model.connect_controller(None)
    File "/data1/winterhuang/automl/enas/src/cifar10/general_child.py", line 708, in connect_controller
      self._build_train()
    File "/data1/winterhuang/automl/enas/src/cifar10/general_child.py", line 598, in _build_train
      logits = self._model(self.x_train, is_training=True)
    File "/data1/winterhuang/automl/enas/src/cifar10/general_child.py", line 212, in _model
      x = self._fixed_layer(layer_id, layers, start_idx, out_filters, is_training)
    File "/data1/winterhuang/automl/enas/src/cifar10/general_child.py", line 468, in _fixed_layer
      prev = res_layers + [out]
    UnboundLocalError: local variable 'out' referenced before assignment

weidong8405347 · May 15 '18, 14:05

The fixed-layer code path doesn't implement the max-pooling and average-pooling branches. Implement them and the issue is solved.
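A minimal, TensorFlow-free sketch of the bug pattern behind the traceback: _fixed_layer only assigns `out` in the convolution branches (operation indices 0-3), so a pooling op (index 4 or 5) reaches `prev = res_layers + [out]` with `out` never bound. The function below is a hypothetical stand-in, not ENAS code:

```python
def fixed_layer(count):
    # Convolution branches (indices 0-3) assign `out`.
    if count in [0, 1, 2, 3]:
        out = "conv_{0}x{0}".format([3, 3, 5, 5][count])
    # No branch assigns `out` for count == 4 (avg pool) or 5 (max pool),
    # so the next line raises UnboundLocalError for those indices.
    return [out]

print(fixed_layer(2))  # ['conv_5x5']
try:
    fixed_layer(4)
except UnboundLocalError:
    print("out was never assigned")
```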

hsl0529 · May 21 '18, 06:05

If I understand correctly (and it seems to work in practice), you should modify the _fixed_layer() function in general_child.py as follows:

    if self.whole_channels:
      # Infer the input channel count and the data-format string that the
      # tf.layers pooling ops expect.
      if self.data_format == "NHWC":
        inp_c = inputs.get_shape()[3].value
        actual_data_format = "channels_last"
      elif self.data_format == "NCHW":
        inp_c = inputs.get_shape()[1].value
        actual_data_format = "channels_first"

      # Operation index for this layer: 0-3 are convolutions (3x3 or 5x5),
      # 4 is average pooling, 5 is max pooling.
      count = self.sample_arc[start_idx]
      if count in [0, 1, 2, 3]:
        size = [3, 3, 5, 5]
        filter_size = size[count]
        with tf.variable_scope("conv_1x1"):
          # 1x1 conv to bring the input up to out_filters channels.
          w = create_weight("w", [1, 1, inp_c, out_filters])
          out = tf.nn.relu(inputs)
          out = tf.nn.conv2d(out, w, [1, 1, 1, 1], "SAME",
                             data_format=self.data_format)
          out = batch_norm(out, is_training, data_format=self.data_format)

        with tf.variable_scope("conv_{0}x{0}".format(filter_size)):
          w = create_weight(
            "w", [filter_size, filter_size, out_filters, out_filters])
          out = tf.nn.relu(out)
          out = tf.nn.conv2d(out, w, [1, 1, 1, 1], "SAME",
                             data_format=self.data_format)
          out = batch_norm(out, is_training, data_format=self.data_format)
      elif count == 4:
        # Average-pooling branch: previously missing, leaving `out` unbound.
        with tf.variable_scope("pool"):
          out = tf.layers.average_pooling2d(
            inputs, [3, 3], [1, 1], "SAME", data_format=actual_data_format)
      elif count == 5:
        # Max-pooling branch: previously missing, leaving `out` unbound.
        with tf.variable_scope("pool"):
          out = tf.layers.max_pooling2d(
            inputs, [3, 3], [1, 1], "SAME", data_format=actual_data_format)
      else:
        raise ValueError("Unknown operation number '{0}'".format(count))
    else:
      .......

Adding the pooling branches for count == 4 and count == 5 should fix the UnboundLocalError when training a final (fixed-arc) architecture.
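For a quick sanity check of the index-to-operation mapping, the dispatch above can be mirrored in plain Python without building the TensorFlow graph. The operation names below are illustrative labels I chose, not ENAS identifiers:

```python
def op_name(count):
    # Mirrors the branch structure of the patched _fixed_layer dispatch:
    # indices 0-3 select a conv (3x3 or 5x5), 4 avg pool, 5 max pool.
    if count in [0, 1, 2, 3]:
        filter_size = [3, 3, 5, 5][count]
        return "conv_{0}x{0}".format(filter_size)
    elif count == 4:
        return "avg_pool_3x3"
    elif count == 5:
        return "max_pool_3x3"
    else:
        raise ValueError("Unknown operation number '{0}'".format(count))

print([op_name(c) for c in range(6)])
# ['conv_3x3', 'conv_3x3', 'conv_5x5', 'conv_5x5', 'avg_pool_3x3', 'max_pool_3x3']
```

With every index from 0 to 5 covered (and anything else raising ValueError), no path can fall through without producing an output.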

MrtnMndt · Jun 20 '18, 09:06

@MrtnMndt, thank you. I ran into the same error. Did you fix it with the code you posted?

axiniu · Jun 24 '18, 09:06