object_detector_app

data_flow_ops.py, line 91, in _as_name_list raises ValueError when run with Python 3.5, but everything runs well with Python 2.7

Open itxiud2015 opened this issue 7 years ago • 1 comment

When I run train.py with the faster_rcnn_resnet101 model under the Python 2.7 interpreter on my customized dataset, it works well. However, when I run the same code, in the same environment and on the same dataset, with the Python 3.5 interpreter, it reports the error below (screenshot: tf_35-train-fail-2).

I am running with the following setup:

GeForce GTX 1070
Ubuntu 16.04.2
tensorflow 1.4.1

and below is my config:

model {
  faster_rcnn {
    num_classes: 1
    image_resizer {
      fixed_shape_resizer {
        height: 400
        width: 400
      }
    }
    feature_extractor {
      type: 'faster_rcnn_resnet101'
      first_stage_features_stride: 16
    }
    first_stage_anchor_generator {
      grid_anchor_generator {
        scales: [0.25, 0.5, 1.0, 2.0]
        aspect_ratios: [0.5, 1.0, 2.0]
        height_stride: 16
        width_stride: 16
      }
    }
    first_stage_box_predictor_conv_hyperparams {
      op: CONV
      regularizer {
        l2_regularizer {
          weight: 0.00004
        }
      }
      initializer {
        truncated_normal_initializer {
          stddev: 0.01
        }
      }
    }
    first_stage_nms_score_threshold: 0.0
    first_stage_nms_iou_threshold: 0.7
    first_stage_max_proposals: 300
    first_stage_localization_loss_weight: 2.0
    first_stage_objectness_loss_weight: 1.0
    initial_crop_size: 14
    maxpool_kernel_size: 2
    maxpool_stride: 2
    second_stage_box_predictor {
      mask_rcnn_box_predictor {
        use_dropout: false
        dropout_keep_probability: 1.0
        fc_hyperparams {
          op: FC
          regularizer {
            l2_regularizer {
              weight: 0.0002
            }
          }
          initializer {
            variance_scaling_initializer {
              factor: 1.0
              uniform: true
              mode: FAN_AVG
            }
          }
        }
      }
    }
    second_stage_post_processing {
      batch_non_max_suppression {
        score_threshold: 0.0
        iou_threshold: 0.6
        max_detections_per_class: 100
        max_total_detections: 300
      }
      score_converter: SOFTMAX
    }
    second_stage_localization_loss_weight: 2.0
    second_stage_classification_loss_weight: 1.0
  }
}

train_config: {
  batch_size: 1
  optimizer {
    momentum_optimizer: {
      learning_rate: {
        manual_step_learning_rate {
          initial_learning_rate: 0.0001
          schedule {
            step: 0
            learning_rate: .0001
          }
          schedule {
            step: 5000
            learning_rate: .00001
          }
          schedule {
            step: 7000
            learning_rate: .000001
          }
        }
      }
      momentum_optimizer_value: 0.9
    }
    use_moving_average: false
  }
  gradient_clipping_by_norm: 10.0
  batch_queue_capacity: 2
  prefetch_queue_capacity: 2
  fine_tune_checkpoint: "models/model.ckpt"
  from_detection_checkpoint: true
  num_steps: 2000
}

train_input_reader: {
  tf_record_input_reader {
    input_path: "data/train.record"
  }
  label_map_path: "data/label_map.pbtxt"
}

eval_config: {
  num_examples: 272
  num_visualizations: 272
}

eval_input_reader: {
  tf_record_input_reader {
    input_path: "data/test.record"
  }
  label_map_path: "data/label_map.pbtxt"
  shuffle: true
}
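To rule out a config-parsing problem, the sections above can be loaded with the API's own config utilities. A minimal sketch, assuming the sections are saved together in a single file (here hypothetically named pipeline.config, as expected by train.py's --pipeline_config_path flag) and that the object_detection package is on PYTHONPATH:

# Sanity-check sketch: confirm the pipeline config parses under the interpreter
# being used. The file name pipeline.config is an assumption for illustration.
from object_detection.utils import config_util

configs = config_util.get_configs_from_pipeline_file('pipeline.config')
print(configs['model'].faster_rcnn.num_classes)  # expect 1
print(configs['train_config'].batch_size)        # expect 1

If this prints the expected values under both interpreters, the config itself parses fine and the difference lies later in the input pipeline.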

@datitran Can you help with this?
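For context on the error itself: _as_name_list in data_flow_ops.py is the length check that TensorFlow 1.x runs when a queue is constructed with an explicit names list, and it raises this ValueError whenever names and dtypes have different lengths. The stand-alone sketch below trips the same check; it only illustrates what the message means, not necessarily the root cause in train.py:

# Minimal sketch of the check behind the error in the title (TF 1.x queue API).
import tensorflow as tf

try:
    # Two component dtypes but only one name: QueueBase.__init__ calls
    # data_flow_ops._as_name_list(names, dtypes), which raises ValueError
    # because the two lists have different lengths.
    tf.FIFOQueue(capacity=2,
                 dtypes=[tf.float32, tf.int32],
                 names=['image'])  # second component has no name
except ValueError as err:
    print(err)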

itxiud2015 avatar Aug 18 '18 04:08 itxiud2015

Did you solve this?

YuanYunjing avatar May 05 '19 07:05 YuanYunjing