
[Bug] ESPNet网络训练TypeError

Open chen-del opened this issue 1 year ago • 4 comments

While training the ESPNet network, the following error occurred: TypeError: 'tuple' object does not support item assignment.

The log is as follows:

2022-07-27 11:27:25 [INFO] ------------Environment Information-------------
platform: Linux-5.4.0-121-generic-x86_64-with-debian-buster-sid
Python: 3.6.13 |Anaconda, Inc.| (default, Jun 4 2021, 14:25:59) [GCC 7.5.0]
Paddle compiled with cuda: True
NVCC: Build cuda_11.4.r11.4/compiler.30188945_0
cudnn: 8.2
GPUs used: 1
CUDA_VISIBLE_DEVICES: 1
GPU: ['GPU 0: NVIDIA GeForce', 'GPU 1: NVIDIA GeForce']
GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
PaddleSeg: 2.6.0
PaddlePaddle: 2.2.2
OpenCV: 4.4.0

2022-07-27 11:27:26 [INFO] ---------------Config Information---------------
batch_size: 8
iters: 120000
loss:
  coef:
  - 1
  - 1
  types:
  - ignore_index: 255
    type: CrossEntropyLoss
    weight:
    - 2.79834108
    - 6.92945723
    - 3.84068512
lr_scheduler:
  end_lr: 0.0
  learning_rate: 0.001
  power: 0.9
  type: PolynomialDecay
model:
  drop_prob: 0.0
  in_channels: 3
  num_classes: 3
  scale: 2.0
  type: ESPNetV2
optimizer:
  type: adam
  weight_decay: 0.0002
train_dataset:
  dataset_root: /home/dell/SD_4G/chenchao/phone_cig/data
  mode: train
  num_classes: 3
  train_path: /home/dell/SD_4G/chenchao/phone_cig/data/train.txt
  transforms:
  - max_scale_factor: 2.0
    min_scale_factor: 0.5
    scale_step_size: 0.25
    type: ResizeStepScaling
  - target_size:
    - 128
    - 128
    type: Resize
  - crop_size:
    - 112
    - 112
    type: RandomPaddingCrop
  - type: RandomHorizontalFlip
  - brightness_range: 0.5
    contrast_range: 0.5
    saturation_range: 0.5
    type: RandomDistort
  - type: Normalize
  type: Dataset
val_dataset:
  dataset_root: /home/dell/SD_4G/chenchao/phone_cig/data
  mode: val
  num_classes: 3
  transforms:
  - target_size:
    - 128
    - 128
    type: Resize
  - crop_size:
    - 112
    - 112
    type: RandomPaddingCrop
  - type: Normalize
  type: Dataset
  val_path: /home/dell/SD_4G/chenchao/phone_cig/data/val.txt

W0727 11:27:26.441375 25309 device_context.cc:447] Please NOTE: device: 0, GPU Compute Capability: 8.6, Driver API Version: 11.4, Runtime API Version: 11.1
W0727 11:27:26.441407 25309 device_context.cc:465] device: 0, cuDNN Version: 8.2.
/home/dell/anaconda3/envs/d2l/lib/python3.6/site-packages/paddle/nn/layer/norm.py:653: UserWarning: When training, we now always track global mean and variance.
/home/dell/anaconda3/envs/d2l/lib/python3.6/site-packages/paddle/fluid/dygraph/math_op_patch.py:253: UserWarning: The dtype of left and right variables are not the same, left dtype is paddle.int64, but right dtype is paddle.float32, the right dtype will convert to paddle.int64
2022-07-27 11:27:33 [INFO] [TRAIN] epoch: 1, iter: 10/120000, loss: 1.9978, lr: 0.001000, batch_cost: 0.4619, reader_cost: 0.13041, ips: 17.3209 samples/sec | ETA 15:23:39
2022-07-27 11:27:36 [INFO] [TRAIN] epoch: 1, iter: 20/120000, loss: 1.6989, lr: 0.001000, batch_cost: 0.3299, reader_cost: 0.21948, ips: 24.2496 samples/sec | ETA 10:59:41
2022-07-27 11:27:39 [INFO] [TRAIN] epoch: 1, iter: 30/120000, loss: 1.5175, lr: 0.001000, batch_cost: 0.3167, reader_cost: 0.20147, ips: 25.2578 samples/sec | ETA 10:33:18
[... similar [TRAIN] log lines for iters 40-990 (epochs 1-4), with the loss decreasing steadily to ~0.33 ...]
2022-07-27 11:30:13 [INFO] [TRAIN] epoch: 4, iter: 1000/120000, loss: 0.3329, lr: 0.000993, batch_cost: 0.1139, reader_cost: 0.00026, ips: 70.2501 samples/sec | ETA 03:45:51
2022-07-27 11:30:13 [INFO] Start evaluating (total_samples: 777, total_iters: 777)...
Traceback (most recent call last):
  File "train.py", line 230, in <module>
    main(args)
  File "train.py", line 225, in main
    to_static_training=cfg.to_static_training)
  File "/home/dell/bag-torch/darkpull/Paddle/PaddleSeg/paddleseg/core/train.py", line 289, in train
    **test_config)
  File "/home/dell/bag-torch/darkpull/Paddle/PaddleSeg/paddleseg/core/val.py", line 159, in evaluate
    crop_size=crop_size)
  File "/home/dell/bag-torch/darkpull/Paddle/PaddleSeg/paddleseg/core/infer.py", line 169, in inference
    logit = reverse_transform(logit, trans_info, mode='bilinear')
  File "/home/dell/bag-torch/darkpull/Paddle/PaddleSeg/paddleseg/core/infer.py", line 40, in reverse_transform
    pred = F.interpolate(pred, (h, w), mode=mode)
  File "/home/dell/anaconda3/envs/d2l/lib/python3.6/site-packages/paddle/nn/functional/common.py", line 361, in interpolate
    out_shape[i] = dim.numpy()[0]
TypeError: 'tuple' object does not support item assignment
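The last frame of the traceback shows the root cause: out_shape[i] = dim.numpy()[0] performs item assignment on the size sequence passed to interpolate, which is only legal for a list. A minimal stand-alone sketch of the failure:

```python
# Minimal reproduction of the error above: item assignment works on a list
# but raises TypeError on a tuple, which is immutable.
size = (112, 112)
try:
    size[0] = 64                # tuples do not support item assignment
except TypeError as err:
    print(err)                  # 'tuple' object does not support item assignment

size = [112, 112]
size[0] = 64                    # lists support item assignment
print(size)                     # [64, 112]
```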

chen-del · Jul 27 '22

Hello! This is a known bug in PaddlePaddle 2.2. Please either upgrade to PaddlePaddle 2.3, or rewrite pred = F.interpolate(pred, (h, w), mode=mode) at the reported location as pred = F.interpolate(pred, [h, w], mode=mode) and try again.
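To see why swapping the tuple for a list helps, here is a hypothetical stand-in for the size handling inside paddle.nn.functional.interpolate on Paddle 2.2 (a deliberate simplification, not the real implementation): the sequence is mutated in place instead of being copied first.

```python
def interpolate_size_check(size):
    # Hypothetical simplification of Paddle 2.2's internal size handling:
    # the incoming sequence is mutated in place rather than copied.
    out_shape = size
    for i, dim in enumerate(out_shape):
        out_shape[i] = int(dim)     # raises TypeError when `size` is a tuple
    return out_shape

try:
    interpolate_size_check((112, 112))      # the (h, w) tuple from infer.py
except TypeError as err:
    print(err)                              # 'tuple' object does not support item assignment

print(interpolate_size_check([112, 112]))   # the [h, w] fix: prints [112, 112]
```

On PaddlePaddle 2.3 the bug is fixed, so both forms are accepted there.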

Bobholamovic · Jul 28 '22

OK, thank you!

chen-del · Jul 28 '22

How can I adjust the logging interval? It seems to log every 10 iters by default, which is too frequent. Also, I can't find where to set the eval interval; mine evaluates every 500 iters by default.

LLsmile · Aug 25 '22

When training, you can pass the --log_iters option to adjust the logging interval, and the --save_interval option to adjust the evaluation interval.
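For example, a sketch of a train.py invocation using those options (the config path is a placeholder; substitute your own config file):

```shell
# Print a [TRAIN] log line every 100 iters instead of the default 10,
# and save a checkpoint / run evaluation every 2000 iters.
python train.py \
    --config path/to/your_espnetv2_config.yml \
    --do_eval \
    --log_iters 100 \
    --save_interval 2000
```

Evaluation only runs at each save interval when --do_eval is passed.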

Bobholamovic · Aug 25 '22