Deeplab-v3plus
What results should I expect on Cityscapes?
- Thanks for your code.
- Now I want to train the Cityscapes model. What results should I expect on this dataset? Thanks again!
These are my logs for Cityscapes. The only change compared to the source code is that I converted the dataset's type and encoding myself, so maybe something is wrong there?
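Since you mention re-encoding the dataset yourself, one common pitfall worth checking is the Cityscapes label encoding: the raw `labelIds` PNGs use values 0–33, but evaluation normally runs on 19 train classes plus an ignore index. A minimal sketch using the standard cityscapesScripts labelId→trainId mapping (the ignore index of 255 is an assumption here — with `num_classes 20` this repo may instead fold void pixels into class 19):

```python
import numpy as np

# Standard Cityscapes labelId -> trainId mapping (the 19 evaluated classes).
# Any labelId not listed is "void" and is sent to the ignore index.
ID_TO_TRAINID = {7: 0, 8: 1, 11: 2, 12: 3, 13: 4, 17: 5, 19: 6, 20: 7,
                 21: 8, 22: 9, 23: 10, 24: 11, 25: 12, 26: 13, 27: 14,
                 28: 15, 31: 16, 32: 17, 33: 18}
IGNORE_INDEX = 255  # assumption: this repo may use 19 instead, given num_classes 20

def encode_target(mask: np.ndarray) -> np.ndarray:
    """Convert a labelId mask (values 0-33) to trainIds via a lookup table."""
    lut = np.full(256, IGNORE_INDEX, dtype=np.uint8)
    for label_id, train_id in ID_TO_TRAINID.items():
        lut[label_id] = train_id
    return lut[mask]
```

If the mapping (or the ignore index) disagrees with what the loss and metric code expect, you get exactly this kind of plateauing MIoU.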
2018-12-12 08:30:37,474 - root - INFO - Global configuration as follows:
2018-12-12 08:30:37,474 - root - INFO - data_root_path /home/madongliang/dataset/cityscape
2018-12-12 08:30:37,474 - root - INFO - checkpoint_dir /home/madongliang/Deeplab-v3plus-master/checkpoints/
2018-12-12 08:30:37,474 - root - INFO - result_filepath /home/madongliang/Deeplab-v3plus-master/Results/
2018-12-12 08:30:37,474 - root - INFO - loss_weights_dir /data/linhua/Cityscapes/pretrained_weights/
2018-12-12 08:30:37,474 - root - INFO - backbone resnet101
2018-12-12 08:30:37,474 - root - INFO - output_stride 16
2018-12-12 08:30:37,475 - root - INFO - bn_momentum 0.1
2018-12-12 08:30:37,475 - root - INFO - imagenet_pretrained True
2018-12-12 08:30:37,475 - root - INFO - pretrained_ckpt_file None
2018-12-12 08:30:37,475 - root - INFO - save_ckpt_file True
2018-12-12 08:30:37,475 - root - INFO - store_result_mask True
2018-12-12 08:30:37,475 - root - INFO - loss_weight_file None
2018-12-12 08:30:37,475 - root - INFO - validation True
2018-12-12 08:30:37,475 - root - INFO - gpu 4,5
2018-12-12 08:30:37,475 - root - INFO - batch_size_per_gpu 4
2018-12-12 08:30:37,475 - root - INFO - dataset cityscapes
2018-12-12 08:30:37,475 - root - INFO - base_size 513
2018-12-12 08:30:37,475 - root - INFO - crop_size 513
2018-12-12 08:30:37,475 - root - INFO - num_classes 20
2018-12-12 08:30:37,475 - root - INFO - data_loader_workers 4
2018-12-12 08:30:37,475 - root - INFO - pin_memory 2
2018-12-12 08:30:37,475 - root - INFO - split train
2018-12-12 08:30:37,476 - root - INFO - freeze_bn False
2018-12-12 08:30:37,476 - root - INFO - momentum 0.9
2018-12-12 08:30:37,476 - root - INFO - dampening 0
2018-12-12 08:30:37,476 - root - INFO - nesterov True
2018-12-12 08:30:37,476 - root - INFO - weight_decay 4e-05
2018-12-12 08:30:37,476 - root - INFO - lr 0.007
2018-12-12 08:30:37,476 - root - INFO - iter_max 30000
2018-12-12 08:30:37,476 - root - INFO - poly_power 0.9
2018-12-12 08:30:37,476 - root - INFO - batch_size 8
2018-12-12 08:30:37,476 - root - INFO - This model will run on Tesla K80
2018-12-12 08:30:37,480 - root - INFO - Training one epoch...
2018-12-12 08:30:48,040 - root - INFO - The train loss of epoch0-batch-0:2.967555284500122
2018-12-12 08:34:55,054 - root - INFO - The train loss of epoch0-batch-50:1.7139363288879395
2018-12-12 08:39:03,495 - root - INFO - The train loss of epoch0-batch-100:0.32207193970680237
2018-12-12 08:43:07,442 - root - INFO - The train loss of epoch0-batch-150:0.3000704050064087
2018-12-12 08:47:12,085 - root - INFO - The train loss of epoch0-batch-200:0.3226872384548187
2018-12-12 08:51:15,318 - root - INFO - The train loss of epoch0-batch-250:0.4542790651321411
2018-12-12 08:55:20,617 - root - INFO - The train loss of epoch0-batch-300:0.39157959818840027
2018-12-12 08:59:32,327 - root - INFO - The train loss of epoch0-batch-350:0.7034360766410828
2018-12-12 09:01:10,408 - root - INFO - Epoch:0, train PA1:0.6961429151544253, MPA1:0.2964517337310112, MIoU1:0.2171484299281068, FWIoU1:0.5021588740277522
2018-12-12 09:01:10,417 - root - INFO - validating one epoch...
2018-12-12 09:04:24,177 - root - INFO - Epoch:0, validation PA1:0.7620283157921703, MPA1:0.3442206272334756, MIoU1:0.2527125212548649, FWIoU1:0.595015360040294
2018-12-12 09:04:24,178 - root - INFO - The average loss of val loss:0.270036471226523
2018-12-12 09:04:24,188 - root - INFO - =>saving a new best checkpoint...
2018-12-12 09:04:28,848 - root - INFO - Training one epoch...
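For reference, with `lr 0.007`, `iter_max 30000` and `poly_power 0.9` above, the learning rate should follow the standard DeepLab "poly" policy. A sketch of that schedule (the formula is the usual one from the DeepLab papers; this repo's exact scheduler code may differ slightly):

```python
def poly_lr(base_lr: float, cur_iter: int, max_iter: int = 30000, power: float = 0.9) -> float:
    """Standard DeepLab 'poly' policy: base_lr * (1 - cur_iter/max_iter) ** power."""
    return base_lr * (1.0 - cur_iter / max_iter) ** power
```

With ~394 iterations per epoch at batch size 8, epoch 23 sits around iteration 9,400, so the LR is still above 0.005 — the schedule is nowhere near finished, which is one reason to let training run the full 30k iterations before judging MIoU.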
2018-12-12 09:04:36,472 - root - INFO - The train loss of epoch1-batch-0:0.19255661964416504
2018-12-12 09:08:40,959 - root - INFO - The train loss of epoch1-batch-50:0.26767876744270325
2018-12-12 09:12:45,228 - root - INFO - The train loss of epoch1-batch-100:0.2945781946182251
2018-12-12 09:16:49,031 - root - INFO - The train loss of epoch1-batch-150:0.25916364789009094
2018-12-12 09:20:54,007 - root - INFO - The train loss of epoch1-batch-200:0.3824508786201477
2018-12-12 09:24:58,694 - root - INFO - The train loss of epoch1-batch-250:0.2349158525466919
2018-12-12 09:29:02,170 - root - INFO - The train loss of epoch1-batch-300:0.1482490748167038
2018-12-12 09:33:06,683 - root - INFO - The train loss of epoch1-batch-350:0.1711643487215042
2018-12-12 09:34:45,053 - root - INFO - Epoch:1, train PA1:0.7243446144631995, MPA1:0.4024623107756419, MIoU1:0.2908480331019494, FWIoU1:0.5419410488335485
2018-12-12 09:34:45,060 - root - INFO - validating one epoch...
2018-12-12 09:37:52,556 - root - INFO - Epoch:1, validation PA1:0.7676819102749993, MPA1:0.4514797728935058, MIoU1:0.3257059723220806, FWIoU1:0.6096718711261555
2018-12-12 09:37:52,557 - root - INFO - The average loss of val loss:0.24240685807120416
2018-12-12 09:37:52,563 - root - INFO - =>saving a new best checkpoint...
2018-12-12 09:37:57,288 - root - INFO - Training one epoch...
2018-12-12 09:38:03,999 - root - INFO - The train loss of epoch2-batch-0:0.36900585889816284
2018-12-12 09:42:08,191 - root - INFO - The train loss of epoch2-batch-50:0.47939547896385193
2018-12-12 09:46:11,951 - root - INFO - The train loss of epoch2-batch-100:0.28905612230300903
2018-12-12 09:50:16,129 - root - INFO - The train loss of epoch2-batch-150:0.34111642837524414
2018-12-12 09:54:19,955 - root - INFO - The train loss of epoch2-batch-200:0.18281413614749908
2018-12-12 09:58:23,992 - root - INFO - The train loss of epoch2-batch-250:0.17060332000255585
2018-12-12 10:02:28,118 - root - INFO - The train loss of epoch2-batch-300:0.14751966297626495
2018-12-12 10:06:32,182 - root - INFO - The train loss of epoch2-batch-350:0.2019372582435608
2018-12-12 10:08:09,712 - root - INFO - Epoch:2, train PA1:0.7257760388953635, MPA1:0.4462834653178119, MIoU1:0.3182599883034509, FWIoU1:0.5439313017847972
2018-12-12 10:08:09,719 - root - INFO - validating one epoch...
2018-12-12 10:11:18,722 - root - INFO - Epoch:2, validation PA1:0.7665807841618761, MPA1:0.47441376477404834, MIoU1:0.33390306124894403, FWIoU1:0.6138124519062239
2018-12-12 10:11:18,722 - root - INFO - The average loss of val loss:0.2288030740474501
2018-12-12 10:11:18,731 - root - INFO - =>saving a new best checkpoint...
2018-12-12 10:11:23,507 - root - INFO - Training one epoch...
2018-12-12 10:11:30,239 - root - INFO - The train loss of epoch3-batch-0:0.3516726493835449
2018-12-12 10:15:39,260 - root - INFO - The train loss of epoch3-batch-50:0.23037758469581604
2018-12-12 10:19:44,211 - root - INFO - The train loss of epoch3-batch-100:0.26938068866729736
2018-12-12 10:23:47,616 - root - INFO - The train loss of epoch3-batch-150:0.18850867450237274
2018-12-12 10:27:51,499 - root - INFO - The train loss of epoch3-batch-200:0.16736112534999847
2018-12-12 10:31:55,123 - root - INFO - The train loss of epoch3-batch-250:0.20959851145744324
2018-12-12 10:35:59,370 - root - INFO - The train loss of epoch3-batch-300:0.19244052469730377
2018-12-12 10:40:03,204 - root - INFO - The train loss of epoch3-batch-350:0.21037077903747559
2018-12-12 10:41:41,505 - root - INFO - Epoch:3, train PA1:0.7387263289834183, MPA1:0.5085308956928948, MIoU1:0.35350845723256, FWIoU1:0.56346734387551
2018-12-12 10:41:41,512 - root - INFO - validating one epoch...
2018-12-12 10:44:50,492 - root - INFO - Epoch:3, validation PA1:0.7828036939099234, MPA1:0.48733599851914894, MIoU1:0.364988437955047, FWIoU1:0.6253185880739436
2018-12-12 10:44:50,492 - root - INFO - The average loss of val loss:0.18276182886573575
2018-12-12 10:44:50,497 - root - INFO - =>saving a new best checkpoint...
2018-12-12 10:44:55,427 - root - INFO - Training one epoch...
2018-12-12 10:45:02,355 - root - INFO - The train loss of epoch4-batch-0:0.19077648222446442
2018-12-12 10:49:05,664 - root - INFO - The train loss of epoch4-batch-50:0.13993798196315765
2018-12-12 10:53:08,714 - root - INFO - The train loss of epoch4-batch-100:0.42489537596702576
2018-12-12 10:57:12,858 - root - INFO - The train loss of epoch4-batch-150:0.21982112526893616
2018-12-12 11:01:16,680 - root - INFO - The train loss of epoch4-batch-200:0.16212211549282074
2018-12-12 11:05:20,543 - root - INFO - The train loss of epoch4-batch-250:0.1435106247663498
2018-12-12 11:09:25,825 - root - INFO - The train loss of epoch4-batch-300:0.14802013337612152
2018-12-12 11:13:29,602 - root - INFO - The train loss of epoch4-batch-350:0.1909000426530838
2018-12-12 11:15:07,652 - root - INFO - Epoch:4, train PA1:0.7399424441054087, MPA1:0.5092442590047261, MIoU1:0.3608482835912087, FWIoU1:0.5668345052249013
2018-12-12 11:15:07,659 - root - INFO - validating one epoch...
2018-12-12 11:18:14,123 - root - INFO - Epoch:4, validation PA1:0.7778976182850245, MPA1:0.5241130458041244, MIoU1:0.3728968481175001, FWIoU1:0.6221707257229833
2018-12-12 11:18:14,123 - root - INFO - The average loss of val loss:0.19552190577791584
2018-12-12 11:18:14,129 - root - INFO - =>saving a new best checkpoint...
2018-12-12 11:18:18,885 - root - INFO - Training one epoch...
2018-12-12 11:18:25,696 - root - INFO - The train loss of epoch5-batch-0:0.2944430112838745
2018-12-12 11:22:31,576 - root - INFO - The train loss of epoch5-batch-50:0.16908475756645203
2018-12-12 11:26:39,656 - root - INFO - The train loss of epoch5-batch-100:0.15660764276981354
2018-12-12 11:30:43,757 - root - INFO - The train loss of epoch5-batch-150:0.12665751576423645
2018-12-12 11:34:48,560 - root - INFO - The train loss of epoch5-batch-200:0.15401440858840942
2018-12-12 11:38:52,539 - root - INFO - The train loss of epoch5-batch-250:0.1719735562801361
2018-12-12 11:42:57,035 - root - INFO - The train loss of epoch5-batch-300:0.1488262414932251
2018-12-12 11:47:01,411 - root - INFO - The train loss of epoch5-batch-350:0.4134576618671417
2018-12-12 11:48:39,039 - root - INFO - Epoch:5, train PA1:0.7441030495925471, MPA1:0.5298351630921783, MIoU1:0.3675112742931031, FWIoU1:0.5733233480628069
2018-12-12 11:48:39,045 - root - INFO - validating one epoch...
2018-12-12 11:51:46,266 - root - INFO - Epoch:5, validation PA1:0.7778805573114492, MPA1:0.5340624758528658, MIoU1:0.38067553366009743, FWIoU1:0.6231288737632603
2018-12-12 11:51:46,267 - root - INFO - The average loss of val loss:0.1951838854339815
2018-12-12 11:51:46,275 - root - INFO - =>saving a new best checkpoint...
2018-12-12 11:51:51,024 - root - INFO - Training one epoch...
2018-12-12 11:51:57,524 - root - INFO - The train loss of epoch6-batch-0:0.13431376218795776
2018-12-12 11:56:01,030 - root - INFO - The train loss of epoch6-batch-50:0.27371877431869507
2018-12-12 12:00:05,885 - root - INFO - The train loss of epoch6-batch-100:0.15458866953849792
2018-12-12 12:04:09,443 - root - INFO - The train loss of epoch6-batch-150:0.25599223375320435
2018-12-12 12:08:13,583 - root - INFO - The train loss of epoch6-batch-200:0.2845357656478882
2018-12-12 12:12:18,826 - root - INFO - The train loss of epoch6-batch-250:0.17147934436798096
2018-12-12 12:16:22,917 - root - INFO - The train loss of epoch6-batch-300:0.21237286925315857
2018-12-12 12:20:27,596 - root - INFO - The train loss of epoch6-batch-350:0.13730943202972412
2018-12-12 12:22:05,592 - root - INFO - Epoch:6, train PA1:0.7436951595952623, MPA1:0.5656008178188213, MIoU1:0.3883604468050831, FWIoU1:0.5736035131273415
2018-12-12 12:22:05,599 - root - INFO - validating one epoch...
2018-12-12 12:25:13,583 - root - INFO - Epoch:6, validation PA1:0.7825102099239799, MPA1:0.5805938068783476, MIoU1:0.3735068275098031, FWIoU1:0.637374022748693
2018-12-12 12:25:13,584 - root - INFO - The average loss of val loss:0.1748530767377346
2018-12-12 12:25:13,589 - root - INFO - => The MIoU of val does't improve.
2018-12-12 12:25:13,590 - root - INFO - Training one epoch...
2018-12-12 12:25:19,994 - root - INFO - The train loss of epoch7-batch-0:0.1332854926586151
2018-12-12 12:29:24,135 - root - INFO - The train loss of epoch7-batch-50:0.19817699491977692
2018-12-12 12:33:29,196 - root - INFO - The train loss of epoch7-batch-100:0.15467903017997742
2018-12-12 12:37:38,969 - root - INFO - The train loss of epoch7-batch-150:0.17758919298648834
2018-12-12 12:41:44,026 - root - INFO - The train loss of epoch7-batch-200:0.14945237338542938
2018-12-12 12:45:48,174 - root - INFO - The train loss of epoch7-batch-250:0.17976956069469452
2018-12-12 12:49:51,407 - root - INFO - The train loss of epoch7-batch-300:0.14113058149814606
2018-12-12 12:53:55,676 - root - INFO - The train loss of epoch7-batch-350:0.20184843242168427
2018-12-12 12:55:33,268 - root - INFO - Epoch:7, train PA1:0.7365142909971895, MPA1:0.5665890981895574, MIoU1:0.3853398830101107, FWIoU1:0.5634804123192997
2018-12-12 12:55:33,277 - root - INFO - validating one epoch...
2018-12-12 12:58:39,822 - root - INFO - Epoch:7, validation PA1:0.7721740791732137, MPA1:0.5149064980552795, MIoU1:0.3468386088304519, FWIoU1:0.622016192951382
2018-12-12 12:58:39,822 - root - INFO - The average loss of val loss:0.23275645341603987
2018-12-12 12:58:39,829 - root - INFO - => The MIoU of val does't improve.
2018-12-12 12:58:39,830 - root - INFO - Training one epoch...
2018-12-12 12:58:47,050 - root - INFO - The train loss of epoch8-batch-0:0.18144361674785614
2018-12-12 13:02:50,295 - root - INFO - The train loss of epoch8-batch-50:0.5553913712501526
2018-12-12 13:06:54,773 - root - INFO - The train loss of epoch8-batch-100:0.14186836779117584
2018-12-12 13:10:57,903 - root - INFO - The train loss of epoch8-batch-150:0.13776084780693054
2018-12-12 13:15:02,152 - root - INFO - The train loss of epoch8-batch-200:0.15580809116363525
2018-12-12 13:19:06,501 - root - INFO - The train loss of epoch8-batch-250:0.13281050324440002
2018-12-12 13:23:10,072 - root - INFO - The train loss of epoch8-batch-300:0.1861911118030548
2018-12-12 13:27:13,935 - root - INFO - The train loss of epoch8-batch-350:0.09137999266386032
2018-12-12 13:28:52,447 - root - INFO - Epoch:8, train PA1:0.7448184487827552, MPA1:0.5742276091810926, MIoU1:0.4088299140147463, FWIoU1:0.5773657988259693
2018-12-12 13:28:52,454 - root - INFO - validating one epoch...
2018-12-12 13:31:59,788 - root - INFO - Epoch:8, validation PA1:0.7850513756706564, MPA1:0.6001979142170123, MIoU1:0.40461763838737413, FWIoU1:0.6353307470346676
2018-12-12 13:31:59,789 - root - INFO - The average loss of val loss:0.17127128534259334
2018-12-12 13:31:59,794 - root - INFO - =>saving a new best checkpoint...
2018-12-12 13:32:04,556 - root - INFO - Training one epoch...
2018-12-12 13:32:10,929 - root - INFO - The train loss of epoch9-batch-0:0.22012943029403687
2018-12-12 13:36:14,370 - root - INFO - The train loss of epoch9-batch-50:0.12105568498373032
2018-12-12 13:40:17,590 - root - INFO - The train loss of epoch9-batch-100:0.10804308950901031
2018-12-12 13:44:22,003 - root - INFO - The train loss of epoch9-batch-150:0.16024501621723175
2018-12-12 13:48:28,911 - root - INFO - The train loss of epoch9-batch-200:0.11594179272651672
2018-12-12 13:52:36,001 - root - INFO - The train loss of epoch9-batch-250:0.14700548350811005
2018-12-12 13:56:40,351 - root - INFO - The train loss of epoch9-batch-300:0.11082658171653748
2018-12-12 14:00:43,840 - root - INFO - The train loss of epoch9-batch-350:0.2100842446088791
2018-12-12 14:02:21,834 - root - INFO - Epoch:9, train PA1:0.7446674512464954, MPA1:0.5950627305622522, MIoU1:0.40180132937654867, FWIoU1:0.5802068647333234
2018-12-12 14:02:21,841 - root - INFO - validating one epoch...
2018-12-12 14:05:29,077 - root - INFO - Epoch:9, validation PA1:0.7859159924096364, MPA1:0.5837228713724285, MIoU1:0.37030984697401215, FWIoU1:0.6377302567273447
2018-12-12 14:05:29,078 - root - INFO - The average loss of val loss:0.17647860571742058
2018-12-12 14:05:29,085 - root - INFO - => The MIoU of val does't improve.
2018-12-12 14:05:29,086 - root - INFO - Training one epoch...
2018-12-12 14:05:36,036 - root - INFO - The train loss of epoch10-batch-0:0.1366995871067047
2018-12-12 14:09:40,254 - root - INFO - The train loss of epoch10-batch-50:0.259868860244751
2018-12-12 14:13:45,320 - root - INFO - The train loss of epoch10-batch-100:0.11964826285839081
2018-12-12 14:17:51,244 - root - INFO - The train loss of epoch10-batch-150:0.16944590210914612
2018-12-12 14:21:55,214 - root - INFO - The train loss of epoch10-batch-200:0.0987149327993393
2018-12-12 14:25:58,684 - root - INFO - The train loss of epoch10-batch-250:0.10562938451766968
2018-12-12 14:30:03,890 - root - INFO - The train loss of epoch10-batch-300:0.1157590001821518
2018-12-12 14:34:07,081 - root - INFO - The train loss of epoch10-batch-350:0.16231690347194672
2018-12-12 14:35:45,188 - root - INFO - Epoch:10, train PA1:0.7510362052613563, MPA1:0.6005354245656486, MIoU1:0.4267396381910575, FWIoU1:0.5865631693507771
2018-12-12 14:35:45,196 - root - INFO - validating one epoch...
2018-12-12 14:38:52,556 - root - INFO - Epoch:10, validation PA1:0.7832714265909592, MPA1:0.5828901800751862, MIoU1:0.38505385813641696, FWIoU1:0.6367782506444297
2018-12-12 14:38:52,556 - root - INFO - The average loss of val loss:0.1969949165659566
2018-12-12 14:38:52,562 - root - INFO - => The MIoU of val does't improve.
2018-12-12 14:38:52,563 - root - INFO - Training one epoch...
2018-12-12 14:38:59,189 - root - INFO - The train loss of epoch11-batch-0:0.13801497220993042
2018-12-12 14:43:03,803 - root - INFO - The train loss of epoch11-batch-50:0.16296780109405518
2018-12-12 14:47:08,295 - root - INFO - The train loss of epoch11-batch-100:0.17280197143554688
2018-12-12 14:51:11,779 - root - INFO - The train loss of epoch11-batch-150:0.11877284198999405
2018-12-12 14:55:15,457 - root - INFO - The train loss of epoch11-batch-200:0.14307524263858795
2018-12-12 14:59:22,959 - root - INFO - The train loss of epoch11-batch-250:0.2825978398323059
2018-12-12 15:03:27,903 - root - INFO - The train loss of epoch11-batch-300:0.1422901153564453
2018-12-12 15:07:30,913 - root - INFO - The train loss of epoch11-batch-350:0.12340075522661209
2018-12-12 15:09:09,259 - root - INFO - Epoch:11, train PA1:0.7456987351521905, MPA1:0.628475778664425, MIoU1:0.4368654341556712, FWIoU1:0.5794185516202887
2018-12-12 15:09:09,266 - root - INFO - validating one epoch...
2018-12-12 15:12:17,291 - root - INFO - Epoch:11, validation PA1:0.7860297807529296, MPA1:0.5738895722405126, MIoU1:0.3860211775174831, FWIoU1:0.639338946818815
2018-12-12 15:12:17,291 - root - INFO - The average loss of val loss:0.20049891092123523
2018-12-12 15:12:17,296 - root - INFO - => The MIoU of val does't improve.
2018-12-12 15:12:17,297 - root - INFO - Training one epoch...
2018-12-12 15:12:24,218 - root - INFO - The train loss of epoch12-batch-0:0.1474299281835556
2018-12-12 15:16:28,946 - root - INFO - The train loss of epoch12-batch-50:0.1759689450263977
2018-12-12 15:20:34,309 - root - INFO - The train loss of epoch12-batch-100:0.16098175942897797
2018-12-12 15:24:37,249 - root - INFO - The train loss of epoch12-batch-150:0.1516423374414444
2018-12-12 15:28:41,019 - root - INFO - The train loss of epoch12-batch-200:0.11246465891599655
2018-12-12 15:32:45,484 - root - INFO - The train loss of epoch12-batch-250:0.20294906198978424
2018-12-12 15:36:50,714 - root - INFO - The train loss of epoch12-batch-300:0.09759247303009033
2018-12-12 15:40:55,340 - root - INFO - The train loss of epoch12-batch-350:0.10318053513765335
2018-12-12 15:42:33,678 - root - INFO - Epoch:12, train PA1:0.7487231642598268, MPA1:0.6276552244444319, MIoU1:0.4322473274092638, FWIoU1:0.5862980413440746
2018-12-12 15:42:33,686 - root - INFO - validating one epoch...
2018-12-12 15:45:41,115 - root - INFO - Epoch:12, validation PA1:0.7836202150978906, MPA1:0.5991113187830577, MIoU1:0.379619132506495, FWIoU1:0.6375642574080632
2018-12-12 15:45:41,116 - root - INFO - The average loss of val loss:0.2384076714515686
2018-12-12 15:45:41,122 - root - INFO - => The MIoU of val does't improve.
2018-12-12 15:45:41,123 - root - INFO - Training one epoch...
2018-12-12 15:45:47,681 - root - INFO - The train loss of epoch13-batch-0:0.19551491737365723
2018-12-12 15:49:51,125 - root - INFO - The train loss of epoch13-batch-50:0.0849480852484703
2018-12-12 15:53:55,565 - root - INFO - The train loss of epoch13-batch-100:0.1918475329875946
2018-12-12 15:57:58,789 - root - INFO - The train loss of epoch13-batch-150:0.5183513760566711
2018-12-12 16:02:02,255 - root - INFO - The train loss of epoch13-batch-200:0.20539335906505585
2018-12-12 16:06:07,586 - root - INFO - The train loss of epoch13-batch-250:0.24992644786834717
2018-12-12 16:10:15,046 - root - INFO - The train loss of epoch13-batch-300:0.2601530849933624
2018-12-12 16:14:21,010 - root - INFO - The train loss of epoch13-batch-350:0.11010094732046127
2018-12-12 16:15:59,925 - root - INFO - Epoch:13, train PA1:0.7481008470580007, MPA1:0.6273226783265436, MIoU1:0.43285562796436183, FWIoU1:0.5822019131478814
2018-12-12 16:15:59,932 - root - INFO - validating one epoch...
2018-12-12 16:19:06,487 - root - INFO - Epoch:13, validation PA1:0.7721207511817195, MPA1:0.5445812982900113, MIoU1:0.3646489108121691, FWIoU1:0.6183142770767718
2018-12-12 16:19:06,487 - root - INFO - The average loss of val loss:0.2570755943175285
2018-12-12 16:19:06,495 - root - INFO - => The MIoU of val does't improve.
2018-12-12 16:19:06,496 - root - INFO - Training one epoch...
2018-12-12 16:19:13,381 - root - INFO - The train loss of epoch14-batch-0:0.27512699365615845
2018-12-12 16:23:17,111 - root - INFO - The train loss of epoch14-batch-50:0.08512695133686066
2018-12-12 16:27:21,343 - root - INFO - The train loss of epoch14-batch-100:0.14758627116680145
2018-12-12 16:31:24,465 - root - INFO - The train loss of epoch14-batch-150:0.1400640606880188
2018-12-12 16:35:27,985 - root - INFO - The train loss of epoch14-batch-200:0.12247773259878159
2018-12-12 16:39:31,926 - root - INFO - The train loss of epoch14-batch-250:0.24704785645008087
2018-12-12 16:43:36,176 - root - INFO - The train loss of epoch14-batch-300:0.1217755377292633
2018-12-12 16:47:40,219 - root - INFO - The train loss of epoch14-batch-350:0.09922905266284943
2018-12-12 16:49:17,891 - root - INFO - Epoch:14, train PA1:0.7448647407645436, MPA1:0.6378210875323885, MIoU1:0.44631451453242316, FWIoU1:0.5774980543292648
2018-12-12 16:49:17,899 - root - INFO - validating one epoch...
2018-12-12 16:52:25,389 - root - INFO - Epoch:14, validation PA1:0.7857888280179093, MPA1:0.5812158586937558, MIoU1:0.38825581381391244, FWIoU1:0.6385556855109527
2018-12-12 16:52:25,389 - root - INFO - The average loss of val loss:0.18004687478946102
2018-12-12 16:52:25,395 - root - INFO - => The MIoU of val does't improve.
2018-12-12 16:52:25,396 - root - INFO - Training one epoch...
2018-12-12 16:52:31,689 - root - INFO - The train loss of epoch15-batch-0:0.10235660523176193
2018-12-12 16:56:35,611 - root - INFO - The train loss of epoch15-batch-50:0.10438527166843414
2018-12-12 17:00:39,643 - root - INFO - The train loss of epoch15-batch-100:0.10802784562110901
2018-12-12 17:04:45,336 - root - INFO - The train loss of epoch15-batch-150:0.11742466688156128
2018-12-12 17:08:50,119 - root - INFO - The train loss of epoch15-batch-200:0.12528640031814575
2018-12-12 17:12:54,521 - root - INFO - The train loss of epoch15-batch-250:0.14107276499271393
2018-12-12 17:16:59,799 - root - INFO - The train loss of epoch15-batch-300:0.2517237961292267
2018-12-12 17:21:07,332 - root - INFO - The train loss of epoch15-batch-350:0.12957598268985748
2018-12-12 17:22:48,556 - root - INFO - Epoch:15, train PA1:0.7509176100639173, MPA1:0.6557766474826996, MIoU1:0.45517846615460017, FWIoU1:0.5858046414246652
2018-12-12 17:22:48,562 - root - INFO - validating one epoch...
2018-12-12 17:25:55,680 - root - INFO - Epoch:15, validation PA1:0.7593563313724935, MPA1:0.5538002012110784, MIoU1:0.3635363148360324, FWIoU1:0.600875569887106
2018-12-12 17:25:55,680 - root - INFO - The average loss of val loss:0.301701687396534
2018-12-12 17:25:55,716 - root - INFO - => The MIoU of val does't improve.
2018-12-12 17:25:55,717 - root - INFO - Training one epoch...
2018-12-12 17:26:02,312 - root - INFO - The train loss of epoch16-batch-0:0.12597505748271942
2018-12-12 17:30:06,601 - root - INFO - The train loss of epoch16-batch-50:0.15957960486412048
2018-12-12 17:34:11,791 - root - INFO - The train loss of epoch16-batch-100:0.1564416140317917
2018-12-12 17:38:16,257 - root - INFO - The train loss of epoch16-batch-150:0.16271670162677765
2018-12-12 17:42:19,710 - root - INFO - The train loss of epoch16-batch-200:0.12289201468229294
2018-12-12 17:46:23,401 - root - INFO - The train loss of epoch16-batch-250:0.4152675271034241
2018-12-12 17:50:26,882 - root - INFO - The train loss of epoch16-batch-300:0.2786383628845215
2018-12-12 17:54:31,346 - root - INFO - The train loss of epoch16-batch-350:0.11129313707351685
2018-12-12 17:56:09,145 - root - INFO - Epoch:16, train PA1:0.7484140380866224, MPA1:0.6309229965599981, MIoU1:0.43459043141463605, FWIoU1:0.5813231850420914
2018-12-12 17:56:09,155 - root - INFO - validating one epoch...
2018-12-12 17:59:17,287 - root - INFO - Epoch:16, validation PA1:0.7856721208461777, MPA1:0.58692626627987, MIoU1:0.3928136735167102, FWIoU1:0.640935820124574
2018-12-12 17:59:17,287 - root - INFO - The average loss of val loss:0.19369546011570962
2018-12-12 17:59:17,293 - root - INFO - => The MIoU of val does't improve.
2018-12-12 17:59:17,294 - root - INFO - Training one epoch...
2018-12-12 17:59:24,083 - root - INFO - The train loss of epoch17-batch-0:0.10499657690525055
2018-12-12 18:03:29,265 - root - INFO - The train loss of epoch17-batch-50:0.13975876569747925
2018-12-12 18:07:33,469 - root - INFO - The train loss of epoch17-batch-100:0.10341392457485199
2018-12-12 18:11:37,996 - root - INFO - The train loss of epoch17-batch-150:0.11973337829113007
2018-12-12 18:15:42,198 - root - INFO - The train loss of epoch17-batch-200:0.11577173322439194
2018-12-12 18:19:46,780 - root - INFO - The train loss of epoch17-batch-250:0.11638207733631134
2018-12-12 18:23:51,054 - root - INFO - The train loss of epoch17-batch-300:0.10816638916730881
2018-12-12 18:27:56,799 - root - INFO - The train loss of epoch17-batch-350:0.10213184356689453
2018-12-12 18:29:35,198 - root - INFO - Epoch:17, train PA1:0.7489157347047826, MPA1:0.6677796221130278, MIoU1:0.45490978740112664, FWIoU1:0.583402058223672
2018-12-12 18:29:35,205 - root - INFO - validating one epoch...
2018-12-12 18:32:49,828 - root - INFO - Epoch:17, validation PA1:0.7891089072654037, MPA1:0.5942698037089491, MIoU1:0.4180872123096087, FWIoU1:0.6423813058749204
2018-12-12 18:32:49,828 - root - INFO - The average loss of val loss:0.1717770276290755
2018-12-12 18:32:49,833 - root - INFO - =>saving a new best checkpoint...
2018-12-12 18:32:54,664 - root - INFO - Training one epoch...
2018-12-12 18:33:01,261 - root - INFO - The train loss of epoch18-batch-0:0.11123668402433395
2018-12-12 18:37:05,460 - root - INFO - The train loss of epoch18-batch-50:0.09526129066944122
2018-12-12 18:41:09,215 - root - INFO - The train loss of epoch18-batch-100:0.12198017537593842
2018-12-12 18:45:13,359 - root - INFO - The train loss of epoch18-batch-150:0.11738774180412292
2018-12-12 18:49:18,332 - root - INFO - The train loss of epoch18-batch-200:0.1836605817079544
2018-12-12 18:53:22,066 - root - INFO - The train loss of epoch18-batch-250:0.10954441130161285
2018-12-12 18:57:25,974 - root - INFO - The train loss of epoch18-batch-300:0.13928654789924622
2018-12-12 19:01:30,911 - root - INFO - The train loss of epoch18-batch-350:0.14673884212970734
2018-12-12 19:03:09,154 - root - INFO - Epoch:18, train PA1:0.7519732971850798, MPA1:0.6562775678475367, MIoU1:0.4480403241073014, FWIoU1:0.5862776567490043
2018-12-12 19:03:09,164 - root - INFO - validating one epoch...
2018-12-12 19:06:16,090 - root - INFO - Epoch:18, validation PA1:0.7864651228653635, MPA1:0.5991179208203142, MIoU1:0.3944879388332763, FWIoU1:0.6417437932093085
2018-12-12 19:06:16,091 - root - INFO - The average loss of val loss:0.17899394732329152
2018-12-12 19:06:16,097 - root - INFO - => The MIoU of val does't improve.
2018-12-12 19:06:16,098 - root - INFO - Training one epoch...
2018-12-12 19:06:22,730 - root - INFO - The train loss of epoch19-batch-0:0.0914904773235321
2018-12-12 19:10:26,341 - root - INFO - The train loss of epoch19-batch-50:0.11041565239429474
2018-12-12 19:14:30,427 - root - INFO - The train loss of epoch19-batch-100:0.11896444857120514
2018-12-12 19:18:34,295 - root - INFO - The train loss of epoch19-batch-150:0.0893687754869461
2018-12-12 19:22:38,163 - root - INFO - The train loss of epoch19-batch-200:0.14844857156276703
2018-12-12 19:26:42,004 - root - INFO - The train loss of epoch19-batch-250:0.09967795014381409
2018-12-12 19:30:45,905 - root - INFO - The train loss of epoch19-batch-300:0.17413559556007385
2018-12-12 19:34:48,806 - root - INFO - The train loss of epoch19-batch-350:0.10900850594043732
2018-12-12 19:36:25,822 - root - INFO - Epoch:19, train PA1:0.7511719074188223, MPA1:0.6765837109365714, MIoU1:0.45760276515952, FWIoU1:0.5864483364488091
2018-12-12 19:36:25,827 - root - INFO - validating one epoch...
2018-12-12 19:39:35,105 - root - INFO - Epoch:19, validation PA1:0.786367208045756, MPA1:0.6006380446091161, MIoU1:0.4130262268643666, FWIoU1:0.6433821463022337
2018-12-12 19:39:35,106 - root - INFO - The average loss of val loss:0.16888957410570113
2018-12-12 19:39:35,112 - root - INFO - => The MIoU of val does't improve.
2018-12-12 19:39:35,113 - root - INFO - Training one epoch...
2018-12-12 19:39:41,542 - root - INFO - The train loss of epoch20-batch-0:0.16853167116641998
2018-12-12 19:43:50,959 - root - INFO - The train loss of epoch20-batch-50:0.11838030815124512
2018-12-12 19:48:01,359 - root - INFO - The train loss of epoch20-batch-100:0.7974161505699158
2018-12-12 19:52:06,261 - root - INFO - The train loss of epoch20-batch-150:0.22332073748111725
2018-12-12 19:56:10,525 - root - INFO - The train loss of epoch20-batch-200:0.0908534824848175
2018-12-12 20:00:15,211 - root - INFO - The train loss of epoch20-batch-250:0.15880219638347626
2018-12-12 20:04:19,720 - root - INFO - The train loss of epoch20-batch-300:0.13043925166130066
2018-12-12 20:08:23,756 - root - INFO - The train loss of epoch20-batch-350:0.16056592762470245
2018-12-12 20:10:02,105 - root - INFO - Epoch:20, train PA1:0.7568776175812496, MPA1:0.6878068189061436, MIoU1:0.4633168897974489, FWIoU1:0.5933537935803962
2018-12-12 20:10:02,113 - root - INFO - validating one epoch...
2018-12-12 20:13:10,259 - root - INFO - Epoch:20, validation PA1:0.7863760564626754, MPA1:0.6005619538969293, MIoU1:0.4001688490957077, FWIoU1:0.6410147587081944
2018-12-12 20:13:10,260 - root - INFO - The average loss of val loss:0.21958192353767733
2018-12-12 20:13:10,267 - root - INFO - => The MIoU of val does't improve.
2018-12-12 20:13:10,268 - root - INFO - Training one epoch...
2018-12-12 20:13:16,844 - root - INFO - The train loss of epoch21-batch-0:0.10280697792768478
2018-12-12 20:17:20,187 - root - INFO - The train loss of epoch21-batch-50:0.10919128358364105
2018-12-12 20:21:24,309 - root - INFO - The train loss of epoch21-batch-100:0.09741698950529099
2018-12-12 20:25:29,239 - root - INFO - The train loss of epoch21-batch-150:0.16055110096931458
2018-12-12 20:29:32,815 - root - INFO - The train loss of epoch21-batch-200:0.11121668666601181
2018-12-12 20:33:37,549 - root - INFO - The train loss of epoch21-batch-250:0.08807151019573212
2018-12-12 20:37:42,631 - root - INFO - The train loss of epoch21-batch-300:0.12353835254907608
2018-12-12 20:41:47,825 - root - INFO - The train loss of epoch21-batch-350:0.08564437180757523
2018-12-12 20:43:25,509 - root - INFO - Epoch:21, train PA1:0.7530624620713782, MPA1:0.6868490301104715, MIoU1:0.4545365641341276, FWIoU1:0.5896397755009691
2018-12-12 20:43:25,516 - root - INFO - validating one epoch...
2018-12-12 20:46:32,232 - root - INFO - Epoch:21, validation PA1:0.7848193019964235, MPA1:0.6065015929599739, MIoU1:0.40029615702118837, FWIoU1:0.644064705563131
2018-12-12 20:46:32,233 - root - INFO - The average loss of val loss:0.21603430531197979
2018-12-12 20:46:32,240 - root - INFO - => The MIoU of val does't improve.
2018-12-12 20:46:32,241 - root - INFO - Training one epoch...
2018-12-12 20:46:38,769 - root - INFO - The train loss of epoch22-batch-0:0.09317547082901001 2018-12-12 20:50:44,257 - root - INFO - The train loss of epoch22-batch-50:0.10607289522886276 2018-12-12 20:54:52,018 - root - INFO - The train loss of epoch22-batch-100:0.08917468786239624 2018-12-12 20:58:57,001 - root - INFO - The train loss of epoch22-batch-150:0.10013114660978317 2018-12-12 21:03:00,370 - root - INFO - The train loss of epoch22-batch-200:0.12197822332382202 2018-12-12 21:07:04,146 - root - INFO - The train loss of epoch22-batch-250:0.13654834032058716 2018-12-12 21:11:08,190 - root - INFO - The train loss of epoch22-batch-300:0.10214214771986008 2018-12-12 21:15:13,147 - root - INFO - The train loss of epoch22-batch-350:0.14144639670848846 2018-12-12 21:16:51,369 - root - INFO - Epoch:22, train PA1:0.7541945799968104, MPA1:0.6764869075189209, MIoU1:0.4392211979886601, FWIoU1:0.591763691160695 2018-12-12 21:16:51,374 - root - INFO - validating one epoch... 2018-12-12 21:19:58,778 - root - INFO - Epoch:22, validation PA1:0.7857952096034451, MPA1:0.6312817649365425, MIoU1:0.3887624245155447, FWIoU1:0.6424955054933842 2018-12-12 21:19:58,779 - root - INFO - The average loss of val loss:0.20564415750484313 2018-12-12 21:19:58,786 - root - INFO - => The MIoU of val does't improve. 2018-12-12 21:19:58,788 - root - INFO - Training one epoch... 
2018-12-12 21:20:05,176 - root - INFO - The train loss of epoch23-batch-0:0.13458982110023499 2018-12-12 21:24:10,206 - root - INFO - The train loss of epoch23-batch-50:0.09281453490257263 2018-12-12 21:28:15,391 - root - INFO - The train loss of epoch23-batch-100:0.14077027142047882 2018-12-12 21:32:20,432 - root - INFO - The train loss of epoch23-batch-150:0.10756779462099075 2018-12-12 21:36:25,780 - root - INFO - The train loss of epoch23-batch-200:0.1170162707567215 2018-12-12 21:40:30,082 - root - INFO - The train loss of epoch23-batch-250:0.13048365712165833 2018-12-12 21:44:35,881 - root - INFO - The train loss of epoch23-batch-300:0.13756370544433594 2018-12-12 21:48:41,191 - root - INFO - The train loss of epoch23-batch-350:0.09320113062858582 2018-12-12 21:50:18,872 - root - INFO - Epoch:23, train PA1:0.7590532600683281, MPA1:0.694160094052876, MIoU1:0.4484680138772686, FWIoU1:0.6014203505706323 2018-12-12 21:50:18,883 - root - INFO - validating one epoch... 2018-12-12 21:53:25,683 - root - INFO - Epoch:23, validation PA1:0.7854036269346852, MPA1:0.5869961666671545, MIoU1:0.39228199068695047, FWIoU1:0.638836412435016 2018-12-12 21:53:25,684 - root - INFO - The average loss of val loss:0.23937914490459428 2018-12-12 21:53:25,690 - root - INFO - => The MIoU of val does't improve. 2018-12-12 21:53:25,691 - root - INFO - Training one epoch... 
2018-12-12 21:53:32,189 - root - INFO - The train loss of epoch24-batch-0:0.11177954077720642 2018-12-12 21:57:37,536 - root - INFO - The train loss of epoch24-batch-50:0.13480573892593384 2018-12-12 22:01:42,499 - root - INFO - The train loss of epoch24-batch-100:0.09642475098371506 2018-12-12 22:05:50,465 - root - INFO - The train loss of epoch24-batch-150:0.16782110929489136 2018-12-12 22:09:56,684 - root - INFO - The train loss of epoch24-batch-200:0.13806886970996857 2018-12-12 22:14:01,061 - root - INFO - The train loss of epoch24-batch-250:0.08425862342119217 2018-12-12 22:18:05,183 - root - INFO - The train loss of epoch24-batch-300:0.08118589967489243 2018-12-12 22:22:11,370 - root - INFO - The train loss of epoch24-batch-350:0.1404981166124344 2018-12-12 22:23:48,973 - root - INFO - Epoch:24, train PA1:0.7595607870334394, MPA1:0.6916200629506692, MIoU1:0.461762805717932, FWIoU1:0.5973214120240707 2018-12-12 22:23:48,979 - root - INFO - validating one epoch... 2018-12-12 22:26:56,226 - root - INFO - Epoch:24, validation PA1:0.7880216168587363, MPA1:0.5910142833691013, MIoU1:0.3932364962608528, FWIoU1:0.6429433749765807 2018-12-12 22:26:56,227 - root - INFO - The average loss of val loss:0.22321392770015425 2018-12-12 22:26:56,233 - root - INFO - => The MIoU of val does't improve. 2018-12-12 22:26:56,234 - root - INFO - Training one epoch... 
2018-12-12 22:27:02,618 - root - INFO - The train loss of epoch25-batch-0:0.10860061645507812 2018-12-12 22:31:06,256 - root - INFO - The train loss of epoch25-batch-50:0.11609742790460587 2018-12-12 22:35:11,606 - root - INFO - The train loss of epoch25-batch-100:0.09148228913545609 2018-12-12 22:39:15,469 - root - INFO - The train loss of epoch25-batch-150:0.0885591134428978 2018-12-12 22:43:19,123 - root - INFO - The train loss of epoch25-batch-200:0.09069772809743881 2018-12-12 22:47:24,015 - root - INFO - The train loss of epoch25-batch-250:0.11499287188053131 2018-12-12 22:51:28,079 - root - INFO - The train loss of epoch25-batch-300:0.2365267425775528 2018-12-12 22:55:32,866 - root - INFO - The train loss of epoch25-batch-350:0.1296582818031311 2018-12-12 22:57:10,662 - root - INFO - Epoch:25, train PA1:0.7535710375771468, MPA1:0.6901035759814184, MIoU1:0.4551123369860419, FWIoU1:0.5903328398163845 2018-12-12 22:57:10,668 - root - INFO - validating one epoch... 2018-12-12 23:00:19,444 - root - INFO - Epoch:25, validation PA1:0.7831291930770844, MPA1:0.6238205665891008, MIoU1:0.39741522142937014, FWIoU1:0.639639372140998 2018-12-12 23:00:19,444 - root - INFO - The average loss of val loss:0.22199942213633367 2018-12-12 23:00:19,451 - root - INFO - => The MIoU of val does't improve. 2018-12-12 23:00:19,452 - root - INFO - Training one epoch... 
2018-12-12 23:00:26,053 - root - INFO - The train loss of epoch26-batch-0:0.07491037249565125 2018-12-12 23:04:31,639 - root - INFO - The train loss of epoch26-batch-50:0.11224964261054993 2018-12-12 23:08:36,427 - root - INFO - The train loss of epoch26-batch-100:0.12504178285598755 2018-12-12 23:12:41,786 - root - INFO - The train loss of epoch26-batch-150:0.16003046929836273 2018-12-12 23:16:51,075 - root - INFO - The train loss of epoch26-batch-200:0.06037801876664162 2018-12-12 23:20:55,567 - root - INFO - The train loss of epoch26-batch-250:0.22855515778064728 2018-12-12 23:24:58,697 - root - INFO - The train loss of epoch26-batch-300:0.11208122223615646 2018-12-12 23:29:01,336 - root - INFO - The train loss of epoch26-batch-350:0.12067537754774094 2018-12-12 23:30:39,100 - root - INFO - Epoch:26, train PA1:0.7564640099007229, MPA1:0.7112587514255513, MIoU1:0.4785429257825994, FWIoU1:0.593309752546674 2018-12-12 23:30:39,105 - root - INFO - validating one epoch... 2018-12-12 23:33:45,690 - root - INFO - Epoch:26, validation PA1:0.7858520156739708, MPA1:0.6130470259105573, MIoU1:0.3978014536119698, FWIoU1:0.6426300503638495 2018-12-12 23:33:45,690 - root - INFO - The average loss of val loss:0.2075625010315449 2018-12-12 23:33:45,695 - root - INFO - => The MIoU of val does't improve. 2018-12-12 23:33:45,696 - root - INFO - Training one epoch... 
2018-12-12 23:33:52,198 - root - INFO - The train loss of epoch27-batch-0:0.08575479686260223 2018-12-12 23:37:56,643 - root - INFO - The train loss of epoch27-batch-50:0.13078656792640686 2018-12-12 23:42:00,061 - root - INFO - The train loss of epoch27-batch-100:0.11012041568756104 2018-12-12 23:46:05,310 - root - INFO - The train loss of epoch27-batch-150:0.10869774967432022 2018-12-12 23:50:10,588 - root - INFO - The train loss of epoch27-batch-200:0.10567750036716461 2018-12-12 23:54:15,045 - root - INFO - The train loss of epoch27-batch-250:0.12033048272132874 2018-12-12 23:58:19,876 - root - INFO - The train loss of epoch27-batch-300:0.14886067807674408 2018-12-13 00:02:24,496 - root - INFO - The train loss of epoch27-batch-350:0.16874447464942932 2018-12-13 00:04:02,470 - root - INFO - Epoch:27, train PA1:0.7577658979529608, MPA1:0.692978821049246, MIoU1:0.46427591396796036, FWIoU1:0.5967421083300929 2018-12-13 00:04:02,476 - root - INFO - validating one epoch... 2018-12-13 00:07:09,325 - root - INFO - Epoch:27, validation PA1:0.778554170820443, MPA1:0.5801257105129559, MIoU1:0.38765050911791704, FWIoU1:0.6319614557574459 2018-12-13 00:07:09,325 - root - INFO - The average loss of val loss:0.22915383200010947 2018-12-13 00:07:09,332 - root - INFO - => The MIoU of val does't improve. 2018-12-13 00:07:09,333 - root - INFO - Training one epoch... 
2018-12-13 00:07:15,861 - root - INFO - The train loss of epoch28-batch-0:0.0782575011253357 2018-12-13 00:11:20,134 - root - INFO - The train loss of epoch28-batch-50:0.212476909160614 2018-12-13 00:15:24,397 - root - INFO - The train loss of epoch28-batch-100:0.08853840082883835 2018-12-13 00:19:28,054 - root - INFO - The train loss of epoch28-batch-150:0.13232888281345367 2018-12-13 00:23:33,103 - root - INFO - The train loss of epoch28-batch-200:0.16256648302078247 2018-12-13 00:27:42,283 - root - INFO - The train loss of epoch28-batch-250:0.10705065727233887 2018-12-13 00:31:47,883 - root - INFO - The train loss of epoch28-batch-300:0.07925765216350555 2018-12-13 00:35:52,280 - root - INFO - The train loss of epoch28-batch-350:0.1092686876654625 2018-12-13 00:37:30,678 - root - INFO - Epoch:28, train PA1:0.755962525807287, MPA1:0.7007335279899797, MIoU1:0.4772542456984472, FWIoU1:0.5930903641242199 2018-12-13 00:37:30,697 - root - INFO - validating one epoch... 2018-12-13 00:40:39,277 - root - INFO - Epoch:28, validation PA1:0.7830621213107387, MPA1:0.6062458480688894, MIoU1:0.38852687480792786, FWIoU1:0.6411588589393867 2018-12-13 00:40:39,310 - root - INFO - The average loss of val loss:0.2161054107090158 2018-12-13 00:40:39,332 - root - INFO - => The MIoU of val does't improve. 2018-12-13 00:40:39,333 - root - INFO - Training one epoch... 
2018-12-13 00:40:47,754 - root - INFO - The train loss of epoch29-batch-0:0.14257587492465973 2018-12-13 00:44:52,918 - root - INFO - The train loss of epoch29-batch-50:0.18293069303035736 2018-12-13 00:48:56,763 - root - INFO - The train loss of epoch29-batch-100:0.08804529160261154 2018-12-13 00:53:01,135 - root - INFO - The train loss of epoch29-batch-150:0.11093214899301529 2018-12-13 00:57:06,071 - root - INFO - The train loss of epoch29-batch-200:0.11674967408180237 2018-12-13 01:01:09,409 - root - INFO - The train loss of epoch29-batch-250:0.11405158787965775 2018-12-13 01:05:12,834 - root - INFO - The train loss of epoch29-batch-300:0.076045460999012 2018-12-13 01:09:17,841 - root - INFO - The train loss of epoch29-batch-350:0.1354605257511139 2018-12-13 01:10:55,332 - root - INFO - Epoch:29, train PA1:0.7598704803659981, MPA1:0.7133996000709182, MIoU1:0.47617361325016583, FWIoU1:0.5977305099597852 2018-12-13 01:10:55,344 - root - INFO - validating one epoch... 2018-12-13 01:14:02,465 - root - INFO - Epoch:29, validation PA1:0.7849856905393431, MPA1:0.6032692590005926, MIoU1:0.39611212634574866, FWIoU1:0.6391988453978735 2018-12-13 01:14:02,466 - root - INFO - The average loss of val loss:0.2558833350457491 2018-12-13 01:14:02,470 - root - INFO - => The MIoU of val does't improve. 2018-12-13 01:14:02,471 - root - INFO - Training one epoch... 
2018-12-13 01:14:09,181 - root - INFO - The train loss of epoch30-batch-0:0.10602705180644989 2018-12-13 01:18:12,830 - root - INFO - The train loss of epoch30-batch-50:0.13141076266765594 2018-12-13 01:22:18,413 - root - INFO - The train loss of epoch30-batch-100:0.12104832381010056 2018-12-13 01:26:22,569 - root - INFO - The train loss of epoch30-batch-150:0.08633331954479218 2018-12-13 01:30:25,998 - root - INFO - The train loss of epoch30-batch-200:0.1031922772526741 2018-12-13 01:34:31,135 - root - INFO - The train loss of epoch30-batch-250:0.08448202162981033 2018-12-13 01:38:36,180 - root - INFO - The train loss of epoch30-batch-300:0.11810954660177231 2018-12-13 01:42:44,783 - root - INFO - The train loss of epoch30-batch-350:0.0872756689786911 2018-12-13 01:44:23,211 - root - INFO - Epoch:30, train PA1:0.7526337997027092, MPA1:0.7130180439127906, MIoU1:0.4719883311815255, FWIoU1:0.5905582312908 2018-12-13 01:44:23,217 - root - INFO - validating one epoch... 2018-12-13 01:47:30,654 - root - INFO - Epoch:30, validation PA1:0.788240008045854, MPA1:0.6014886536402383, MIoU1:0.40494991742217595, FWIoU1:0.6426186183175991 2018-12-13 01:47:30,654 - root - INFO - The average loss of val loss:0.22233581579019945 2018-12-13 01:47:30,660 - root - INFO - => The MIoU of val does't improve. 2018-12-13 01:47:30,661 - root - INFO - Training one epoch... 
2018-12-13 01:47:37,348 - root - INFO - The train loss of epoch31-batch-0:0.0920816957950592 2018-12-13 01:51:41,276 - root - INFO - The train loss of epoch31-batch-50:0.09258700907230377 2018-12-13 01:55:46,436 - root - INFO - The train loss of epoch31-batch-100:0.09475740045309067 2018-12-13 01:59:51,647 - root - INFO - The train loss of epoch31-batch-150:0.10082323849201202 2018-12-13 02:03:55,833 - root - INFO - The train loss of epoch31-batch-200:0.06501290202140808 2018-12-13 02:08:00,956 - root - INFO - The train loss of epoch31-batch-250:0.07328531891107559 2018-12-13 02:12:04,233 - root - INFO - The train loss of epoch31-batch-300:0.10529260337352753 2018-12-13 02:16:09,439 - root - INFO - The train loss of epoch31-batch-350:0.1609112173318863 2018-12-13 02:17:47,518 - root - INFO - Epoch:31, train PA1:0.7633757556239752, MPA1:0.7231024369228547, MIoU1:0.4918316878356398, FWIoU1:0.6062515335935375 2018-12-13 02:17:47,526 - root - INFO - validating one epoch... 2018-12-13 02:20:55,178 - root - INFO - Epoch:31, validation PA1:0.7868489986012913, MPA1:0.5933794889562525, MIoU1:0.4110678487403895, FWIoU1:0.6415575351914204 2018-12-13 02:20:55,178 - root - INFO - The average loss of val loss:0.18950664576503537 2018-12-13 02:20:55,185 - root - INFO - => The MIoU of val does't improve. 2018-12-13 02:20:55,186 - root - INFO - Training one epoch... 
2018-12-13 02:21:01,850 - root - INFO - The train loss of epoch32-batch-0:0.08909622579813004 2018-12-13 02:25:05,507 - root - INFO - The train loss of epoch32-batch-50:0.19405704736709595 2018-12-13 02:29:10,971 - root - INFO - The train loss of epoch32-batch-100:0.11128056794404984 2018-12-13 02:33:14,841 - root - INFO - The train loss of epoch32-batch-150:0.10030394792556763 2018-12-13 02:37:18,149 - root - INFO - The train loss of epoch32-batch-200:0.08156133443117142 2018-12-13 02:41:22,610 - root - INFO - The train loss of epoch32-batch-250:0.08098991960287094 2018-12-13 02:45:27,055 - root - INFO - The train loss of epoch32-batch-300:0.30669453740119934 2018-12-13 02:49:31,432 - root - INFO - The train loss of epoch32-batch-350:0.09179168194532394 2018-12-13 02:51:13,966 - root - INFO - Epoch:32, train PA1:0.7592028326647203, MPA1:0.7019948668544101, MIoU1:0.46853176193430307, FWIoU1:0.5968736099928121 2018-12-13 02:51:13,977 - root - INFO - validating one epoch... 2018-12-13 02:54:25,164 - root - INFO - Epoch:32, validation PA1:0.7877828168554513, MPA1:0.6138687707382328, MIoU1:0.4005972533741251, FWIoU1:0.6444155809245936 2018-12-13 02:54:25,165 - root - INFO - The average loss of val loss:0.20383236443083133 2018-12-13 02:54:25,169 - root - INFO - => The MIoU of val does't improve. 2018-12-13 02:54:25,170 - root - INFO - Training one epoch... 
2018-12-13 02:54:31,735 - root - INFO - The train loss of epoch33-batch-0:0.09146948158740997 2018-12-13 02:58:35,983 - root - INFO - The train loss of epoch33-batch-50:0.15613922476768494 2018-12-13 03:02:39,973 - root - INFO - The train loss of epoch33-batch-100:0.09745081514120102 2018-12-13 03:06:45,088 - root - INFO - The train loss of epoch33-batch-150:0.12676535546779633 2018-12-13 03:10:49,716 - root - INFO - The train loss of epoch33-batch-200:0.06565270572900772 2018-12-13 03:14:53,301 - root - INFO - The train loss of epoch33-batch-250:0.05686870962381363 2018-12-13 03:18:57,801 - root - INFO - The train loss of epoch33-batch-300:0.08924976736307144 2018-12-13 03:23:01,997 - root - INFO - The train loss of epoch33-batch-350:0.1005791574716568 2018-12-13 03:24:40,196 - root - INFO - Epoch:33, train PA1:0.7603554912327714, MPA1:0.726555425256963, MIoU1:0.4793176623302456, FWIoU1:0.6002246876470434 2018-12-13 03:24:40,201 - root - INFO - validating one epoch... 2018-12-13 03:27:46,987 - root - INFO - Epoch:33, validation PA1:0.7905831837606131, MPA1:0.6096896342510281, MIoU1:0.425527824780987, FWIoU1:0.6445216519124074 2018-12-13 03:27:46,987 - root - INFO - The average loss of val loss:0.1909269129316653 2018-12-13 03:27:46,991 - root - INFO - =>saving a new best checkpoint... 2018-12-13 03:27:51,594 - root - INFO - Training one epoch... 
2018-12-13 03:27:58,517 - root - INFO - The train loss of epoch34-batch-0:0.1252835988998413 2018-12-13 03:32:03,624 - root - INFO - The train loss of epoch34-batch-50:0.06838086247444153 2018-12-13 03:36:07,759 - root - INFO - The train loss of epoch34-batch-100:0.09283308684825897 2018-12-13 03:40:12,969 - root - INFO - The train loss of epoch34-batch-150:0.3515942692756653 2018-12-13 03:44:17,932 - root - INFO - The train loss of epoch34-batch-200:0.06689026951789856 2018-12-13 03:48:21,889 - root - INFO - The train loss of epoch34-batch-250:0.118089459836483 2018-12-13 03:52:25,686 - root - INFO - The train loss of epoch34-batch-300:0.0706668272614479 2018-12-13 03:56:30,564 - root - INFO - The train loss of epoch34-batch-350:0.09203720092773438 2018-12-13 03:58:08,221 - root - INFO - Epoch:34, train PA1:0.7590510298389936, MPA1:0.7191797548295644, MIoU1:0.4818790174555172, FWIoU1:0.5966671087528118 2018-12-13 03:58:08,232 - root - INFO - validating one epoch... 2018-12-13 04:01:17,737 - root - INFO - Epoch:34, validation PA1:0.7888597572956615, MPA1:0.6069508144273013, MIoU1:0.413230860895997, FWIoU1:0.6447568616988048 2018-12-13 04:01:17,739 - root - INFO - The average loss of val loss:0.20094195128448547 2018-12-13 04:01:17,748 - root - INFO - => The MIoU of val does't improve. 2018-12-13 04:01:17,749 - root - INFO - Training one epoch... 
2018-12-13 04:01:25,768 - root - INFO - The train loss of epoch35-batch-0:0.058432091027498245 2018-12-13 04:05:33,524 - root - INFO - The train loss of epoch35-batch-50:0.05479559302330017 2018-12-13 04:09:40,405 - root - INFO - The train loss of epoch35-batch-100:0.10426641255617142 2018-12-13 04:13:47,397 - root - INFO - The train loss of epoch35-batch-150:0.15399488806724548 2018-12-13 04:17:54,860 - root - INFO - The train loss of epoch35-batch-200:0.08918598294258118 2018-12-13 04:22:03,296 - root - INFO - The train loss of epoch35-batch-250:0.11428970843553543 2018-12-13 04:26:11,267 - root - INFO - The train loss of epoch35-batch-300:0.11709781736135483 2018-12-13 04:30:19,333 - root - INFO - The train loss of epoch35-batch-350:0.09811946749687195 2018-12-13 04:31:58,172 - root - INFO - Epoch:35, train PA1:0.7586309439951877, MPA1:0.7282873730305036, MIoU1:0.4894714923290103, FWIoU1:0.5968687032972647 2018-12-13 04:31:58,179 - root - INFO - validating one epoch... 2018-12-13 04:35:18,892 - root - INFO - Epoch:35, validation PA1:0.7869939057926594, MPA1:0.6190760118164705, MIoU1:0.413107119467396, FWIoU1:0.6436110307095102 2018-12-13 04:35:18,893 - root - INFO - The average loss of val loss:0.23366238495274896 2018-12-13 04:35:18,898 - root - INFO - => The MIoU of val does't improve. 2018-12-13 04:35:18,898 - root - INFO - Training one epoch... 
2018-12-13 04:35:25,279 - root - INFO - The train loss of epoch36-batch-0:0.11224012076854706 2018-12-13 04:39:33,901 - root - INFO - The train loss of epoch36-batch-50:0.09018805623054504 2018-12-13 04:43:41,688 - root - INFO - The train loss of epoch36-batch-100:0.07063515484333038 2018-12-13 04:47:49,794 - root - INFO - The train loss of epoch36-batch-150:0.13391682505607605 2018-12-13 04:51:56,860 - root - INFO - The train loss of epoch36-batch-200:0.06668245792388916 2018-12-13 04:56:04,493 - root - INFO - The train loss of epoch36-batch-250:0.11251861602067947 2018-12-13 05:00:11,584 - root - INFO - The train loss of epoch36-batch-300:0.0891912430524826 2018-12-13 05:04:20,556 - root - INFO - The train loss of epoch36-batch-350:0.0844300240278244 2018-12-13 05:05:59,524 - root - INFO - Epoch:36, train PA1:0.7611209848049534, MPA1:0.7242168255875103, MIoU1:0.48641512114690394, FWIoU1:0.6027525383234554 2018-12-13 05:05:59,533 - root - INFO - validating one epoch... 2018-12-13 05:09:25,068 - root - INFO - Epoch:36, validation PA1:0.7842105845391389, MPA1:0.6004877115466444, MIoU1:0.4031407528369161, FWIoU1:0.6421834502054153 2018-12-13 05:09:25,069 - root - INFO - The average loss of val loss:0.23963072793858667 2018-12-13 05:09:25,076 - root - INFO - => The MIoU of val does't improve. 2018-12-13 05:09:25,077 - root - INFO - Training one epoch... 
2018-12-13 05:09:31,490 - root - INFO - The train loss of epoch37-batch-0:0.0974629670381546 2018-12-13 05:13:39,440 - root - INFO - The train loss of epoch37-batch-50:0.08684888482093811 2018-12-13 05:17:46,900 - root - INFO - The train loss of epoch37-batch-100:0.06392570585012436 2018-12-13 05:21:55,923 - root - INFO - The train loss of epoch37-batch-150:0.10237018018960953 2018-12-13 05:26:04,961 - root - INFO - The train loss of epoch37-batch-200:0.10871351510286331 2018-12-13 05:30:13,294 - root - INFO - The train loss of epoch37-batch-250:0.08914442360401154 2018-12-13 05:34:23,142 - root - INFO - The train loss of epoch37-batch-300:0.11258074641227722 2018-12-13 05:38:31,573 - root - INFO - The train loss of epoch37-batch-350:0.09702528268098831 2018-12-13 05:40:10,040 - root - INFO - Epoch:37, train PA1:0.7625008553480014, MPA1:0.7307254243794433, MIoU1:0.4857234545991971, FWIoU1:0.603439666071959 2018-12-13 05:40:10,047 - root - INFO - validating one epoch... 2018-12-13 05:43:33,077 - root - INFO - Epoch:37, validation PA1:0.7891367395586228, MPA1:0.5962717116886461, MIoU1:0.40787953210094735, FWIoU1:0.6436470859046121 2018-12-13 05:43:33,078 - root - INFO - The average loss of val loss:0.23209152160392654 2018-12-13 05:43:33,083 - root - INFO - => The MIoU of val does't improve. 2018-12-13 05:43:33,083 - root - INFO - Training one epoch... 
2018-12-13 05:43:41,020 - root - INFO - The train loss of epoch38-batch-0:0.05754300206899643 2018-12-13 05:47:49,283 - root - INFO - The train loss of epoch38-batch-50:0.1697925180196762 2018-12-13 05:51:57,113 - root - INFO - The train loss of epoch38-batch-100:0.06021856144070625 2018-12-13 05:56:05,551 - root - INFO - The train loss of epoch38-batch-150:0.1076103150844574 2018-12-13 06:00:13,487 - root - INFO - The train loss of epoch38-batch-200:0.11885322630405426 2018-12-13 06:04:21,796 - root - INFO - The train loss of epoch38-batch-250:0.08751001209020615 2018-12-13 06:08:31,604 - root - INFO - The train loss of epoch38-batch-300:0.08314283192157745 2018-12-13 06:12:42,115 - root - INFO - The train loss of epoch38-batch-350:0.11199519783258438 2018-12-13 06:14:26,596 - root - INFO - Epoch:38, train PA1:0.761580362117344, MPA1:0.7272089933718435, MIoU1:0.48998329532555857, FWIoU1:0.6020056627868344 2018-12-13 06:14:26,602 - root - INFO - validating one epoch... 2018-12-13 06:17:44,765 - root - INFO - Epoch:38, validation PA1:0.7877786492893871, MPA1:0.6110247608488756, MIoU1:0.4049445982040001, FWIoU1:0.6440993513657467 2018-12-13 06:17:44,765 - root - INFO - The average loss of val loss:0.2265539308709483 2018-12-13 06:17:44,772 - root - INFO - => The MIoU of val does't improve. 2018-12-13 06:17:44,773 - root - INFO - Training one epoch... 
2018-12-13 06:17:51,367 - root - INFO - The train loss of epoch39-batch-0:0.08878537267446518 2018-12-13 06:21:59,802 - root - INFO - The train loss of epoch39-batch-50:0.11398157477378845 2018-12-13 06:26:11,337 - root - INFO - The train loss of epoch39-batch-100:0.21351900696754456 2018-12-13 06:30:18,699 - root - INFO - The train loss of epoch39-batch-150:0.07762157917022705 2018-12-13 06:34:27,065 - root - INFO - The train loss of epoch39-batch-200:0.0890546590089798 2018-12-13 06:38:36,227 - root - INFO - The train loss of epoch39-batch-250:0.07960240542888641 2018-12-13 06:42:43,560 - root - INFO - The train loss of epoch39-batch-300:0.07270653545856476 2018-12-13 06:46:50,351 - root - INFO - The train loss of epoch39-batch-350:0.06888214498758316 2018-12-13 06:48:29,650 - root - INFO - Epoch:39, train PA1:0.7636406292845817, MPA1:0.745107885367557, MIoU1:0.500096481187595, FWIoU1:0.6035562397528608 2018-12-13 06:48:29,658 - root - INFO - validating one epoch... 2018-12-13 06:51:49,822 - root - INFO - Epoch:39, validation PA1:0.7836019513524917, MPA1:0.5888089606827795, MIoU1:0.4054689337211118, FWIoU1:0.6391402046780372 2018-12-13 06:51:49,822 - root - INFO - The average loss of val loss:0.27084826233406223 2018-12-13 06:51:49,827 - root - INFO - => The MIoU of val does't improve. 2018-12-13 06:51:49,828 - root - INFO - Training one epoch... 
2018-12-13 06:51:56,143 - root - INFO - The train loss of epoch40-batch-0:0.14961934089660645 2018-12-13 06:56:04,869 - root - INFO - The train loss of epoch40-batch-50:0.1105085238814354 2018-12-13 07:00:13,426 - root - INFO - The train loss of epoch40-batch-100:0.10944120585918427 2018-12-13 07:04:22,695 - root - INFO - The train loss of epoch40-batch-150:0.10640708357095718 2018-12-13 07:08:29,578 - root - INFO - The train loss of epoch40-batch-200:0.11519312858581543 2018-12-13 07:12:37,249 - root - INFO - The train loss of epoch40-batch-250:0.09451918303966522 2018-12-13 07:16:45,025 - root - INFO - The train loss of epoch40-batch-300:0.09446697682142258 2018-12-13 07:20:58,276 - root - INFO - The train loss of epoch40-batch-350:0.05840426683425903 2018-12-13 07:22:37,602 - root - INFO - Epoch:40, train PA1:0.7634595710222754, MPA1:0.7421854873994257, MIoU1:0.500523008419121, FWIoU1:0.6050221380241115 2018-12-13 07:22:37,609 - root - INFO - validating one epoch... 2018-12-13 07:25:58,075 - root - INFO - Epoch:40, validation PA1:0.792024686638869, MPA1:0.6117139080490825, MIoU1:0.4279848791787745, FWIoU1:0.6497026968060693 2018-12-13 07:25:58,076 - root - INFO - The average loss of val loss:0.18667432527628638 2018-12-13 07:25:58,083 - root - INFO - =>saving a new best checkpoint... 2018-12-13 07:26:02,759 - root - INFO - Training one epoch... 
2018-12-13 07:26:09,746 - root - INFO - The train loss of epoch41-batch-0:0.07182825356721878 2018-12-13 07:30:17,083 - root - INFO - The train loss of epoch41-batch-50:0.0900864377617836 2018-12-13 07:34:24,272 - root - INFO - The train loss of epoch41-batch-100:0.10324622690677643 2018-12-13 07:38:32,536 - root - INFO - The train loss of epoch41-batch-150:0.09626263380050659 2018-12-13 07:42:41,487 - root - INFO - The train loss of epoch41-batch-200:0.096018947660923 2018-12-13 07:46:48,602 - root - INFO - The train loss of epoch41-batch-250:0.06980307400226593 2018-12-13 07:50:57,875 - root - INFO - The train loss of epoch41-batch-300:0.1313357651233673 2018-12-13 07:55:05,892 - root - INFO - The train loss of epoch41-batch-350:0.1167391911149025 2018-12-13 07:56:46,028 - root - INFO - Epoch:41, train PA1:0.7642487367760843, MPA1:0.7336170327012848, MIoU1:0.49044057628375254, FWIoU1:0.6066396037254913 2018-12-13 07:56:46,039 - root - INFO - validating one epoch... 2018-12-13 08:00:09,793 - root - INFO - Epoch:41, validation PA1:0.7887656346547337, MPA1:0.609426788966735, MIoU1:0.40264175931170626, FWIoU1:0.6458279829912521 2018-12-13 08:00:09,794 - root - INFO - The average loss of val loss:0.23181935303634213 2018-12-13 08:00:09,800 - root - INFO - => The MIoU of val does't improve. 2018-12-13 08:00:09,801 - root - INFO - Training one epoch... 
2018-12-13 08:00:16,201 - root - INFO - The train loss of epoch42-batch-0:0.09086135774850845 2018-12-13 08:04:24,804 - root - INFO - The train loss of epoch42-batch-50:0.07962010055780411 2018-12-13 08:08:34,129 - root - INFO - The train loss of epoch42-batch-100:0.102066271007061 2018-12-13 08:12:43,319 - root - INFO - The train loss of epoch42-batch-150:0.07957116514444351 2018-12-13 08:16:51,578 - root - INFO - The train loss of epoch42-batch-200:0.06523235142230988 2018-12-13 08:21:00,705 - root - INFO - The train loss of epoch42-batch-250:0.08593732118606567 2018-12-13 08:25:13,743 - root - INFO - The train loss of epoch42-batch-300:0.07518326491117477 2018-12-13 08:29:22,770 - root - INFO - The train loss of epoch42-batch-350:0.11657022684812546 2018-12-13 08:31:01,763 - root - INFO - Epoch:42, train PA1:0.7629835937365491, MPA1:0.7322612302729808, MIoU1:0.4915011027313921, FWIoU1:0.6044228357425653 2018-12-13 08:31:01,772 - root - INFO - validating one epoch... 2018-12-13 08:34:20,134 - root - INFO - Epoch:42, validation PA1:0.786529605224853, MPA1:0.6071560476386008, MIoU1:0.392841119739244, FWIoU1:0.6435668422693033 2018-12-13 08:34:20,134 - root - INFO - The average loss of val loss:0.25309544423174474 2018-12-13 08:34:20,142 - root - INFO - => The MIoU of val does't improve. 2018-12-13 08:34:20,143 - root - INFO - Training one epoch... 
2018-12-13 08:34:26,490 - root - INFO - The train loss of epoch43-batch-0:0.06695976108312607 2018-12-13 08:38:33,533 - root - INFO - The train loss of epoch43-batch-50:0.08046311140060425 2018-12-13 08:42:40,764 - root - INFO - The train loss of epoch43-batch-100:0.1516781896352768 2018-12-13 08:46:49,530 - root - INFO - The train loss of epoch43-batch-150:0.11532702296972275 2018-12-13 08:50:58,285 - root - INFO - The train loss of epoch43-batch-200:0.0809221863746643 2018-12-13 08:55:06,196 - root - INFO - The train loss of epoch43-batch-250:0.08626250922679901 2018-12-13 08:59:14,222 - root - INFO - The train loss of epoch43-batch-300:0.09724728018045425 2018-12-13 09:03:23,574 - root - INFO - The train loss of epoch43-batch-350:0.11514583230018616 2018-12-13 09:05:03,285 - root - INFO - Epoch:43, train PA1:0.7647872385796101, MPA1:0.7453247911225206, MIoU1:0.513256966662789, FWIoU1:0.6111427203870756 2018-12-13 09:05:03,293 - root - INFO - validating one epoch... 2018-12-13 09:08:20,009 - root - INFO - Epoch:43, validation PA1:0.7881272845769779, MPA1:0.6124983767024527, MIoU1:0.40359783228611246, FWIoU1:0.6461973382844439 2018-12-13 09:08:20,010 - root - INFO - The average loss of val loss:0.25392490174741517 2018-12-13 09:08:20,014 - root - INFO - => The MIoU of val does't improve. 2018-12-13 09:08:20,015 - root - INFO - Training one epoch... 
2018-12-13 09:08:26,346 - root - INFO - The train loss of epoch44-batch-0:0.09242602437734604
2018-12-13 09:12:35,597 - root - INFO - The train loss of epoch44-batch-50:0.08607573807239532
2018-12-13 09:16:40,672 - root - INFO - The train loss of epoch44-batch-100:0.0845031589269638
2018-12-13 09:20:43,227 - root - INFO - The train loss of epoch44-batch-150:0.08731956779956818
2018-12-13 09:24:46,093 - root - INFO - The train loss of epoch44-batch-200:0.07192166149616241
2018-12-13 09:28:49,658 - root - INFO - The train loss of epoch44-batch-250:0.07437323778867722
2018-12-13 09:32:56,829 - root - INFO - The train loss of epoch44-batch-300:0.08996757864952087
2018-12-13 09:36:59,766 - root - INFO - The train loss of epoch44-batch-350:0.11031394451856613
2018-12-13 09:38:36,634 - root - INFO - Epoch:44, train PA1:0.7639028937561045, MPA1:0.7494246917917493, MIoU1:0.49610633801537657, FWIoU1:0.6046186564992333
2018-12-13 09:38:36,640 - root - INFO - validating one epoch...
2018-12-13 09:41:42,231 - root - INFO - Epoch:44, validation PA1:0.7865603793294117, MPA1:0.6185493639705236, MIoU1:0.4075368863055194, FWIoU1:0.6465071359022225
2018-12-13 09:41:42,232 - root - INFO - The average loss of val loss:0.2424937772654718
2018-12-13 09:41:42,239 - root - INFO - => The MIoU of val does't improve.
2018-12-13 09:41:42,244 - root - INFO - Training one epoch...
2018-12-13 09:41:48,622 - root - INFO - The train loss of epoch45-batch-0:0.12221523374319077
2018-12-13 09:45:53,007 - root - INFO - The train loss of epoch45-batch-50:0.07415090501308441
2018-12-13 09:49:56,426 - root - INFO - The train loss of epoch45-batch-100:0.09128270298242569
2018-12-13 09:53:59,543 - root - INFO - The train loss of epoch45-batch-150:0.08657120913267136
2018-12-13 09:58:02,259 - root - INFO - The train loss of epoch45-batch-200:0.1323844939470291
2018-12-13 10:02:04,974 - root - INFO - The train loss of epoch45-batch-250:0.09766232222318649
2018-12-13 10:06:07,219 - root - INFO - The train loss of epoch45-batch-300:0.13504284620285034
2018-12-13 10:10:10,326 - root - INFO - The train loss of epoch45-batch-350:0.11114377528429031
2018-12-13 10:11:47,557 - root - INFO - Epoch:45, train PA1:0.7634680348834293, MPA1:0.7602585079539486, MIoU1:0.5135538957076118, FWIoU1:0.60166315191457
2018-12-13 10:11:47,565 - root - INFO - validating one epoch...
2018-12-13 10:14:54,247 - root - INFO - Epoch:45, validation PA1:0.7900676389843445, MPA1:0.6079391896544324, MIoU1:0.41043524214524496, FWIoU1:0.6462776884404052
2018-12-13 10:14:54,248 - root - INFO - The average loss of val loss:0.23082405100426368
2018-12-13 10:14:54,255 - root - INFO - => The MIoU of val does't improve.
2018-12-13 10:14:54,256 - root - INFO - Training one epoch...
2018-12-13 10:15:00,706 - root - INFO - The train loss of epoch46-batch-0:0.08819007873535156
2018-12-13 10:19:02,804 - root - INFO - The train loss of epoch46-batch-50:0.08991778641939163
2018-12-13 10:23:05,884 - root - INFO - The train loss of epoch46-batch-100:0.368539035320282
2018-12-13 10:27:08,876 - root - INFO - The train loss of epoch46-batch-150:0.15019547939300537
2018-12-13 10:31:11,397 - root - INFO - The train loss of epoch46-batch-200:0.06965800374746323
2018-12-13 10:35:15,101 - root - INFO - The train loss of epoch46-batch-250:0.09647795557975769
2018-12-13 10:39:18,862 - root - INFO - The train loss of epoch46-batch-300:0.12352285534143448
2018-12-13 10:43:25,040 - root - INFO - The train loss of epoch46-batch-350:0.170549675822258
2018-12-13 10:45:02,078 - root - INFO - Epoch:46, train PA1:0.7646099980807225, MPA1:0.7510846450684276, MIoU1:0.5152109911592628, FWIoU1:0.6066729097322371
2018-12-13 10:45:02,087 - root - INFO - validating one epoch...
2018-12-13 10:48:07,033 - root - INFO - Epoch:46, validation PA1:0.7944212899376937, MPA1:0.6197883916033057, MIoU1:0.41038670877437067, FWIoU1:0.6518566663054343
2018-12-13 10:48:07,034 - root - INFO - The average loss of val loss:0.18016206485129171
2018-12-13 10:48:07,039 - root - INFO - => The MIoU of val does't improve.
2018-12-13 10:48:07,040 - root - INFO - Training one epoch...
2018-12-13 10:48:13,381 - root - INFO - The train loss of epoch47-batch-0:0.09048359841108322
2018-12-13 10:52:17,215 - root - INFO - The train loss of epoch47-batch-50:0.05842157453298569
2018-12-13 10:56:20,156 - root - INFO - The train loss of epoch47-batch-100:0.11364786326885223
2018-12-13 11:00:22,800 - root - INFO - The train loss of epoch47-batch-150:0.09763438999652863
2018-12-13 11:04:25,994 - root - INFO - The train loss of epoch47-batch-200:0.0978146344423294
2018-12-13 11:08:29,128 - root - INFO - The train loss of epoch47-batch-250:0.1439596265554428
2018-12-13 11:12:32,393 - root - INFO - The train loss of epoch47-batch-300:0.3302207291126251
2018-12-13 11:16:35,405 - root - INFO - The train loss of epoch47-batch-350:0.09564121812582016
2018-12-13 11:18:13,194 - root - INFO - Epoch:47, train PA1:0.7589314834013735, MPA1:0.7529034432378411, MIoU1:0.5020009957511828, FWIoU1:0.5974306346612821
2018-12-13 11:18:13,200 - root - INFO - validating one epoch...
2018-12-13 11:21:18,453 - root - INFO - Epoch:47, validation PA1:0.7905504407875278, MPA1:0.6164692569948129, MIoU1:0.415324616541905, FWIoU1:0.6475561801487475
2018-12-13 11:21:18,454 - root - INFO - The average loss of val loss:0.20610209500357027
2018-12-13 11:21:18,458 - root - INFO - => The MIoU of val does't improve.
2018-12-13 11:21:18,459 - root - INFO - Training one epoch...
2018-12-13 11:21:24,756 - root - INFO - The train loss of epoch48-batch-0:0.06992433965206146
2018-12-13 11:25:27,452 - root - INFO - The train loss of epoch48-batch-50:0.10963446646928787
2018-12-13 11:29:30,785 - root - INFO - The train loss of epoch48-batch-100:0.0682460367679596
2018-12-13 11:33:33,179 - root - INFO - The train loss of epoch48-batch-150:0.07839752733707428
2018-12-13 11:37:35,694 - root - INFO - The train loss of epoch48-batch-200:0.08701352775096893
2018-12-13 11:41:38,634 - root - INFO - The train loss of epoch48-batch-250:0.047389544546604156
2018-12-13 11:45:42,242 - root - INFO - The train loss of epoch48-batch-300:0.13758881390094757
2018-12-13 11:49:48,619 - root - INFO - The train loss of epoch48-batch-350:0.216863214969635
2018-12-13 11:51:26,199 - root - INFO - Epoch:48, train PA1:0.7632894667451503, MPA1:0.7530050564707703, MIoU1:0.5107846232425504, FWIoU1:0.6021396370298391
2018-12-13 11:51:26,204 - root - INFO - validating one epoch...
2018-12-13 11:54:32,561 - root - INFO - Epoch:48, validation PA1:0.7897825514182656, MPA1:0.615886360238975, MIoU1:0.40625331620577115, FWIoU1:0.6488580692862379
2018-12-13 11:54:32,561 - root - INFO - The average loss of val loss:0.21670056260641543
2018-12-13 11:54:32,567 - root - INFO - => The MIoU of val does't improve.
2018-12-13 11:54:32,568 - root - INFO - Training one epoch...
2018-12-13 11:54:39,040 - root - INFO - The train loss of epoch49-batch-0:0.09870622307062149
2018-12-13 11:58:41,705 - root - INFO - The train loss of epoch49-batch-50:0.1003650650382042
2018-12-13 12:02:44,744 - root - INFO - The train loss of epoch49-batch-100:0.10221607983112335
2018-12-13 12:06:47,867 - root - INFO - The train loss of epoch49-batch-150:0.08770084381103516
2018-12-13 12:10:50,914 - root - INFO - The train loss of epoch49-batch-200:0.0888170599937439
2018-12-13 12:14:54,751 - root - INFO - The train loss of epoch49-batch-250:0.072165347635746
2018-12-13 12:18:58,020 - root - INFO - The train loss of epoch49-batch-300:0.0868273377418518
2018-12-13 12:23:00,690 - root - INFO - The train loss of epoch49-batch-350:0.12323672324419022
2018-12-13 12:24:38,260 - root - INFO - Epoch:49, train PA1:0.7643465736851026, MPA1:0.7585873599933397, MIoU1:0.5086543787247052, FWIoU1:0.6036936398553548
2018-12-13 12:24:38,269 - root - INFO - validating one epoch...
2018-12-13 12:27:44,635 - root - INFO - Epoch:49, validation PA1:0.7862792678052212, MPA1:0.605554228283779, MIoU1:0.4065103798141063, FWIoU1:0.6416130499301467
2018-12-13 12:27:44,635 - root - INFO - The average loss of val loss:0.24078203599539497
2018-12-13 12:27:44,643 - root - INFO - => The MIoU of val does't improve.
2018-12-13 12:27:44,644 - root - INFO - Training one epoch...
2018-12-13 12:27:51,001 - root - INFO - The train loss of epoch50-batch-0:0.10168762505054474
2018-12-13 12:31:53,342 - root - INFO - The train loss of epoch50-batch-50:0.08640103042125702
2018-12-13 12:35:55,306 - root - INFO - The train loss of epoch50-batch-100:0.12076159566640854
2018-12-13 12:39:58,250 - root - INFO - The train loss of epoch50-batch-150:0.10104615241289139
2018-12-13 12:44:00,867 - root - INFO - The train loss of epoch50-batch-200:0.09130750596523285
2018-12-13 12:48:03,478 - root - INFO - The train loss of epoch50-batch-250:0.11974288523197174
2018-12-13 12:52:07,174 - root - INFO - The train loss of epoch50-batch-300:0.07772654294967651
2018-12-13 12:56:14,347 - root - INFO - The train loss of epoch50-batch-350:0.06739121675491333
2018-12-13 12:57:51,697 - root - INFO - Epoch:50, train PA1:0.7627064755791834, MPA1:0.7547737701204056, MIoU1:0.49865186570969106, FWIoU1:0.601159193428398
2018-12-13 12:57:51,707 - root - INFO - validating one epoch...
2018-12-13 13:00:57,023 - root - INFO - Epoch:50, validation PA1:0.7866060463538761, MPA1:0.6040379839535234, MIoU1:0.41146138702121504, FWIoU1:0.6430915964607555
2018-12-13 13:00:57,024 - root - INFO - The average loss of val loss:0.28228418955639484
2018-12-13 13:00:57,030 - root - INFO - => The MIoU of val does't improve.
2018-12-13 13:00:57,031 - root - INFO - Training one epoch...
2018-12-13 13:01:03,568 - root - INFO - The train loss of epoch51-batch-0:0.06169849634170532
2018-12-13 13:05:06,762 - root - INFO - The train loss of epoch51-batch-50:0.1084810420870781
2018-12-13 13:09:10,470 - root - INFO - The train loss of epoch51-batch-100:0.06736481934785843
2018-12-13 13:13:13,419 - root - INFO - The train loss of epoch51-batch-150:0.057852309197187424
2018-12-13 13:17:16,367 - root - INFO - The train loss of epoch51-batch-200:0.09428554773330688
2018-12-13 13:21:19,415 - root - INFO - The train loss of epoch51-batch-250:0.1921323537826538
2018-12-13 13:25:22,255 - root - INFO - The train loss of epoch51-batch-300:0.07636352628469467
2018-12-13 13:29:25,287 - root - INFO - The train loss of epoch51-batch-350:0.07374560087919235
2018-12-13 13:31:02,881 - root - INFO - Epoch:51, train PA1:0.7640495319749797, MPA1:0.7612897287143707, MIoU1:0.5114479681219085, FWIoU1:0.6042251061558799
2018-12-13 13:31:02,889 - root - INFO - validating one epoch...
2018-12-13 13:34:08,908 - root - INFO - Epoch:51, validation PA1:0.7881480534585956, MPA1:0.6171226596030677, MIoU1:0.40570197684972464, FWIoU1:0.6461738841764393
2018-12-13 13:34:08,909 - root - INFO - The average loss of val loss:0.23899343940279177
2018-12-13 13:34:08,914 - root - INFO - => The MIoU of val does't improve.
2018-12-13 13:34:08,915 - root - INFO - Training one epoch...
2018-12-13 13:34:15,114 - root - INFO - The train loss of epoch52-batch-0:0.0782838761806488
2018-12-13 13:38:17,877 - root - INFO - The train loss of epoch52-batch-50:0.09082192927598953
2018-12-13 13:42:21,235 - root - INFO - The train loss of epoch52-batch-100:0.08561979234218597
2018-12-13 13:46:25,754 - root - INFO - The train loss of epoch52-batch-150:0.13238553702831268
2018-12-13 13:50:28,740 - root - INFO - The train loss of epoch52-batch-200:0.11884132027626038
2018-12-13 13:54:32,030 - root - INFO - The train loss of epoch52-batch-250:0.04790753498673439
2018-12-13 13:58:34,995 - root - INFO - The train loss of epoch52-batch-300:0.18667228519916534
2018-12-13 14:02:40,590 - root - INFO - The train loss of epoch52-batch-350:0.06831136345863342
2018-12-13 14:04:19,390 - root - INFO - Epoch:52, train PA1:0.7632236122465821, MPA1:0.7580736925751327, MIoU1:0.514129407954411, FWIoU1:0.602198227842077
2018-12-13 14:04:19,398 - root - INFO - validating one epoch...
2018-12-13 14:07:25,251 - root - INFO - Epoch:52, validation PA1:0.789895864781603, MPA1:0.6281942177903, MIoU1:0.4166184104259062, FWIoU1:0.6476028807921927
2018-12-13 14:07:25,251 - root - INFO - The average loss of val loss:0.2121198303516834
2018-12-13 14:07:25,257 - root - INFO - => The MIoU of val does't improve.
2018-12-13 14:07:25,264 - root - INFO - Training one epoch...
2018-12-13 14:07:32,603 - root - INFO - The train loss of epoch53-batch-0:0.0687919482588768
2018-12-13 14:11:35,641 - root - INFO - The train loss of epoch53-batch-50:0.05776502192020416
2018-12-13 14:15:38,678 - root - INFO - The train loss of epoch53-batch-100:0.0746932104229927
2018-12-13 14:19:41,194 - root - INFO - The train loss of epoch53-batch-150:0.07240303605794907
2018-12-13 14:23:44,340 - root - INFO - The train loss of epoch53-batch-200:0.07603862136602402
2018-12-13 14:27:47,306 - root - INFO - The train loss of epoch53-batch-250:0.07820028066635132
2018-12-13 14:31:49,848 - root - INFO - The train loss of epoch53-batch-300:0.10506778210401535
2018-12-13 14:35:52,818 - root - INFO - The train loss of epoch53-batch-350:0.06679998338222504
2018-12-13 14:37:29,873 - root - INFO - Epoch:53, train PA1:0.7634309480387906, MPA1:0.7740422407871075, MIoU1:0.5200862727081316, FWIoU1:0.6015239855971286
2018-12-13 14:37:29,879 - root - INFO - validating one epoch...
2018-12-13 14:40:35,839 - root - INFO - Epoch:53, validation PA1:0.7899091718813337, MPA1:0.6201096786791629, MIoU1:0.40505243548731046, FWIoU1:0.6493028364950444
2018-12-13 14:40:35,839 - root - INFO - The average loss of val loss:0.22121838534310942
2018-12-13 14:40:35,843 - root - INFO - => The MIoU of val does't improve.
2018-12-13 14:40:35,844 - root - INFO - Training one epoch...
2018-12-13 14:40:42,263 - root - INFO - The train loss of epoch54-batch-0:0.1005401462316513
2018-12-13 14:44:44,672 - root - INFO - The train loss of epoch54-batch-50:0.0726105272769928
2018-12-13 14:48:47,438 - root - INFO - The train loss of epoch54-batch-100:0.05904143303632736
2018-12-13 14:52:50,378 - root - INFO - The train loss of epoch54-batch-150:0.06941134482622147
2018-12-13 14:56:53,568 - root - INFO - The train loss of epoch54-batch-200:0.08863436430692673
2018-12-13 15:00:56,946 - root - INFO - The train loss of epoch54-batch-250:0.09899318218231201
2018-12-13 15:05:00,469 - root - INFO - The train loss of epoch54-batch-300:0.0745646134018898
2018-12-13 15:09:03,386 - root - INFO - The train loss of epoch54-batch-350:0.13170114159584045
2018-12-13 15:10:45,106 - root - INFO - Epoch:54, train PA1:0.7610078589184884, MPA1:0.7740177984067671, MIoU1:0.5185653845742482, FWIoU1:0.5974627895054293
2018-12-13 15:10:45,114 - root - INFO - validating one epoch...
2018-12-13 15:13:50,406 - root - INFO - Epoch:54, validation PA1:0.7897420938513814, MPA1:0.6181021592736958, MIoU1:0.4143790048354276, FWIoU1:0.6462432001855347
2018-12-13 15:13:50,407 - root - INFO - The average loss of val loss:0.22271988967493658
2018-12-13 15:13:50,414 - root - INFO - => The MIoU of val does't improve.
2018-12-13 15:13:50,416 - root - INFO - Training one epoch...
2018-12-13 15:13:56,829 - root - INFO - The train loss of epoch55-batch-0:0.04741481691598892
2018-12-13 15:17:59,838 - root - INFO - The train loss of epoch55-batch-50:0.05225010961294174
2018-12-13 15:22:02,942 - root - INFO - The train loss of epoch55-batch-100:0.09171963483095169
2018-12-13 15:26:06,344 - root - INFO - The train loss of epoch55-batch-150:0.06425483524799347
2018-12-13 15:30:08,632 - root - INFO - The train loss of epoch55-batch-200:0.10209448635578156
2018-12-13 15:34:11,427 - root - INFO - The train loss of epoch55-batch-250:0.07587062567472458
2018-12-13 15:38:15,333 - root - INFO - The train loss of epoch55-batch-300:0.10363408178091049
2018-12-13 15:42:18,721 - root - INFO - The train loss of epoch55-batch-350:0.0898434966802597
2018-12-13 15:43:56,103 - root - INFO - Epoch:55, train PA1:0.7692309154769302, MPA1:0.7631196191725609, MIoU1:0.515387040113829, FWIoU1:0.6115667455535314
2018-12-13 15:43:56,110 - root - INFO - validating one epoch...
2018-12-13 15:47:01,904 - root - INFO - Epoch:55, validation PA1:0.7891652689998417, MPA1:0.623396865648651, MIoU1:0.4154131677000919, FWIoU1:0.6463894340032799
2018-12-13 15:47:01,904 - root - INFO - The average loss of val loss:0.22222941392852413
2018-12-13 15:47:01,911 - root - INFO - => The MIoU of val does't improve.
2018-12-13 15:47:01,912 - root - INFO - Training one epoch...
2018-12-13 15:47:08,380 - root - INFO - The train loss of epoch56-batch-0:0.1205512061715126
2018-12-13 15:51:12,679 - root - INFO - The train loss of epoch56-batch-50:0.0677928626537323
2018-12-13 15:55:16,112 - root - INFO - The train loss of epoch56-batch-100:0.11961822211742401
2018-12-13 15:59:19,072 - root - INFO - The train loss of epoch56-batch-150:0.08341684192419052
2018-12-13 16:03:22,248 - root - INFO - The train loss of epoch56-batch-200:0.0561501570045948
2018-12-13 16:07:25,744 - root - INFO - The train loss of epoch56-batch-250:0.06939854472875595
2018-12-13 16:11:28,211 - root - INFO - The train loss of epoch56-batch-300:0.13303986191749573
2018-12-13 16:15:31,478 - root - INFO - The train loss of epoch56-batch-350:0.0932515487074852
2018-12-13 16:17:10,105 - root - INFO - Epoch:56, train PA1:0.7647786799785189, MPA1:0.7742268482682045, MIoU1:0.5253929381852884, FWIoU1:0.6048085578551314
2018-12-13 16:17:10,113 - root - INFO - validating one epoch...
2018-12-13 16:20:21,107 - root - INFO - Epoch:56, validation PA1:0.7907665336845365, MPA1:0.6259385059250647, MIoU1:0.4109590778898233, FWIoU1:0.6480727798871477
2018-12-13 16:20:21,107 - root - INFO - The average loss of val loss:0.21985313194172998
2018-12-13 16:20:21,114 - root - INFO - => The MIoU of val does't improve.
2018-12-13 16:20:21,115 - root - INFO - Training one epoch...
2018-12-13 16:20:27,441 - root - INFO - The train loss of epoch57-batch-0:0.10275351256132126
2018-12-13 16:24:30,520 - root - INFO - The train loss of epoch57-batch-50:0.07060400396585464
2018-12-13 16:28:33,583 - root - INFO - The train loss of epoch57-batch-100:0.1276392936706543
2018-12-13 16:32:36,285 - root - INFO - The train loss of epoch57-batch-150:0.08753172308206558
2018-12-13 16:36:40,723 - root - INFO - The train loss of epoch57-batch-200:0.07852430641651154
2018-12-13 16:40:47,872 - root - INFO - The train loss of epoch57-batch-250:0.09536261856555939
2018-12-13 16:44:50,888 - root - INFO - The train loss of epoch57-batch-300:0.07011724263429642
2018-12-13 16:48:53,916 - root - INFO - The train loss of epoch57-batch-350:0.07784625887870789
2018-12-13 16:50:31,369 - root - INFO - Epoch:57, train PA1:0.7609191400882991, MPA1:0.7712121848037696, MIoU1:0.5162486710959046, FWIoU1:0.5977952558153503
2018-12-13 16:50:31,382 - root - INFO - validating one epoch...
2018-12-13 16:53:37,165 - root - INFO - Epoch:57, validation PA1:0.7893775160914016, MPA1:0.6259623995481484, MIoU1:0.4194035650288884, FWIoU1:0.6472761850229698
2018-12-13 16:53:37,166 - root - INFO - The average loss of val loss:0.2260051191213631
2018-12-13 16:53:37,175 - root - INFO - => The MIoU of val does't improve.
2018-12-13 16:53:37,176 - root - INFO - Training one epoch...
2018-12-13 16:53:43,571 - root - INFO - The train loss of epoch58-batch-0:0.07160462439060211
2018-12-13 16:57:46,880 - root - INFO - The train loss of epoch58-batch-50:0.11006855964660645
2018-12-13 17:01:49,524 - root - INFO - The train loss of epoch58-batch-100:0.06691480427980423
2018-12-13 17:05:52,485 - root - INFO - The train loss of epoch58-batch-150:0.08883199840784073
2018-12-13 17:09:55,679 - root - INFO - The train loss of epoch58-batch-200:0.0703299343585968
2018-12-13 17:13:58,042 - root - INFO - The train loss of epoch58-batch-250:0.11311787366867065
2018-12-13 17:18:00,588 - root - INFO - The train loss of epoch58-batch-300:0.09246222674846649
2018-12-13 17:22:03,402 - root - INFO - The train loss of epoch58-batch-350:0.09537742286920547
2018-12-13 17:23:40,984 - root - INFO - Epoch:58, train PA1:0.7677211001480104, MPA1:0.7756353328948566, MIoU1:0.5355115137428919, FWIoU1:0.6076068994469996
2018-12-13 17:23:40,991 - root - INFO - validating one epoch...
2018-12-13 17:26:53,337 - root - INFO - Epoch:58, validation PA1:0.7897650537695696, MPA1:0.6152501529885999, MIoU1:0.40963261901577813, FWIoU1:0.6472262419808911
2018-12-13 17:26:53,338 - root - INFO - The average loss of val loss:0.2409130653306361
2018-12-13 17:26:53,343 - root - INFO - => The MIoU of val does't improve.
2018-12-13 17:26:53,344 - root - INFO - Training one epoch...
2018-12-13 17:26:59,625 - root - INFO - The train loss of epoch59-batch-0:0.0744636207818985
2018-12-13 17:31:04,215 - root - INFO - The train loss of epoch59-batch-50:0.07220152020454407
2018-12-13 17:35:07,498 - root - INFO - The train loss of epoch59-batch-100:0.06494809687137604
2018-12-13 17:39:10,444 - root - INFO - The train loss of epoch59-batch-150:0.08366767317056656
2018-12-13 17:43:13,979 - root - INFO - The train loss of epoch59-batch-200:0.08364100754261017
2018-12-13 17:47:17,541 - root - INFO - The train loss of epoch59-batch-250:0.09772937744855881
2018-12-13 17:51:20,269 - root - INFO - The train loss of epoch59-batch-300:0.09265224635601044
2018-12-13 17:55:23,369 - root - INFO - The train loss of epoch59-batch-350:0.10160361975431442
2018-12-13 17:57:00,741 - root - INFO - Epoch:59, train PA1:0.7646409562756343, MPA1:0.7741443915777195, MIoU1:0.5148740285561884, FWIoU1:0.6046510050723185
2018-12-13 17:57:00,747 - root - INFO - validating one epoch...
2018-12-13 18:00:06,099 - root - INFO - Epoch:59, validation PA1:0.7887978490210938, MPA1:0.6228833745982363, MIoU1:0.4123514087857463, FWIoU1:0.6465809413994127
2018-12-13 18:00:06,099 - root - INFO - The average loss of val loss:0.2203185179781529
2018-12-13 18:00:06,105 - root - INFO - => The MIoU of val does't improve.
2018-12-13 18:00:06,105 - root - INFO - Training one epoch...
2018-12-13 18:00:12,219 - root - INFO - The train loss of epoch60-batch-0:0.06337448209524155
2018-12-13 18:04:15,620 - root - INFO - The train loss of epoch60-batch-50:0.08434600383043289
2018-12-13 18:08:18,810 - root - INFO - The train loss of epoch60-batch-100:0.07001485675573349
2018-12-13 18:12:21,974 - root - INFO - The train loss of epoch60-batch-150:0.10654978454113007
2018-12-13 18:16:24,850 - root - INFO - The train loss of epoch60-batch-200:0.0678277239203453
2018-12-13 18:20:28,257 - root - INFO - The train loss of epoch60-batch-250:0.0834929496049881
2018-12-13 18:24:30,686 - root - INFO - The train loss of epoch60-batch-300:0.0874016061425209
2018-12-13 18:28:33,465 - root - INFO - The train loss of epoch60-batch-350:0.06826363503932953
2018-12-13 18:30:10,769 - root - INFO - Epoch:60, train PA1:0.7635870807357051, MPA1:0.7725256283206763, MIoU1:0.5340521756882475, FWIoU1:0.599869077134983
2018-12-13 18:30:10,778 - root - INFO - validating one epoch...
2018-12-13 18:33:23,639 - root - INFO - Epoch:60, validation PA1:0.7909293751997214, MPA1:0.6239016580163486, MIoU1:0.41413805860620273, FWIoU1:0.6483175977199813
2018-12-13 18:33:23,640 - root - INFO - The average loss of val loss:0.22689539428439834
2018-12-13 18:33:23,648 - root - INFO - => The MIoU of val does't improve.
2018-12-13 18:33:23,661 - root - INFO - Training one epoch...
2018-12-13 18:33:30,339 - root - INFO - The train loss of epoch61-batch-0:0.08251336961984634
2018-12-13 18:37:32,866 - root - INFO - The train loss of epoch61-batch-50:0.10391673445701599
2018-12-13 18:41:35,614 - root - INFO - The train loss of epoch61-batch-100:0.07274757325649261
2018-12-13 18:45:39,182 - root - INFO - The train loss of epoch61-batch-150:0.11237944662570953
2018-12-13 18:49:42,032 - root - INFO - The train loss of epoch61-batch-200:0.08107034862041473
2018-12-13 18:53:45,188 - root - INFO - The train loss of epoch61-batch-250:0.07663043588399887
2018-12-13 18:57:48,083 - root - INFO - The train loss of epoch61-batch-300:0.10500387102365494
2018-12-13 19:01:51,030 - root - INFO - The train loss of epoch61-batch-350:0.22583593428134918
2018-12-13 19:03:28,271 - root - INFO - Epoch:61, train PA1:0.7622546134483044, MPA1:0.7898873642889642, MIoU1:0.5452208817975497, FWIoU1:0.5980414604740799
2018-12-13 19:03:28,278 - root - INFO - validating one epoch...
2018-12-13 19:06:33,203 - root - INFO - Epoch:61, validation PA1:0.7891364867467109, MPA1:0.6152954451996149, MIoU1:0.41688540482208875, FWIoU1:0.6455122770073866
2018-12-13 19:06:33,204 - root - INFO - The average loss of val loss:0.24023926474394336
2018-12-13 19:06:33,210 - root - INFO - => The MIoU of val does't improve.
2018-12-13 19:06:33,210 - root - INFO - Training one epoch...
2018-12-13 19:06:39,513 - root - INFO - The train loss of epoch62-batch-0:0.09184783697128296
2018-12-13 19:10:43,180 - root - INFO - The train loss of epoch62-batch-50:0.11436110734939575
2018-12-13 19:14:45,774 - root - INFO - The train loss of epoch62-batch-100:0.12125315517187119
2018-12-13 19:18:48,647 - root - INFO - The train loss of epoch62-batch-150:0.08708321303129196
2018-12-13 19:22:52,242 - root - INFO - The train loss of epoch62-batch-200:0.09532661736011505
2018-12-13 19:26:55,219 - root - INFO - The train loss of epoch62-batch-250:0.08076009154319763
2018-12-13 19:30:57,705 - root - INFO - The train loss of epoch62-batch-300:0.06428422033786774
2018-12-13 19:35:00,791 - root - INFO - The train loss of epoch62-batch-350:0.06612666696310043
2018-12-13 19:36:38,439 - root - INFO - Epoch:62, train PA1:0.7624629811376676, MPA1:0.7786950208451211, MIoU1:0.5169693991812865, FWIoU1:0.5990914121764414
2018-12-13 19:36:38,448 - root - INFO - validating one epoch...
2018-12-13 19:39:47,046 - root - INFO - Epoch:62, validation PA1:0.7910379464244673, MPA1:0.6171567309216681, MIoU1:0.4079296166597658, FWIoU1:0.6484545284868615
2018-12-13 19:39:47,047 - root - INFO - The average loss of val loss:0.23169328692939975
2018-12-13 19:39:47,055 - root - INFO - => The MIoU of val does't improve.
2018-12-13 19:39:47,067 - root - INFO - Training one epoch...
2018-12-13 19:39:54,520 - root - INFO - The train loss of epoch63-batch-0:0.08553973585367203
2018-12-13 19:44:00,385 - root - INFO - The train loss of epoch63-batch-50:0.08568034321069717
2018-12-13 19:48:06,262 - root - INFO - The train loss of epoch63-batch-100:0.1169237494468689
2018-12-13 19:52:14,040 - root - INFO - The train loss of epoch63-batch-150:0.05895553529262543
2018-12-13 19:56:21,791 - root - INFO - The train loss of epoch63-batch-200:0.10644489526748657
2018-12-13 20:00:30,376 - root - INFO - The train loss of epoch63-batch-250:0.06645834445953369
2018-12-13 20:04:37,676 - root - INFO - The train loss of epoch63-batch-300:0.0742335170507431
2018-12-13 20:08:46,666 - root - INFO - The train loss of epoch63-batch-350:0.07012823224067688
2018-12-13 20:10:27,227 - root - INFO - Epoch:63, train PA1:0.762079880997216, MPA1:0.7823474807022117, MIoU1:0.5220633123560562, FWIoU1:0.5995759401364736
2018-12-13 20:10:27,233 - root - INFO - validating one epoch...
2018-12-13 20:13:45,685 - root - INFO - Epoch:63, validation PA1:0.7905107646392806, MPA1:0.6239882569442742, MIoU1:0.41573718946410876, FWIoU1:0.6476574575406202
2018-12-13 20:13:45,685 - root - INFO - The average loss of val loss:0.23045483976602554
2018-12-13 20:13:45,693 - root - INFO - => The MIoU of val does't improve.
2018-12-13 20:13:45,695 - root - INFO - Training one epoch...
2018-12-13 20:13:51,990 - root - INFO - The train loss of epoch64-batch-0:0.09852486848831177
2018-12-13 20:17:59,816 - root - INFO - The train loss of epoch64-batch-50:0.05095476657152176
2018-12-13 20:22:08,260 - root - INFO - The train loss of epoch64-batch-100:0.08385670185089111
2018-12-13 20:26:16,122 - root - INFO - The train loss of epoch64-batch-150:0.08266631513834
2018-12-13 20:30:23,212 - root - INFO - The train loss of epoch64-batch-200:0.05973867326974869
2018-12-13 20:34:31,587 - root - INFO - The train loss of epoch64-batch-250:0.09944719076156616
2018-12-13 20:38:39,945 - root - INFO - The train loss of epoch64-batch-300:0.08625195175409317
2018-12-13 20:42:48,890 - root - INFO - The train loss of epoch64-batch-350:0.08041573315858841
2018-12-13 20:44:29,116 - root - INFO - Epoch:64, train PA1:0.7654096249159849, MPA1:0.7801140070816641, MIoU1:0.5208298614837534, FWIoU1:0.6032049040401949
2018-12-13 20:44:29,122 - root - INFO - validating one epoch...
2018-12-13 20:47:51,410 - root - INFO - Epoch:64, validation PA1:0.7897287790906836, MPA1:0.6249734053819598, MIoU1:0.4158604094038799, FWIoU1:0.6473440130003218
2018-12-13 20:47:51,411 - root - INFO - The average loss of val loss:0.25469921985941546
2018-12-13 20:47:51,418 - root - INFO - => The MIoU of val does't improve.
2018-12-13 20:47:51,429 - root - INFO - Training one epoch...
2018-12-13 20:48:00,527 - root - INFO - The train loss of epoch65-batch-0:0.08319487422704697
2018-12-13 20:52:09,768 - root - INFO - The train loss of epoch65-batch-50:0.08606398850679398
2018-12-13 20:56:17,097 - root - INFO - The train loss of epoch65-batch-100:0.07053034752607346
2018-12-13 21:00:26,883 - root - INFO - The train loss of epoch65-batch-150:0.0672755166888237
2018-12-13 21:04:33,938 - root - INFO - The train loss of epoch65-batch-200:0.08614799380302429
2018-12-13 21:08:40,746 - root - INFO - The train loss of epoch65-batch-250:0.0852571502327919
2018-12-13 21:12:50,987 - root - INFO - The train loss of epoch65-batch-300:0.11889611184597015
2018-12-13 21:16:58,346 - root - INFO - The train loss of epoch65-batch-350:0.061035338789224625
2018-12-13 21:18:38,023 - root - INFO - Epoch:65, train PA1:0.7682602868956774, MPA1:0.7825871912416922, MIoU1:0.5327009380506124, FWIoU1:0.6073953067701677
2018-12-13 21:18:38,029 - root - INFO - validating one epoch...
2018-12-13 21:21:56,975 - root - INFO - Epoch:65, validation PA1:0.7907102715426699, MPA1:0.6272150774145094, MIoU1:0.41498198420732296, FWIoU1:0.647995774838205
2018-12-13 21:21:56,975 - root - INFO - The average loss of val loss:0.22475364125303685
2018-12-13 21:21:56,983 - root - INFO - => The MIoU of val does't improve.
2018-12-13 21:21:56,984 - root - INFO - Training one epoch...
2018-12-13 21:22:03,334 - root - INFO - The train loss of epoch66-batch-0:0.07528220117092133
2018-12-13 21:26:09,984 - root - INFO - The train loss of epoch66-batch-50:0.0806317999958992
2018-12-13 21:30:17,617 - root - INFO - The train loss of epoch66-batch-100:0.09214980155229568
2018-12-13 21:34:26,894 - root - INFO - The train loss of epoch66-batch-150:0.08732631802558899
2018-12-13 21:38:35,172 - root - INFO - The train loss of epoch66-batch-200:0.09642141312360764
2018-12-13 21:42:43,475 - root - INFO - The train loss of epoch66-batch-250:0.06169944256544113
2018-12-13 21:46:52,035 - root - INFO - The train loss of epoch66-batch-300:0.06549649685621262
2018-12-13 21:51:00,378 - root - INFO - The train loss of epoch66-batch-350:0.1254327893257141
2018-12-13 21:52:42,989 - root - INFO - Epoch:66, train PA1:0.7678831668936994, MPA1:0.7822112874766756, MIoU1:0.5269287108215811, FWIoU1:0.6063021127745597
2018-12-13 21:52:42,997 - root - INFO - validating one epoch...
2018-12-13 21:56:05,719 - root - INFO - Epoch:66, validation PA1:0.7896994912137288, MPA1:0.6226886319123316, MIoU1:0.41402213053899006, FWIoU1:0.6473515177975074
2018-12-13 21:56:05,720 - root - INFO - The average loss of val loss:0.24658172998216846
2018-12-13 21:56:05,727 - root - INFO - => The MIoU of val does't improve.
2018-12-13 21:56:05,728 - root - INFO - Training one epoch...
2018-12-13 21:56:12,073 - root - INFO - The train loss of epoch67-batch-0:0.1051299124956131
2018-12-13 22:00:19,052 - root - INFO - The train loss of epoch67-batch-50:0.06767421960830688
2018-12-13 22:04:26,791 - root - INFO - The train loss of epoch67-batch-100:0.0718829557299614
2018-12-13 22:08:35,068 - root - INFO - The train loss of epoch67-batch-150:0.07090459018945694
2018-12-13 22:12:41,373 - root - INFO - The train loss of epoch67-batch-200:0.09438274800777435
2018-12-13 22:16:48,655 - root - INFO - The train loss of epoch67-batch-250:0.08194092661142349
2018-12-13 22:20:57,183 - root - INFO - The train loss of epoch67-batch-300:0.07344082742929459
2018-12-13 22:25:04,266 - root - INFO - The train loss of epoch67-batch-350:0.07594878226518631
2018-12-13 22:26:44,085 - root - INFO - Epoch:67, train PA1:0.7702291159404717, MPA1:0.796208979184917, MIoU1:0.5565734261897932, FWIoU1:0.6108422705971961
2018-12-13 22:26:44,094 - root - INFO - validating one epoch...
2018-12-13 22:30:04,897 - root - INFO - Epoch:67, validation PA1:0.7899002008889419, MPA1:0.6287718611505536, MIoU1:0.419693421374722, FWIoU1:0.6481927579899972
2018-12-13 22:30:04,898 - root - INFO - The average loss of val loss:0.25184370222831926
2018-12-13 22:30:04,905 - root - INFO - => The MIoU of val does't improve.
2018-12-13 22:30:04,906 - root - INFO - Training one epoch...
2018-12-13 22:30:11,301 - root - INFO - The train loss of epoch68-batch-0:0.06758487969636917
2018-12-13 22:34:19,512 - root - INFO - The train loss of epoch68-batch-50:0.08597459644079208
2018-12-13 22:38:27,407 - root - INFO - The train loss of epoch68-batch-100:0.07099548727273941
2018-12-13 22:42:36,235 - root - INFO - The train loss of epoch68-batch-150:0.11633608490228653
2018-12-13 22:46:43,655 - root - INFO - The train loss of epoch68-batch-200:0.08217640221118927
2018-12-13 22:50:51,968 - root - INFO - The train loss of epoch68-batch-250:0.07681643217802048
2018-12-13 22:54:59,881 - root - INFO - The train loss of epoch68-batch-300:0.06516711413860321
2018-12-13 22:59:11,901 - root - INFO - The train loss of epoch68-batch-350:0.09911930561065674
2018-12-13 23:00:52,539 - root - INFO - Epoch:68, train PA1:0.7643209785388027, MPA1:0.7909276136078042, MIoU1:0.5418279494104448, FWIoU1:0.6017257649312583
2018-12-13 23:00:52,545 - root - INFO - validating one epoch...
2018-12-13 23:04:11,056 - root - INFO - Epoch:68, validation PA1:0.7901078667222179, MPA1:0.6267957332301328, MIoU1:0.41342796955837463, FWIoU1:0.6490185243944938
2018-12-13 23:04:11,057 - root - INFO - The average loss of val loss:0.25829273792764834
2018-12-13 23:04:11,063 - root - INFO - => The MIoU of val does't improve.
2018-12-13 23:04:11,064 - root - INFO - Training one epoch...
2018-12-13 23:04:17,305 - root - INFO - The train loss of epoch69-batch-0:0.10203228145837784
2018-12-13 23:08:24,720 - root - INFO - The train loss of epoch69-batch-50:0.07123924791812897
2018-12-13 23:12:33,432 - root - INFO - The train loss of epoch69-batch-100:0.07737849652767181
2018-12-13 23:16:40,662 - root - INFO - The train loss of epoch69-batch-150:0.05919199436903
2018-12-13 23:20:48,535 - root - INFO - The train loss of epoch69-batch-200:0.08492010831832886
2018-12-13 23:24:56,959 - root - INFO - The train loss of epoch69-batch-250:0.08474017679691315
2018-12-13 23:29:05,416 - root - INFO - The train loss of epoch69-batch-300:0.07042057067155838
2018-12-13 23:33:14,063 - root - INFO - The train loss of epoch69-batch-350:0.13659970462322235
2018-12-13 23:34:53,349 - root - INFO - Epoch:69, train PA1:0.7678240965428025, MPA1:0.7873688345588803, MIoU1:0.5327121983829061, FWIoU1:0.6072800282751639
2018-12-13 23:34:53,356 - root - INFO - validating one epoch...
2018-12-13 23:38:11,405 - root - INFO - Epoch:69, validation PA1:0.7904548242580293, MPA1:0.6254524905624455, MIoU1:0.4148655501626906, FWIoU1:0.6478575012259936
2018-12-13 23:38:11,406 - root - INFO - The average loss of val loss:0.26267245838478687
2018-12-13 23:38:11,412 - root - INFO - => The MIoU of val does't improve.
2018-12-13 23:38:11,413 - root - INFO - Training one epoch...
2018-12-13 23:38:17,749 - root - INFO - The train loss of epoch70-batch-0:0.06137042120099068
2018-12-13 23:42:25,034 - root - INFO - The train loss of epoch70-batch-50:0.07089225947856903
2018-12-13 23:46:32,864 - root - INFO - The train loss of epoch70-batch-100:0.09390285611152649
2018-12-13 23:50:40,475 - root - INFO - The train loss of epoch70-batch-150:0.07700946182012558
2018-12-13 23:54:50,090 - root - INFO - The train loss of epoch70-batch-200:0.08167420327663422
2018-12-13 23:58:59,025 - root - INFO - The train loss of epoch70-batch-250:0.09366551041603088
2018-12-14 00:03:06,342 - root - INFO - The train loss of epoch70-batch-300:0.05736307427287102
2018-12-14 00:07:19,055 - root - INFO - The train loss of epoch70-batch-350:0.06388724595308304
2018-12-14 00:08:58,886 - root - INFO - Epoch:70, train PA1:0.7620357449891356, MPA1:0.7842927596360051, MIoU1:0.5257075771489359, FWIoU1:0.5992724383053318
2018-12-14 00:08:58,893 - root - INFO - validating one epoch...
2018-12-14 00:12:21,479 - root - INFO - Epoch:70, validation PA1:0.7901585899849219, MPA1:0.6226830240761158, MIoU1:0.41585806583946977, FWIoU1:0.6481184815598074
2018-12-14 00:12:21,479 - root - INFO - The average loss of val loss:0.24616649392391404
2018-12-14 00:12:21,487 - root - INFO - => The MIoU of val does't improve.
2018-12-14 00:12:21,488 - root - INFO - Training one epoch...
2018-12-14 00:12:27,977 - root - INFO - The train loss of epoch71-batch-0:0.06438329815864563
2018-12-14 00:16:34,381 - root - INFO - The train loss of epoch71-batch-50:0.09304267913103104
2018-12-14 00:20:42,554 - root - INFO - The train loss of epoch71-batch-100:0.06460950523614883
2018-12-14 00:24:50,330 - root - INFO - The train loss of epoch71-batch-150:0.07936233282089233
2018-12-14 00:28:58,675 - root - INFO - The train loss of epoch71-batch-200:0.07330217957496643
2018-12-14 00:33:05,263 - root - INFO - The train loss of epoch71-batch-250:0.07731010764837265
2018-12-14 00:37:13,013 - root - INFO - The train loss of epoch71-batch-300:0.08413219451904297
2018-12-14 00:41:20,347 - root - INFO - The train loss of epoch71-batch-350:0.22739818692207336
2018-12-14 00:42:59,510 - root - INFO - Epoch:71, train PA1:0.7607791439071891, MPA1:0.7883119185646114, MIoU1:0.5307382668004608, FWIoU1:0.5971343200305351
2018-12-14 00:42:59,519 - root - INFO - validating one epoch...
2018-12-14 00:46:23,733 - root - INFO - Epoch:71, validation PA1:0.7917014704398829, MPA1:0.6312652533463857, MIoU1:0.4172286805616815, FWIoU1:0.6503679371820665
2018-12-14 00:46:23,734 - root - INFO - The average loss of val loss:0.21791852295639053
2018-12-14 00:46:23,739 - root - INFO - => The MIoU of val does't improve.
2018-12-14 00:46:23,740 - root - INFO - Training one epoch...
2018-12-14 00:46:30,064 - root - INFO - The train loss of epoch72-batch-0:0.09353304654359818 2018-12-14 00:50:36,324 - root - INFO - The train loss of epoch72-batch-50:0.08358445018529892 2018-12-14 00:54:45,082 - root - INFO - The train loss of epoch72-batch-100:0.07374483346939087 2018-12-14 00:58:53,153 - root - INFO - The train loss of epoch72-batch-150:0.10037104040384293 2018-12-14 01:03:00,145 - root - INFO - The train loss of epoch72-batch-200:0.10257931798696518 2018-12-14 01:07:08,738 - root - INFO - The train loss of epoch72-batch-250:0.0834176167845726 2018-12-14 01:11:22,273 - root - INFO - The train loss of epoch72-batch-300:0.09660014510154724 2018-12-14 01:15:31,514 - root - INFO - The train loss of epoch72-batch-350:0.06575053185224533 2018-12-14 01:17:11,049 - root - INFO - Epoch:72, train PA1:0.7682250449192769, MPA1:0.7930387129117598, MIoU1:0.5274839777259858, FWIoU1:0.6081188555849596 2018-12-14 01:17:11,056 - root - INFO - validating one epoch... 2018-12-14 01:20:33,448 - root - INFO - Epoch:72, validation PA1:0.7914134334014975, MPA1:0.6324275220881757, MIoU1:0.4161404983511696, FWIoU1:0.6511594160900951 2018-12-14 01:20:33,448 - root - INFO - The average loss of val loss:0.2211807995073257 2018-12-14 01:20:33,454 - root - INFO - => The MIoU of val does't improve. 2018-12-14 01:20:33,455 - root - INFO - Training one epoch... 
2018-12-14 01:20:39,933 - root - INFO - The train loss of epoch73-batch-0:0.08717765659093857 2018-12-14 01:24:48,266 - root - INFO - The train loss of epoch73-batch-50:0.08627194166183472 2018-12-14 01:28:56,144 - root - INFO - The train loss of epoch73-batch-100:0.07482896745204926 2018-12-14 01:33:04,884 - root - INFO - The train loss of epoch73-batch-150:0.08533979207277298 2018-12-14 01:37:14,268 - root - INFO - The train loss of epoch73-batch-200:0.06209447607398033 2018-12-14 01:41:23,322 - root - INFO - The train loss of epoch73-batch-250:0.05119028314948082 2018-12-14 01:45:31,409 - root - INFO - The train loss of epoch73-batch-300:0.059949517250061035 2018-12-14 01:49:39,908 - root - INFO - The train loss of epoch73-batch-350:0.09179580956697464 2018-12-14 01:51:19,191 - root - INFO - Epoch:73, train PA1:0.7675282890124031, MPA1:0.7975030050019293, MIoU1:0.5428111648987721, FWIoU1:0.6075637701284892 2018-12-14 01:51:19,200 - root - INFO - validating one epoch... 2018-12-14 01:54:38,264 - root - INFO - Epoch:73, validation PA1:0.7915930217906094, MPA1:0.6335376775770339, MIoU1:0.4133923794194, FWIoU1:0.650395485677975 2018-12-14 01:54:38,265 - root - INFO - The average loss of val loss:0.22174503443942917 2018-12-14 01:54:38,270 - root - INFO - => The MIoU of val does't improve. 2018-12-14 01:54:38,271 - root - INFO - Training one epoch... 
2018-12-14 01:54:44,838 - root - INFO - The train loss of epoch74-batch-0:0.09974899142980576 2018-12-14 01:58:52,408 - root - INFO - The train loss of epoch74-batch-50:0.07895980030298233 2018-12-14 02:03:01,865 - root - INFO - The train loss of epoch74-batch-100:0.06876935064792633 2018-12-14 02:07:10,570 - root - INFO - The train loss of epoch74-batch-150:0.05675329640507698 2018-12-14 02:11:17,739 - root - INFO - The train loss of epoch74-batch-200:0.12472642213106155 2018-12-14 02:15:27,569 - root - INFO - The train loss of epoch74-batch-250:0.07525446265935898 2018-12-14 02:19:39,934 - root - INFO - The train loss of epoch74-batch-300:0.08585639297962189 2018-12-14 02:23:48,125 - root - INFO - The train loss of epoch74-batch-350:0.0764705091714859 2018-12-14 02:25:27,171 - root - INFO - Epoch:74, train PA1:0.7638158200721235, MPA1:0.797840985113268, MIoU1:0.5294849985365205, FWIoU1:0.6023205505145666 2018-12-14 02:25:27,186 - root - INFO - validating one epoch... 2018-12-14 02:28:45,158 - root - INFO - Epoch:74, validation PA1:0.7920613060612713, MPA1:0.6307754133176114, MIoU1:0.4139843530526113, FWIoU1:0.6497765931580093 2018-12-14 02:28:45,159 - root - INFO - The average loss of val loss:0.22273401723754022 2018-12-14 02:28:45,166 - root - INFO - => The MIoU of val does't improve. 2018-12-14 02:28:45,167 - root - INFO - Training one epoch... 
2018-12-14 02:28:51,973 - root - INFO - The train loss of epoch75-batch-0:0.0791291892528534 2018-12-14 02:33:01,156 - root - INFO - The train loss of epoch75-batch-50:0.10244867950677872 2018-12-14 02:37:08,439 - root - INFO - The train loss of epoch75-batch-100:0.06208749860525131 2018-12-14 02:41:17,088 - root - INFO - The train loss of epoch75-batch-150:0.07784713059663773 2018-12-14 02:45:25,530 - root - INFO - The train loss of epoch75-batch-200:0.07275502383708954 2018-12-14 02:49:38,819 - root - INFO - The train loss of epoch75-batch-250:0.11422920972108841 2018-12-14 02:53:45,749 - root - INFO - The train loss of epoch75-batch-300:0.11381519585847855 2018-12-14 02:57:53,207 - root - INFO - The train loss of epoch75-batch-350:0.08443800359964371 2018-12-14 02:59:31,667 - root - INFO - Epoch:75, train PA1:0.763034941502288, MPA1:0.7968421227249032, MIoU1:0.5255440794157734, FWIoU1:0.6001476229551184 2018-12-14 02:59:31,682 - root - INFO - validating one epoch... 2018-12-14 03:02:49,343 - root - INFO - Epoch:75, validation PA1:0.7919248795604051, MPA1:0.6348118519083673, MIoU1:0.41474245174615715, FWIoU1:0.6509873941129619 2018-12-14 03:02:49,344 - root - INFO - The average loss of val loss:0.21480117690178654 2018-12-14 03:02:49,350 - root - INFO - => The MIoU of val does't improve. 2018-12-14 03:02:49,352 - root - INFO - Training one epoch... 
2018-12-14 03:02:55,808 - root - INFO - The train loss of epoch76-batch-0:0.08766405284404755 2018-12-14 03:07:03,469 - root - INFO - The train loss of epoch76-batch-50:0.08219460397958755 2018-12-14 03:11:10,318 - root - INFO - The train loss of epoch76-batch-100:0.07174519449472427 2018-12-14 03:15:17,667 - root - INFO - The train loss of epoch76-batch-150:0.09451166540384293 2018-12-14 03:19:26,083 - root - INFO - The train loss of epoch76-batch-200:0.06423836201429367 2018-12-14 03:23:35,248 - root - INFO - The train loss of epoch76-batch-250:0.07439189404249191 2018-12-14 03:27:48,817 - root - INFO - The train loss of epoch76-batch-300:0.07415063679218292 2018-12-14 03:31:57,511 - root - INFO - The train loss of epoch76-batch-350:0.10046972334384918 2018-12-14 03:33:37,467 - root - INFO - Epoch:76, train PA1:0.7677943802092306, MPA1:0.8027397525719664, MIoU1:0.5401830885050509, FWIoU1:0.6082893070179308 2018-12-14 03:33:37,483 - root - INFO - validating one epoch... 2018-12-14 03:37:01,761 - root - INFO - Epoch:76, validation PA1:0.7920782444593741, MPA1:0.6328907753067616, MIoU1:0.4199800855796735, FWIoU1:0.6507084283953427 2018-12-14 03:37:01,762 - root - INFO - The average loss of val loss:0.22073180294565617 2018-12-14 03:37:01,770 - root - INFO - => The MIoU of val does't improve. 2018-12-14 03:37:01,771 - root - INFO - Training one epoch... 
2018-12-14 03:37:09,058 - root - INFO - The train loss of epoch77-batch-0:0.08227524161338806 2018-12-14 03:41:18,559 - root - INFO - The train loss of epoch77-batch-50:0.07898706942796707 2018-12-14 03:45:26,293 - root - INFO - The train loss of epoch77-batch-100:0.053116120398044586 2018-12-14 03:49:34,002 - root - INFO - The train loss of epoch77-batch-150:0.08511938154697418 2018-12-14 03:53:44,099 - root - INFO - The train loss of epoch77-batch-200:0.04978394880890846 2018-12-14 03:57:53,643 - root - INFO - The train loss of epoch77-batch-250:0.058944329619407654 2018-12-14 04:02:01,064 - root - INFO - The train loss of epoch77-batch-300:0.06549062579870224 2018-12-14 04:06:11,495 - root - INFO - The train loss of epoch77-batch-350:0.08315250277519226 2018-12-14 04:07:51,132 - root - INFO - Epoch:77, train PA1:0.7600297138754545, MPA1:0.7977944759453464, MIoU1:0.5288751101551519, FWIoU1:0.5963038057443125 2018-12-14 04:07:51,139 - root - INFO - validating one epoch... 2018-12-14 04:11:11,173 - root - INFO - Epoch:77, validation PA1:0.7913227888396013, MPA1:0.6323855516534461, MIoU1:0.41387757921707047, FWIoU1:0.6500691634428198 2018-12-14 04:11:11,174 - root - INFO - The average loss of val loss:0.21869239799918666 2018-12-14 04:11:11,181 - root - INFO - => The MIoU of val does't improve. 2018-12-14 04:11:11,183 - root - INFO - Training one epoch... 
2018-12-14 04:11:17,845 - root - INFO - The train loss of epoch78-batch-0:0.0800504982471466 2018-12-14 04:15:26,535 - root - INFO - The train loss of epoch78-batch-50:0.08068972080945969 2018-12-14 04:19:34,599 - root - INFO - The train loss of epoch78-batch-100:0.0926692858338356 2018-12-14 04:23:41,926 - root - INFO - The train loss of epoch78-batch-150:0.05476093292236328 2018-12-14 04:27:49,913 - root - INFO - The train loss of epoch78-batch-200:0.0578206330537796 2018-12-14 04:32:01,517 - root - INFO - The train loss of epoch78-batch-250:0.07043195515871048 2018-12-14 04:36:13,395 - root - INFO - The train loss of epoch78-batch-300:0.06784471124410629 2018-12-14 04:40:22,367 - root - INFO - The train loss of epoch78-batch-350:0.06906118243932724 2018-12-14 04:42:01,540 - root - INFO - Epoch:78, train PA1:0.7669144971246634, MPA1:0.7947067708112829, MIoU1:0.5358499434399407, FWIoU1:0.6064964592233362 2018-12-14 04:42:01,547 - root - INFO - validating one epoch... 2018-12-14 04:45:20,432 - root - INFO - Epoch:78, validation PA1:0.792505343371284, MPA1:0.6314530910611406, MIoU1:0.4185810993388035, FWIoU1:0.6508439871447552 2018-12-14 04:45:20,432 - root - INFO - The average loss of val loss:0.2186225937499154 2018-12-14 04:45:20,438 - root - INFO - => The MIoU of val does't improve. 2018-12-14 04:45:20,439 - root - INFO - Training one epoch... 
2018-12-14 04:45:26,945 - root - INFO - The train loss of epoch79-batch-0:0.1114247664809227 2018-12-14 04:49:34,446 - root - INFO - The train loss of epoch79-batch-50:0.14219745993614197 2018-12-14 04:53:41,943 - root - INFO - The train loss of epoch79-batch-100:0.06957314163446426 2018-12-14 04:57:49,602 - root - INFO - The train loss of epoch79-batch-150:0.0530533529818058 2018-12-14 05:01:57,119 - root - INFO - The train loss of epoch79-batch-200:0.08301427960395813 2018-12-14 05:06:03,679 - root - INFO - The train loss of epoch79-batch-250:0.08130020648241043 2018-12-14 05:10:11,611 - root - INFO - The train loss of epoch79-batch-300:0.0647607073187828 2018-12-14 05:14:19,684 - root - INFO - The train loss of epoch79-batch-350:0.05231017246842384 2018-12-14 05:15:59,807 - root - INFO - Epoch:79, train PA1:0.764338410687263, MPA1:0.7967679593778005, MIoU1:0.526968476005431, FWIoU1:0.6028187224790889 2018-12-14 05:15:59,813 - root - INFO - validating one epoch... 2018-12-14 05:19:17,869 - root - INFO - Epoch:79, validation PA1:0.7917265064801362, MPA1:0.6314447384259483, MIoU1:0.41553867080716883, FWIoU1:0.6503565430691121 2018-12-14 05:19:17,869 - root - INFO - The average loss of val loss:0.22075387240657884 2018-12-14 05:19:17,874 - root - INFO - => The MIoU of val does't improve. 2018-12-14 05:19:17,875 - root - INFO - Training one epoch... 
2018-12-14 05:19:25,189 - root - INFO - The train loss of epoch80-batch-0:0.06983697414398193 2018-12-14 05:23:35,266 - root - INFO - The train loss of epoch80-batch-50:0.09156380593776703 2018-12-14 05:27:43,725 - root - INFO - The train loss of epoch80-batch-100:0.07980844378471375 2018-12-14 05:31:51,915 - root - INFO - The train loss of epoch80-batch-150:0.05648236721754074 2018-12-14 05:35:59,338 - root - INFO - The train loss of epoch80-batch-200:0.06080227345228195 2018-12-14 05:40:11,882 - root - INFO - The train loss of epoch80-batch-250:0.09029868245124817 2018-12-14 05:44:18,159 - root - INFO - The train loss of epoch80-batch-300:0.047231271862983704 2018-12-14 05:45:52,915 - root - INFO - iteration arrive 30000! 2018-12-14 05:45:53,006 - root - INFO - Epoch:80, train PA1:0.7677728074094023, MPA1:0.8028506846538848, MIoU1:0.5437916232782085, FWIoU1:0.6083327637309233 2018-12-14 05:45:53,015 - root - INFO - validating one epoch... 2018-12-14 05:49:10,157 - root - INFO - Epoch:80, validation PA1:0.7918806068319401, MPA1:0.6302434510780867, MIoU1:0.41529446691590366, FWIoU1:0.6507319497931728 2018-12-14 05:49:10,158 - root - INFO - The average loss of val loss:0.21744494428557734 2018-12-14 05:49:10,182 - root - INFO - => The MIoU of val does't improve. 2018-12-14 05:49:10,185 - root - INFO - =>saving the final checkpoint...
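For anyone puzzling over the four numbers logged each epoch: they are pixel accuracy (PA), mean per-class pixel accuracy (MPA), mean IoU (MIoU), and frequency-weighted IoU (FWIoU). A minimal sketch of how they fall out of a confusion matrix follows; this is not this repo's exact `Evaluator`, and `evaluate` is a name chosen here for illustration:

```python
import numpy as np

def evaluate(conf_mat):
    """Compute PA, MPA, MIoU, FWIoU from a (C, C) confusion matrix whose
    rows are ground-truth classes and columns are predicted classes."""
    conf_mat = conf_mat.astype(np.float64)
    tp = np.diag(conf_mat)                 # correctly classified pixels per class
    gt = conf_mat.sum(axis=1)              # ground-truth pixels per class
    pred = conf_mat.sum(axis=0)            # predicted pixels per class
    total = conf_mat.sum()

    pa = tp.sum() / total                  # overall pixel accuracy
    mpa = np.nanmean(tp / gt)              # mean per-class accuracy (nan if class absent)
    iou = tp / (gt + pred - tp)            # per-class intersection over union
    miou = np.nanmean(iou)                 # mean IoU over classes
    fwiou = ((gt / total) * iou).sum()     # IoU weighted by class frequency
    return pa, mpa, miou, fwiou
```

A symptom worth noting: in the log above, train MIoU (~0.54) is far higher than validation MIoU (~0.41) while PA stays similar, which usually means a handful of rare classes are being missed on the validation set.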
@Vipermdl This is mine:
2018-12-19 05:29:12,675 - root - INFO - Epoch:80, train PA1:0.960404015835341, MPA1:0.8597161731054243, MIoU1:0.7849944881184294, FWIoU1:0.926682433172048
2018-12-19 05:29:12,686 - root - INFO - validating one epoch...
2018-12-19 05:29:44,044 - root - INFO - Epoch:80, validation PA1:0.9546706799777404, MPA1:0.7769070199055423, MIoU1:0.6781781201728705, FWIoU1:0.9184429904893292
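Since the lower numbers earlier in the thread came from a run with a custom label encoding, one common pitfall is worth ruling out: Cityscapes PNGs ship raw labelIds 0–33, but only 19 classes are evaluated, so the masks must be remapped to contiguous train ids with everything else sent to an ignore index. A sketch of the standard 19-class mapping follows; the id list is the one used by the official cityscapesScripts, and `encode_labels` is a name chosen here:

```python
import numpy as np

# The 19 evaluated Cityscapes labelIds, in trainId order (road=0, ..., bicycle=18).
VALID_IDS = [7, 8, 11, 12, 13, 17, 19, 20, 21, 22, 23, 24, 25, 26,
             27, 28, 31, 32, 33]
IGNORE = 255  # ignore index for void / unlabeled pixels

def encode_labels(label_id_mask):
    """Map a raw labelId mask to train ids 0-18, everything else to IGNORE."""
    out = np.full_like(label_id_mask, IGNORE)
    for train_id, label_id in enumerate(VALID_IDS):
        out[label_id_mask == label_id] = train_id
    return out
```

If the loss is instead computed over all 33 raw ids (or the ignore index is missing), MIoU will be dragged down exactly as in the first log.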
@ranjiewwen Can you please share the pretrained weights and hyperparameters of your model trained on the Cityscapes dataset? Thanks in advance!
@ranjiewwen Hello! Can you please share the pretrained weights and hyperparameters of your model trained on the Cityscapes dataset? Thanks in advance!
@vineetgarg93 @EchoAmor I trained the model three months ago; here is the training log:
2018-12-18 17:54:05,198 - root - INFO - Global configuration as follows:
2018-12-18 17:54:05,199 - root - INFO - weight_decay 4e-05
2018-12-18 17:54:05,199 - root - INFO - validation True
2018-12-18 17:54:05,199 - root - INFO - split train
2018-12-18 17:54:05,199 - root - INFO - pretrained_ckpt_file None
2018-12-18 17:54:05,199 - root - INFO - result_filepath /mnt/disk/home1/rjw/maskrcnn/Deeplab-v3plus/Results/
2018-12-18 17:54:05,199 - root - INFO - base_size 513
2018-12-18 17:54:05,200 - root - INFO - iter_max 40000
2018-12-18 17:54:05,200 - root - INFO - batch_size 8
2018-12-18 17:54:05,200 - root - INFO - pin_memory 2
2018-12-18 17:54:05,200 - root - INFO - data_loader_workers 16
2018-12-18 17:54:05,200 - root - INFO - loss_weights_dir /mnt/disk/home1/rjw/maskrcnn/Deeplab-v3plus/pretrained_weights/
2018-12-18 17:54:05,200 - root - INFO - store_result_mask True
2018-12-18 17:54:05,200 - root - INFO - poly_power 0.9
2018-12-18 17:54:05,200 - root - INFO - output_stride 16
2018-12-18 17:54:05,200 - root - INFO - backbone resnet101
2018-12-18 17:54:05,200 - root - INFO - loss_weight_file None
2018-12-18 17:54:05,201 - root - INFO - freeze_bn False
2018-12-18 17:54:05,201 - root - INFO - bn_momentum 0.1
2018-12-18 17:54:05,201 - root - INFO - nesterov True
2018-12-18 17:54:05,201 - root - INFO - checkpoint_dir /mnt/disk/home1/rjw/maskrcnn/Deeplab-v3plus/checkpoints/
2018-12-18 17:54:05,201 - root - INFO - dataset cityscapes
2018-12-18 17:54:05,201 - root - INFO - dampening 0
2018-12-18 17:54:05,201 - root - INFO - num_classes 21
2018-12-18 17:54:05,201 - root - INFO - crop_size 513
2018-12-18 17:54:05,201 - root - INFO - save_ckpt_file True
2018-12-18 17:54:05,201 - root - INFO - lr 0.007
2018-12-18 17:54:05,202 - root - INFO - batch_size_per_gpu 4
2018-12-18 17:54:05,202 - root - INFO - momentum 0.9
2018-12-18 17:54:05,202 - root - INFO - imagenet_pretrained True
2018-12-18 17:54:05,202 - root - INFO - data_root_path /mnt/disk/home1/datasets/cityscapes
2018-12-18 17:54:05,202 - root - INFO - gpu 0,1
2018-12-18 17:54:05,202 - root - INFO - This model will run on GeForce GTX 1080 Ti
2018-12-18 17:54:05,205 - root - INFO - Training one epoch...
2018-12-18 17:54:14,459 - root - INFO - The train loss of epoch0-batch-0:3.105213165283203
2018-12-18 17:55:17,890 - root - INFO - The train loss of epoch0-batch-50:0.3679661750793457
2018-12-18 17:56:23,709 - root - INFO - The train loss of epoch0-batch-100:0.5137077569961548
2018-12-18 17:57:28,788 - root - INFO - The train loss of epoch0-batch-150:0.8621986508369446
2018-12-18 17:58:33,621 - root - INFO - The train loss of epoch0-batch-200:0.42010799050331116
2018-12-18 17:59:38,109 - root - INFO - The train loss of epoch0-batch-250:0.34816911816596985
2018-12-18 18:00:43,186 - root - INFO - The train loss of epoch0-batch-300:0.35692310333251953
2018-12-18 18:01:47,728 - root - INFO - The train loss of epoch0-batch-350:0.34519797563552856
2018-12-18 18:02:13,404 - root - INFO - Epoch:0, train PA1:0.8471083398419181, MPA1:0.36815424255211526, MIoU1:0.2771556819501126, FWIoU1:0.7449929533099614
2018-12-18 18:02:13,418 - root - INFO - validating one epoch...
2018-12-18 18:02:46,024 - root - INFO - Epoch:0, validation PA1:0.9064367735362833, MPA1:0.4738395420075205, MIoU1:0.3759366199884784, FWIoU1:0.8443342271867751
2018-12-18 18:02:46,025 - root - INFO - The average loss of val loss:0.30870433946450554
2018-12-18 18:02:46,054 - root - INFO - =>saving a new best checkpoint...
2018-12-18 18:02:46,683 - root - INFO - Training one epoch...
2018-12-18 18:02:50,988 - root - INFO - The train loss of epoch1-batch-0:0.4898870885372162
2018-12-18 18:03:55,597 - root - INFO - The train loss of epoch1-batch-50:0.40356940031051636
2018-12-18 18:05:00,737 - root - INFO - The train loss of epoch1-batch-100:0.3153564929962158
2018-12-18 18:06:05,432 - root - INFO - The train loss of epoch1-batch-150:0.3442077338695526
2018-12-18 18:07:10,298 - root - INFO - The train loss of epoch1-batch-200:0.2106415331363678
2018-12-18 18:08:15,208 - root - INFO - The train loss of epoch1-batch-250:0.3209742605686188
2018-12-18 18:09:20,099 - root - INFO - The train loss of epoch1-batch-300:0.2277599722146988
2018-12-18 18:10:24,497 - root - INFO - The train loss of epoch1-batch-350:0.2547980546951294
2018-12-18 18:10:50,179 - root - INFO - Epoch:1, train PA1:0.8979010234119551, MPA1:0.5043377687394488, MIoU1:0.4248467569909458, FWIoU1:0.8255249494734155
2018-12-18 18:10:50,218 - root - INFO - validating one epoch...
2018-12-18 18:11:23,330 - root - INFO - Epoch:1, validation PA1:0.9166050679110567, MPA1:0.5446108969222808, MIoU1:0.4546342303462631, FWIoU1:0.8567032554641715
2018-12-18 18:11:23,333 - root - INFO - The average loss of val loss:0.2671341742078463
2018-12-18 18:11:23,382 - root - INFO - =>saving a new best checkpoint...
2018-12-18 18:11:25,602 - root - INFO - Training one epoch...
2018-12-18 18:11:29,791 - root - INFO - The train loss of epoch2-batch-0:0.4005133807659149
2018-12-18 18:12:34,748 - root - INFO - The train loss of epoch2-batch-50:0.23912964761257172
2018-12-18 18:13:39,233 - root - INFO - The train loss of epoch2-batch-100:0.32318371534347534
2018-12-18 18:14:44,070 - root - INFO - The train loss of epoch2-batch-150:0.26054447889328003
2018-12-18 18:15:48,888 - root - INFO - The train loss of epoch2-batch-200:0.29560309648513794
2018-12-18 18:16:53,802 - root - INFO - The train loss of epoch2-batch-250:0.23331625759601593
2018-12-18 18:17:58,542 - root - INFO - The train loss of epoch2-batch-300:0.35474905371665955
2018-12-18 18:19:02,735 - root - INFO - The train loss of epoch2-batch-350:0.2132011204957962
2018-12-18 18:19:28,210 - root - INFO - Epoch:2, train PA1:0.9100428021529222, MPA1:0.5856068557499979, MIoU1:0.496803160184449, FWIoU1:0.8443194314057579
2018-12-18 18:19:28,219 - root - INFO - validating one epoch...
2018-12-18 18:20:01,143 - root - INFO - Epoch:2, validation PA1:0.9037186375849562, MPA1:0.5608774355961572, MIoU1:0.4355038954653795, FWIoU1:0.8418379547480819
2018-12-18 18:20:01,144 - root - INFO - The average loss of val loss:0.3054258202513059
2018-12-18 18:20:01,158 - root - INFO - => The MIoU of val does't improve.
2018-12-18 18:20:01,160 - root - INFO - Training one epoch...
2018-12-18 18:20:05,260 - root - INFO - The train loss of epoch3-batch-0:0.23327746987342834
2018-12-18 18:21:10,297 - root - INFO - The train loss of epoch3-batch-50:0.2950068712234497
2018-12-18 18:22:15,313 - root - INFO - The train loss of epoch3-batch-100:0.26180851459503174
2018-12-18 18:23:20,475 - root - INFO - The train loss of epoch3-batch-150:0.29220718145370483
2018-12-18 18:24:25,223 - root - INFO - The train loss of epoch3-batch-200:0.3865525722503662
2018-12-18 18:25:29,728 - root - INFO - The train loss of epoch3-batch-250:0.33540022373199463
2018-12-18 18:26:34,627 - root - INFO - The train loss of epoch3-batch-300:0.1981501430273056
2018-12-18 18:27:39,697 - root - INFO - The train loss of epoch3-batch-350:0.30841779708862305
2018-12-18 18:28:05,700 - root - INFO - Epoch:3, train PA1:0.9157526169801089, MPA1:0.6329592164362184, MIoU1:0.5379113233509486, FWIoU1:0.8532861703741821
2018-12-18 18:28:05,725 - root - INFO - validating one epoch...
2018-12-18 18:28:38,478 - root - INFO - Epoch:3, validation PA1:0.8574840134970824, MPA1:0.5370892807220976, MIoU1:0.4329318397738627, FWIoU1:0.7715159495061247
2018-12-18 18:28:38,479 - root - INFO - The average loss of val loss:0.44759037271142005
2018-12-18 18:28:38,503 - root - INFO - => The MIoU of val does't improve.
2018-12-18 18:28:38,506 - root - INFO - Training one epoch...
2018-12-18 18:28:42,653 - root - INFO - The train loss of epoch4-batch-0:0.2021380215883255
2018-12-18 18:29:47,929 - root - INFO - The train loss of epoch4-batch-50:0.38768818974494934
2018-12-18 18:30:52,583 - root - INFO - The train loss of epoch4-batch-100:0.2618211507797241
2018-12-18 18:31:57,221 - root - INFO - The train loss of epoch4-batch-150:0.24625049531459808
2018-12-18 18:33:02,198 - root - INFO - The train loss of epoch4-batch-200:0.22278162837028503
2018-12-18 18:34:07,177 - root - INFO - The train loss of epoch4-batch-250:0.20667491853237152
2018-12-18 18:35:12,159 - root - INFO - The train loss of epoch4-batch-300:0.15483419597148895
2018-12-18 18:36:17,061 - root - INFO - The train loss of epoch4-batch-350:0.30523592233657837
2018-12-18 18:36:42,882 - root - INFO - Epoch:4, train PA1:0.9236720510138721, MPA1:0.6729472655084068, MIoU1:0.5761747398684497, FWIoU1:0.8657457560432039
2018-12-18 18:36:42,952 - root - INFO - validating one epoch...
2018-12-18 18:37:15,582 - root - INFO - Epoch:4, validation PA1:0.8831103968028474, MPA1:0.5817280940269632, MIoU1:0.45809846638164475, FWIoU1:0.8265372531017151
2018-12-18 18:37:15,584 - root - INFO - The average loss of val loss:0.35091202730933824
2018-12-18 18:37:15,610 - root - INFO - =>saving a new best checkpoint...
2018-12-18 18:37:17,957 - root - INFO - Training one epoch...
2018-12-18 18:37:22,162 - root - INFO - The train loss of epoch5-batch-0:0.37704411149024963
2018-12-18 18:38:26,929 - root - INFO - The train loss of epoch5-batch-50:0.211386039853096
2018-12-18 18:39:32,090 - root - INFO - The train loss of epoch5-batch-100:0.15443618595600128
2018-12-18 18:40:37,044 - root - INFO - The train loss of epoch5-batch-150:0.23821282386779785
2018-12-18 18:41:42,057 - root - INFO - The train loss of epoch5-batch-200:0.24138842523097992
2018-12-18 18:42:46,805 - root - INFO - The train loss of epoch5-batch-250:0.1764390617609024
2018-12-18 18:43:51,692 - root - INFO - The train loss of epoch5-batch-300:0.29981303215026855
2018-12-18 18:44:56,484 - root - INFO - The train loss of epoch5-batch-350:0.24291643500328064
2018-12-18 18:45:22,462 - root - INFO - Epoch:5, train PA1:0.9266292599665544, MPA1:0.6830178874331835, MIoU1:0.585350615431899, FWIoU1:0.8706539236873176
2018-12-18 18:45:22,502 - root - INFO - validating one epoch...
2018-12-18 18:45:55,540 - root - INFO - Epoch:5, validation PA1:0.8411199851420524, MPA1:0.5310966878897643, MIoU1:0.4288965900677372, FWIoU1:0.7507006417640146
2018-12-18 18:45:55,541 - root - INFO - The average loss of val loss:0.5062937453389168
2018-12-18 18:45:55,551 - root - INFO - => The MIoU of val does't improve.
2018-12-18 18:45:55,560 - root - INFO - Training one epoch...
2018-12-18 18:45:59,717 - root - INFO - The train loss of epoch6-batch-0:0.1880403757095337
2018-12-18 18:47:04,898 - root - INFO - The train loss of epoch6-batch-50:0.2210015505552292
2018-12-18 18:48:09,966 - root - INFO - The train loss of epoch6-batch-100:0.18174408376216888
2018-12-18 18:49:14,738 - root - INFO - The train loss of epoch6-batch-150:0.13608503341674805
2018-12-18 18:50:19,776 - root - INFO - The train loss of epoch6-batch-200:0.34789055585861206
2018-12-18 18:51:24,818 - root - INFO - The train loss of epoch6-batch-250:0.22066549956798553
2018-12-18 18:52:29,508 - root - INFO - The train loss of epoch6-batch-300:0.1755780130624771
2018-12-18 18:53:34,044 - root - INFO - The train loss of epoch6-batch-350:0.24560095369815826
2018-12-18 18:53:59,997 - root - INFO - Epoch:6, train PA1:0.9294267261001503, MPA1:0.7007600925292443, MIoU1:0.603543050746942, FWIoU1:0.8751237605871653
2018-12-18 18:54:00,017 - root - INFO - validating one epoch...
2018-12-18 18:54:32,642 - root - INFO - Epoch:6, validation PA1:0.9232145109909883, MPA1:0.6347520526146259, MIoU1:0.536914695568311, FWIoU1:0.8667830088201138
2018-12-18 18:54:32,643 - root - INFO - The average loss of val loss:0.23243560989697773
2018-12-18 18:54:32,687 - root - INFO - =>saving a new best checkpoint...
2018-12-18 18:54:34,994 - root - INFO - Training one epoch...
2018-12-18 18:54:39,363 - root - INFO - The train loss of epoch7-batch-0:0.1845172643661499
2018-12-18 18:55:44,561 - root - INFO - The train loss of epoch7-batch-50:0.14596372842788696
2018-12-18 18:56:49,177 - root - INFO - The train loss of epoch7-batch-100:0.1645369529724121
2018-12-18 18:57:54,234 - root - INFO - The train loss of epoch7-batch-150:0.22754640877246857
2018-12-18 18:58:59,391 - root - INFO - The train loss of epoch7-batch-200:0.20612098276615143
2018-12-18 19:00:03,962 - root - INFO - The train loss of epoch7-batch-250:0.13677076995372772
2018-12-18 19:01:08,708 - root - INFO - The train loss of epoch7-batch-300:0.3023289442062378
2018-12-18 19:02:13,622 - root - INFO - The train loss of epoch7-batch-350:0.15888892114162445
2018-12-18 19:02:39,614 - root - INFO - Epoch:7, train PA1:0.930495356212701, MPA1:0.69956438086973, MIoU1:0.6011412275622511, FWIoU1:0.8770737401163605
2018-12-18 19:02:39,659 - root - INFO - validating one epoch...
2018-12-18 19:03:13,184 - root - INFO - Epoch:7, validation PA1:0.9219736106056861, MPA1:0.6467300626540183, MIoU1:0.5228643722870597, FWIoU1:0.8757266931451436
2018-12-18 19:03:13,185 - root - INFO - The average loss of val loss:0.23603716591993967
2018-12-18 19:03:13,215 - root - INFO - => The MIoU of val does't improve.
2018-12-18 19:03:13,219 - root - INFO - Training one epoch...
2018-12-18 19:03:17,639 - root - INFO - The train loss of epoch8-batch-0:0.2304859757423401
2018-12-18 19:04:22,048 - root - INFO - The train loss of epoch8-batch-50:0.149722158908844
2018-12-18 19:05:27,221 - root - INFO - The train loss of epoch8-batch-100:0.15301956236362457
2018-12-18 19:06:31,717 - root - INFO - The train loss of epoch8-batch-150:0.1392529010772705
2018-12-18 19:07:36,261 - root - INFO - The train loss of epoch8-batch-200:0.2594206929206848
2018-12-18 19:08:40,913 - root - INFO - The train loss of epoch8-batch-250:0.15666936337947845
2018-12-18 19:09:45,217 - root - INFO - The train loss of epoch8-batch-300:0.18884795904159546
2018-12-18 19:10:49,535 - root - INFO - The train loss of epoch8-batch-350:0.3278537392616272
2018-12-18 19:11:14,738 - root - INFO - Epoch:8, train PA1:0.9339701464544936, MPA1:0.7255693460162194, MIoU1:0.6295343535670126, FWIoU1:0.8823275684592258
2018-12-18 19:11:14,791 - root - INFO - validating one epoch...
2018-12-18 19:11:48,252 - root - INFO - Epoch:8, validation PA1:0.9294059880698022, MPA1:0.6669902725253402, MIoU1:0.5538947601846523, FWIoU1:0.8770885098640422
2018-12-18 19:11:48,253 - root - INFO - The average loss of val loss:0.21972065741817157
2018-12-18 19:11:48,274 - root - INFO - =>saving a new best checkpoint...
2018-12-18 19:11:50,506 - root - INFO - Training one epoch...
2018-12-18 19:11:54,942 - root - INFO - The train loss of epoch9-batch-0:0.2543436288833618
2018-12-18 19:12:59,588 - root - INFO - The train loss of epoch9-batch-50:0.13955341279506683
2018-12-18 19:14:03,881 - root - INFO - The train loss of epoch9-batch-100:0.19795522093772888
2018-12-18 19:15:08,748 - root - INFO - The train loss of epoch9-batch-150:0.20025207102298737
2018-12-18 19:16:13,104 - root - INFO - The train loss of epoch9-batch-200:0.17306740581989288
2018-12-18 19:17:18,457 - root - INFO - The train loss of epoch9-batch-250:0.40179750323295593
2018-12-18 19:18:23,773 - root - INFO - The train loss of epoch9-batch-300:0.16001413762569427
2018-12-18 19:19:28,828 - root - INFO - The train loss of epoch9-batch-350:0.17847847938537598
2018-12-18 19:19:54,737 - root - INFO - Epoch:9, train PA1:0.9339595814523568, MPA1:0.7241875446607036, MIoU1:0.6262966821457809, FWIoU1:0.8825053819821521
2018-12-18 19:19:54,768 - root - INFO - validating one epoch...
2018-12-18 19:20:28,350 - root - INFO - Epoch:9, validation PA1:0.9314855797282106, MPA1:0.6509329510413143, MIoU1:0.5460433989278686, FWIoU1:0.8839654039905053
2018-12-18 19:20:28,366 - root - INFO - The average loss of val loss:0.21291297773520151
2018-12-18 19:20:28,382 - root - INFO - => The MIoU of val does't improve.
2018-12-18 19:20:28,385 - root - INFO - Training one epoch...
2018-12-18 19:20:32,461 - root - INFO - The train loss of epoch10-batch-0:0.24839000403881073
2018-12-18 19:21:37,639 - root - INFO - The train loss of epoch10-batch-50:0.13113521039485931
2018-12-18 19:22:42,660 - root - INFO - The train loss of epoch10-batch-100:0.15095631778240204
2018-12-18 19:23:47,210 - root - INFO - The train loss of epoch10-batch-150:0.2521631121635437
2018-12-18 19:24:52,124 - root - INFO - The train loss of epoch10-batch-200:0.19150614738464355
2018-12-18 19:25:57,215 - root - INFO - The train loss of epoch10-batch-250:0.3524303734302521
2018-12-18 19:27:02,474 - root - INFO - The train loss of epoch10-batch-300:0.14330150187015533
2018-12-18 19:28:07,330 - root - INFO - The train loss of epoch10-batch-350:0.1593557745218277
2018-12-18 19:28:33,448 - root - INFO - Epoch:10, train PA1:0.9358452600425767, MPA1:0.7330317919903099, MIoU1:0.6364464710873552, FWIoU1:0.8853124160255903
2018-12-18 19:28:33,501 - root - INFO - validating one epoch...
2018-12-18 19:29:06,599 - root - INFO - Epoch:10, validation PA1:0.9403489386650664, MPA1:0.6969354819734873, MIoU1:0.5938936422442808, FWIoU1:0.8944278810838711
2018-12-18 19:29:06,615 - root - INFO - The average loss of val loss:0.18160311890145142
2018-12-18 19:29:06,632 - root - INFO - =>saving a new best checkpoint...
2018-12-18 19:29:08,958 - root - INFO - Training one epoch...
2018-12-18 19:29:12,477 - root - INFO - The train loss of epoch11-batch-0:0.1618078649044037
2018-12-18 19:30:17,707 - root - INFO - The train loss of epoch11-batch-50:0.14243601262569427
2018-12-18 19:31:22,154 - root - INFO - The train loss of epoch11-batch-100:0.18501533567905426
2018-12-18 19:32:27,420 - root - INFO - The train loss of epoch11-batch-150:0.21018964052200317
2018-12-18 19:33:31,610 - root - INFO - The train loss of epoch11-batch-200:0.17587348818778992
2018-12-18 19:34:36,452 - root - INFO - The train loss of epoch11-batch-250:0.16226692497730255
2018-12-18 19:35:41,120 - root - INFO - The train loss of epoch11-batch-300:0.22493338584899902
2018-12-18 19:36:45,999 - root - INFO - The train loss of epoch11-batch-350:0.1657806932926178
2018-12-18 19:37:11,833 - root - INFO - Epoch:11, train PA1:0.9393757329368527, MPA1:0.7510788428490974, MIoU1:0.6577778741541717, FWIoU1:0.8912564430128367
2018-12-18 19:37:11,884 - root - INFO - validating one epoch...
2018-12-18 19:37:44,751 - root - INFO - Epoch:11, validation PA1:0.9122872900549988, MPA1:0.6577314054860469, MIoU1:0.5494008779310471, FWIoU1:0.8629371195342281
2018-12-18 19:37:44,758 - root - INFO - The average loss of val loss:0.2674748100340366
2018-12-18 19:37:44,765 - root - INFO - => The MIoU of val does't improve.
2018-12-18 19:37:44,767 - root - INFO - Training one epoch...
2018-12-18 19:37:49,162 - root - INFO - The train loss of epoch12-batch-0:0.2052403837442398
2018-12-18 19:38:54,406 - root - INFO - The train loss of epoch12-batch-50:0.23103412985801697
2018-12-18 19:39:59,775 - root - INFO - The train loss of epoch12-batch-100:0.14761656522750854
2018-12-18 19:41:05,118 - root - INFO - The train loss of epoch12-batch-150:0.1412099152803421
2018-12-18 19:42:09,855 - root - INFO - The train loss of epoch12-batch-200:0.12884801626205444
2018-12-18 19:43:14,759 - root - INFO - The train loss of epoch12-batch-250:0.14161798357963562
2018-12-18 19:44:19,676 - root - INFO - The train loss of epoch12-batch-300:0.1914982944726944
2018-12-18 19:45:24,344 - root - INFO - The train loss of epoch12-batch-350:0.3062630295753479
2018-12-18 19:45:50,124 - root - INFO - Epoch:12, train PA1:0.9401579951411837, MPA1:0.7620410106257356, MIoU1:0.6687908554358432, FWIoU1:0.8925418532506475
2018-12-18 19:45:50,173 - root - INFO - validating one epoch...
2018-12-18 19:46:23,243 - root - INFO - Epoch:12, validation PA1:0.9184336929918867, MPA1:0.6667787872716044, MIoU1:0.5466537362637002, FWIoU1:0.8724086092600938
2018-12-18 19:46:23,244 - root - INFO - The average loss of val loss:0.25291464005907377
2018-12-18 19:46:23,282 - root - INFO - => The MIoU of val does't improve.
2018-12-18 19:46:23,285 - root - INFO - Training one epoch...
2018-12-18 19:46:27,781 - root - INFO - The train loss of epoch13-batch-0:0.16690053045749664
2018-12-18 19:47:32,986 - root - INFO - The train loss of epoch13-batch-50:0.24872185289859772
2018-12-18 19:48:38,219 - root - INFO - The train loss of epoch13-batch-100:0.13908275961875916
2018-12-18 19:49:42,461 - root - INFO - The train loss of epoch13-batch-150:0.27462950348854065
2018-12-18 19:50:47,069 - root - INFO - The train loss of epoch13-batch-200:0.17548222839832306
2018-12-18 19:51:51,661 - root - INFO - The train loss of epoch13-batch-250:0.13257934153079987
2018-12-18 19:52:55,784 - root - INFO - The train loss of epoch13-batch-300:0.19126862287521362
2018-12-18 19:53:59,814 - root - INFO - The train loss of epoch13-batch-350:0.18346337974071503
2018-12-18 19:54:25,408 - root - INFO - Epoch:13, train PA1:0.9408030625406253, MPA1:0.7534308444373934, MIoU1:0.6569014228541, FWIoU1:0.8935508821983339
2018-12-18 19:54:25,480 - root - INFO - validating one epoch...
2018-12-18 19:54:57,620 - root - INFO - Epoch:13, validation PA1:0.9410475061759789, MPA1:0.6902055241402256, MIoU1:0.5733349945215238, FWIoU1:0.895890132968813
2018-12-18 19:54:57,621 - root - INFO - The average loss of val loss:0.17839479595422744
2018-12-18 19:54:57,645 - root - INFO - => The MIoU of val does't improve.
2018-12-18 19:54:57,649 - root - INFO - Training one epoch...
2018-12-18 19:55:01,569 - root - INFO - The train loss of epoch14-batch-0:0.14520910382270813
2018-12-18 19:56:06,745 - root - INFO - The train loss of epoch14-batch-50:0.17771779000759125
2018-12-18 19:57:12,153 - root - INFO - The train loss of epoch14-batch-100:0.10293503850698471
2018-12-18 19:58:17,324 - root - INFO - The train loss of epoch14-batch-150:0.15083211660385132
2018-12-18 19:59:22,805 - root - INFO - The train loss of epoch14-batch-200:0.31691259145736694
2018-12-18 20:00:27,469 - root - INFO - The train loss of epoch14-batch-250:0.211755633354187
2018-12-18 20:01:31,736 - root - INFO - The train loss of epoch14-batch-300:0.15638764202594757
2018-12-18 20:02:36,241 - root - INFO - The train loss of epoch14-batch-350:0.34345725178718567
2018-12-18 20:03:01,823 - root - INFO - Epoch:14, train PA1:0.9421673776131174, MPA1:0.7665790321942816, MIoU1:0.6732654702835479, FWIoU1:0.8956839989378136
2018-12-18 20:03:01,856 - root - INFO - validating one epoch...
2018-12-18 20:03:34,877 - root - INFO - Epoch:14, validation PA1:0.9438643677498854, MPA1:0.7245745933300014, MIoU1:0.6179088125626023, FWIoU1:0.8999981810598052
2018-12-18 20:03:34,877 - root - INFO - The average loss of val loss:0.1702232576906681
2018-12-18 20:03:34,890 - root - INFO - =>saving a new best checkpoint...
2018-12-18 20:03:37,115 - root - INFO - Training one epoch...
2018-12-18 20:03:41,458 - root - INFO - The train loss of epoch15-batch-0:0.2614882290363312
2018-12-18 20:04:46,848 - root - INFO - The train loss of epoch15-batch-50:0.19268763065338135
2018-12-18 20:05:51,563 - root - INFO - The train loss of epoch15-batch-100:0.11704146862030029
2018-12-18 20:06:56,367 - root - INFO - The train loss of epoch15-batch-150:0.15954241156578064
2018-12-18 20:08:01,043 - root - INFO - The train loss of epoch15-batch-200:0.1929319202899933
2018-12-18 20:09:05,564 - root - INFO - The train loss of epoch15-batch-250:0.1764129102230072
2018-12-18 20:10:10,112 - root - INFO - The train loss of epoch15-batch-300:0.15164799988269806
2018-12-18 20:11:14,497 - root - INFO - The train loss of epoch15-batch-350:0.17461596429347992
2018-12-18 20:11:40,117 - root - INFO - Epoch:15, train PA1:0.9423094653538722, MPA1:0.7748046725700417, MIoU1:0.6844258857459691, FWIoU1:0.8957843089817441
2018-12-18 20:11:40,134 - root - INFO - validating one epoch...
2018-12-18 20:12:13,509 - root - INFO - Epoch:15, validation PA1:0.9448086283634548, MPA1:0.7163884695231272, MIoU1:0.6182856583687366, FWIoU1:0.9014891537426115
2018-12-18 20:12:13,510 - root - INFO - The average loss of val loss:0.16849819868803023
2018-12-18 20:12:13,534 - root - INFO - =>saving a new best checkpoint...
2018-12-18 20:12:15,930 - root - INFO - Training one epoch...
2018-12-18 20:12:20,162 - root - INFO - The train loss of epoch16-batch-0:0.15957914292812347
2018-12-18 20:13:25,212 - root - INFO - The train loss of epoch16-batch-50:0.14450252056121826
2018-12-18 20:14:30,107 - root - INFO - The train loss of epoch16-batch-100:0.1636045277118683
2018-12-18 20:15:34,687 - root - INFO - The train loss of epoch16-batch-150:0.12047276645898819
2018-12-18 20:16:39,595 - root - INFO - The train loss of epoch16-batch-200:0.14625205099582672
2018-12-18 20:17:44,146 - root - INFO - The train loss of epoch16-batch-250:0.1574355810880661
2018-12-18 20:18:49,110 - root - INFO - The train loss of epoch16-batch-300:0.13835546374320984
2018-12-18 20:19:53,505 - root - INFO - The train loss of epoch16-batch-350:0.14078392088413239
2018-12-18 20:20:19,289 - root - INFO - Epoch:16, train PA1:0.9429132224532897, MPA1:0.7744933239507005, MIoU1:0.6836995186969469, FWIoU1:0.8971460449163112
2018-12-18 20:20:19,342 - root - INFO - validating one epoch...
2018-12-18 20:20:52,816 - root - INFO - Epoch:16, validation PA1:0.9295922933128898, MPA1:0.6912525301787843, MIoU1:0.5733509616718301, FWIoU1:0.8790833167585408
2018-12-18 20:20:52,817 - root - INFO - The average loss of val loss:0.2130578012516101
2018-12-18 20:20:52,836 - root - INFO - => The MIoU of val does't improve.
2018-12-18 20:20:52,848 - root - INFO - Training one epoch...
2018-12-18 20:20:57,145 - root - INFO - The train loss of epoch17-batch-0:0.25244587659835815
2018-12-18 20:22:01,698 - root - INFO - The train loss of epoch17-batch-50:0.2321009337902069
2018-12-18 20:23:06,148 - root - INFO - The train loss of epoch17-batch-100:0.17145299911499023
2018-12-18 20:24:10,658 - root - INFO - The train loss of epoch17-batch-150:0.17164234817028046
2018-12-18 20:25:15,444 - root - INFO - The train loss of epoch17-batch-200:0.11563270539045334
2018-12-18 20:26:20,624 - root - INFO - The train loss of epoch17-batch-250:0.16745223104953766
2018-12-18 20:27:25,568 - root - INFO - The train loss of epoch17-batch-300:0.14713479578495026
2018-12-18 20:28:30,279 - root - INFO - The train loss of epoch17-batch-350:0.14539752900600433
2018-12-18 20:28:56,174 - root - INFO - Epoch:17, train PA1:0.9433008172122366, MPA1:0.7864851781717979, MIoU1:0.6956367459237418, FWIoU1:0.8976375169991887
2018-12-18 20:28:56,207 - root - INFO - validating one epoch...
2018-12-18 20:29:29,705 - root - INFO - Epoch:17, validation PA1:0.9334660191662404, MPA1:0.7011627852378183, MIoU1:0.5877653010026989, FWIoU1:0.8892409348064026
2018-12-18 20:29:29,705 - root - INFO - The average loss of val loss:0.19969254322350025
2018-12-18 20:29:29,717 - root - INFO - => The MIoU of val does't improve.
2018-12-18 20:29:29,733 - root - INFO - Training one epoch...
2018-12-18 20:29:34,023 - root - INFO - The train loss of epoch18-batch-0:0.21643395721912384
2018-12-18 20:30:39,285 - root - INFO - The train loss of epoch18-batch-50:0.17635034024715424
2018-12-18 20:31:44,300 - root - INFO - The train loss of epoch18-batch-100:0.16672705113887787
2018-12-18 20:32:48,839 - root - INFO - The train loss of epoch18-batch-150:0.17838694155216217
2018-12-18 20:33:53,741 - root - INFO - The train loss of epoch18-batch-200:0.14791592955589294
2018-12-18 20:34:58,281 - root - INFO - The train loss of epoch18-batch-250:0.16366077959537506
2018-12-18 20:36:03,209 - root - INFO - The train loss of epoch18-batch-300:0.13870352506637573
2018-12-18 20:37:07,375 - root - INFO - The train loss of epoch18-batch-350:0.1058892011642456
2018-12-18 20:37:33,125 - root - INFO - Epoch:18, train PA1:0.944386398676223, MPA1:0.7853385337451082, MIoU1:0.6958008886736619, FWIoU1:0.8993591447991004
2018-12-18 20:37:33,144 - root - INFO - validating one epoch...
2018-12-18 20:38:06,041 - root - INFO - Epoch:18, validation PA1:0.9459221030784994, MPA1:0.7306452035997707, MIoU1:0.6359275890202426, FWIoU1:0.9038977057056491
2018-12-18 20:38:06,042 - root - INFO - The average loss of val loss:0.16441830880939962
2018-12-18 20:38:06,054 - root - INFO - =>saving a new best checkpoint...
2018-12-18 20:38:08,460 - root - INFO - Training one epoch...
2018-12-18 20:38:12,488 - root - INFO - The train loss of epoch19-batch-0:0.16841423511505127
2018-12-18 20:39:17,494 - root - INFO - The train loss of epoch19-batch-50:0.10104074329137802
2018-12-18 20:40:22,222 - root - INFO - The train loss of epoch19-batch-100:0.1456380933523178
2018-12-18 20:41:27,193 - root - INFO - The train loss of epoch19-batch-150:0.255957692861557
2018-12-18 20:42:32,245 - root - INFO - The train loss of epoch19-batch-200:0.13288024067878723
2018-12-18 20:43:37,357 - root - INFO - The train loss of epoch19-batch-250:0.1746065765619278
2018-12-18 20:44:42,692 - root - INFO - The train loss of epoch19-batch-300:0.17048141360282898
2018-12-18 20:45:47,906 - root - INFO - The train loss of epoch19-batch-350:0.1366855502128601
2018-12-18 20:46:13,372 - root - INFO - Epoch:19, train PA1:0.9455643983468177, MPA1:0.7929625502534144, MIoU1:0.7052089507270093, FWIoU1:0.9014439666063855
2018-12-18 20:46:13,428 - root - INFO - validating one epoch...
2018-12-18 20:46:46,442 - root - INFO - Epoch:19, validation PA1:0.944115062030996, MPA1:0.7191019458020097, MIoU1:0.6228229245672235, FWIoU1:0.9013599757088065
2018-12-18 20:46:46,442 - root - INFO - The average loss of val loss:0.1692914852251609
2018-12-18 20:46:46,457 - root - INFO - => The MIoU of val does't improve.
2018-12-18 20:46:46,473 - root - INFO - Training one epoch...
2018-12-18 20:46:50,993 - root - INFO - The train loss of epoch20-batch-0:0.08681883662939072
2018-12-18 20:47:56,574 - root - INFO - The train loss of epoch20-batch-50:0.189102441072464
2018-12-18 20:49:00,703 - root - INFO - The train loss of epoch20-batch-100:0.1571447253227234
2018-12-18 20:50:05,596 - root - INFO - The train loss of epoch20-batch-150:0.2017572522163391
2018-12-18 20:51:10,737 - root - INFO - The train loss of epoch20-batch-200:0.21423286199569702
2018-12-18 20:52:15,822 - root - INFO - The train loss of epoch20-batch-250:0.10780389606952667
2018-12-18 20:53:20,763 - root - INFO - The train loss of epoch20-batch-300:0.14594118297100067
2018-12-18 20:54:25,192 - root - INFO - The train loss of epoch20-batch-350:0.17143136262893677
2018-12-18 20:54:50,716 - root - INFO - Epoch:20, train PA1:0.9437602573699422, MPA1:0.7867101409831431, MIoU1:0.6966610643326316, FWIoU1:0.898266648860444
2018-12-18 20:54:50,776 - root - INFO - validating one epoch...
2018-12-18 20:55:24,057 - root - INFO - Epoch:20, validation PA1:0.9467881510700439, MPA1:0.7398787412722854, MIoU1:0.6373765273179689, FWIoU1:0.9052318694372462
2018-12-18 20:55:24,058 - root - INFO - The average loss of val loss:0.1622872749964396
2018-12-18 20:55:24,074 - root - INFO - =>saving a new best checkpoint...
2018-12-18 20:55:26,447 - root - INFO - Training one epoch...
2018-12-18 20:55:30,747 - root - INFO - The train loss of epoch21-batch-0:0.14226184785366058
2018-12-18 20:56:35,862 - root - INFO - The train loss of epoch21-batch-50:0.1911349892616272
2018-12-18 20:57:40,880 - root - INFO - The train loss of epoch21-batch-100:0.11470108479261398
2018-12-18 20:58:45,638 - root - INFO - The train loss of epoch21-batch-150:0.14694887399673462
2018-12-18 20:59:50,452 - root - INFO - The train loss of epoch21-batch-200:0.1571952849626541
2018-12-18 21:00:55,069 - root - INFO - The train loss of epoch21-batch-250:0.3492722511291504
2018-12-18 21:02:00,154 - root - INFO - The train loss of epoch21-batch-300:0.20744803547859192
2018-12-18 21:03:04,568 - root - INFO - The train loss of epoch21-batch-350:0.1565888673067093
2018-12-18 21:03:30,128 - root - INFO - Epoch:21, train PA1:0.944290905246968, MPA1:0.7829938832610014, MIoU1:0.6907463336124753, FWIoU1:0.8992335036672203
2018-12-18 21:03:30,155 - root - INFO - validating one epoch...
2018-12-18 21:04:04,128 - root - INFO - Epoch:21, validation PA1:0.9459752099662178, MPA1:0.7054544204065417, MIoU1:0.6111470037443739, FWIoU1:0.9040496847391244
2018-12-18 21:04:04,129 - root - INFO - The average loss of val loss:0.16604854712883632
2018-12-18 21:04:04,145 - root - INFO - => The MIoU of val does't improve.
2018-12-18 21:04:04,147 - root - INFO - Training one epoch...
2018-12-18 21:04:08,213 - root - INFO - The train loss of epoch22-batch-0:0.13158424198627472
2018-12-18 21:05:13,398 - root - INFO - The train loss of epoch22-batch-50:0.16452105343341827
2018-12-18 21:06:18,316 - root - INFO - The train loss of epoch22-batch-100:0.14206638932228088
2018-12-18 21:07:22,939 - root - INFO - The train loss of epoch22-batch-150:0.13465645909309387
2018-12-18 21:08:28,024 - root - INFO - The train loss of epoch22-batch-200:0.1662217527627945
2018-12-18 21:09:33,128 - root - INFO - The train loss of epoch22-batch-250:0.14530473947525024
2018-12-18 21:10:37,709 - root - INFO - The train loss of epoch22-batch-300:0.15261170268058777
2018-12-18 21:11:41,577 - root - INFO - The train loss of epoch22-batch-350:0.17116686701774597
2018-12-18 21:12:07,192 - root - INFO - Epoch:22, train PA1:0.947835264814818, MPA1:0.7954685282794163, MIoU1:0.7061149143300384, FWIoU1:0.9052230812253733
2018-12-18 21:12:07,279 - root - INFO - validating one epoch...
2018-12-18 21:12:40,345 - root - INFO - Epoch:22, validation PA1:0.9449573476174735, MPA1:0.7275681466866422, MIoU1:0.6292063697750025, FWIoU1:0.9021256617951056
2018-12-18 21:12:40,354 - root - INFO - The average loss of val loss:0.16765810772776604
2018-12-18 21:12:40,366 - root - INFO - => The MIoU of val does't improve.
2018-12-18 21:12:40,369 - root - INFO - Training one epoch...
2018-12-18 21:12:44,352 - root - INFO - The train loss of epoch23-batch-0:0.1760014146566391
2018-12-18 21:13:49,970 - root - INFO - The train loss of epoch23-batch-50:0.143947571516037
2018-12-18 21:14:54,856 - root - INFO - The train loss of epoch23-batch-100:0.14431309700012207
2018-12-18 21:15:58,950 - root - INFO - The train loss of epoch23-batch-150:0.14898404479026794
2018-12-18 21:17:03,981 - root - INFO - The train loss of epoch23-batch-200:0.13141568005084991
2018-12-18 21:18:08,624 - root - INFO - The train loss of epoch23-batch-250:0.1928367167711258
2018-12-18 21:19:13,691 - root - INFO - The train loss of epoch23-batch-300:0.1739724576473236
2018-12-18 21:20:18,267 - root - INFO - The train loss of epoch23-batch-350:0.16008594632148743
2018-12-18 21:20:44,072 - root - INFO - Epoch:23, train PA1:0.94781723884772, MPA1:0.8034268068990164, MIoU1:0.7155526445862063, FWIoU1:0.9050301532738889
2018-12-18 21:20:44,108 - root - INFO - validating one epoch...
2018-12-18 21:21:17,029 - root - INFO - Epoch:23, validation PA1:0.9461742042246925, MPA1:0.7285891626063302, MIoU1:0.6337508968323026, FWIoU1:0.9044515305842916
2018-12-18 21:21:17,030 - root - INFO - The average loss of val loss:0.16250515629847845
2018-12-18 21:21:17,061 - root - INFO - => The MIoU of val does't improve.
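For readers comparing these numbers: the PA1/MPA1/MIoU1/FWIoU1 columns in the log are presumably the four standard confusion-matrix metrics for semantic segmentation. A minimal sketch of those definitions (my own reconstruction, not this repo's `Eval` code) is:

```python
import numpy as np

def segmentation_metrics(conf):
    """Compute PA, MPA, MIoU, FWIoU from a num_classes x num_classes
    confusion matrix (rows = ground truth, cols = prediction)."""
    conf = conf.astype(np.float64)
    tp = np.diag(conf)
    gt = conf.sum(axis=1)           # pixels per ground-truth class
    pred = conf.sum(axis=0)         # pixels per predicted class
    union = gt + pred - tp

    pa = tp.sum() / conf.sum()                      # pixel accuracy
    mpa = np.nanmean(tp / gt)                       # mean per-class accuracy
    iou = tp / union
    miou = np.nanmean(iou)                          # mean IoU
    freq = gt / conf.sum()
    fwiou = (freq[freq > 0] * iou[freq > 0]).sum()  # frequency-weighted IoU
    return pa, mpa, miou, fwiou

# Toy 2-class example
conf = np.array([[8, 2],
                 [1, 9]])
pa, mpa, miou, fwiou = segmentation_metrics(conf)
```

Note that with num_classes set to 20 for Cityscapes (19 evaluated classes plus one extra), whether the extra/ignore class is included in the mean can noticeably shift MPA and MIoU relative to the official benchmark numbers.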
.......
2018-12-19 04:21:55,198 - root - INFO - Training one epoch...
2018-12-19 04:21:59,250 - root - INFO - The train loss of epoch73-batch-0:0.10735165327787399
2018-12-19 04:23:03,634 - root - INFO - The train loss of epoch73-batch-50:0.11953388154506683
2018-12-19 04:24:08,034 - root - INFO - The train loss of epoch73-batch-100:0.1404481679201126
2018-12-19 04:25:11,733 - root - INFO - The train loss of epoch73-batch-150:0.10422152280807495
2018-12-19 04:26:15,410 - root - INFO - The train loss of epoch73-batch-200:0.10750967264175415
2018-12-19 04:27:19,665 - root - INFO - The train loss of epoch73-batch-250:0.10679195821285248
2018-12-19 04:28:22,838 - root - INFO - The train loss of epoch73-batch-300:0.1266743540763855
2018-12-19 04:29:26,887 - root - INFO - The train loss of epoch73-batch-350:0.09310479462146759
2018-12-19 04:29:52,970 - root - INFO - Epoch:73, train PA1:0.9604127453852558, MPA1:0.8597448172736882, MIoU1:0.7860503715228457, FWIoU1:0.9266337248024643
2018-12-19 04:29:52,984 - root - INFO - validating one epoch...
2018-12-19 04:30:25,138 - root - INFO - Epoch:73, validation PA1:0.9511571025089006, MPA1:0.7665008734773413, MIoU1:0.662022163261123, FWIoU1:0.9136233963576832
2018-12-19 04:30:25,139 - root - INFO - The average loss of val loss:0.1472104558100303
2018-12-19 04:30:25,149 - root - INFO - => The MIoU of val does't improve.
2018-12-19 04:30:25,151 - root - INFO - Training one epoch...
2018-12-19 04:30:28,762 - root - INFO - The train loss of epoch74-batch-0:0.12557663023471832
2018-12-19 04:31:33,067 - root - INFO - The train loss of epoch74-batch-50:0.11147694289684296
2018-12-19 04:32:36,950 - root - INFO - The train loss of epoch74-batch-100:0.12636703252792358
2018-12-19 04:33:41,264 - root - INFO - The train loss of epoch74-batch-150:0.12676572799682617
2018-12-19 04:34:45,241 - root - INFO - The train loss of epoch74-batch-200:0.11778289079666138
2018-12-19 04:35:49,622 - root - INFO - The train loss of epoch74-batch-250:0.09495686739683151
2018-12-19 04:36:53,827 - root - INFO - The train loss of epoch74-batch-300:0.07566681504249573
2018-12-19 04:37:58,119 - root - INFO - The train loss of epoch74-batch-350:0.09327321499586105
2018-12-19 04:38:24,244 - root - INFO - Epoch:74, train PA1:0.9602285383021301, MPA1:0.8613960024248566, MIoU1:0.7879539663485767, FWIoU1:0.9262973622830788
2018-12-19 04:38:24,257 - root - INFO - validating one epoch...
2018-12-19 04:38:56,127 - root - INFO - Epoch:74, validation PA1:0.955175245332083, MPA1:0.7817220007038588, MIoU1:0.6860386178922512, FWIoU1:0.9188770514238537
2018-12-19 04:38:56,127 - root - INFO - The average loss of val loss:0.13517877807219822
2018-12-19 04:38:56,148 - root - INFO - =>saving a new best checkpoint...
2018-12-19 04:38:58,852 - root - INFO - Training one epoch...
2018-12-19 04:39:03,312 - root - INFO - The train loss of epoch75-batch-0:0.09399298578500748
2018-12-19 04:40:06,858 - root - INFO - The train loss of epoch75-batch-50:0.10341855138540268
2018-12-19 04:41:11,131 - root - INFO - The train loss of epoch75-batch-100:0.12183348834514618
2018-12-19 04:42:15,038 - root - INFO - The train loss of epoch75-batch-150:0.12554271519184113
2018-12-19 04:43:18,550 - root - INFO - The train loss of epoch75-batch-200:0.12364111095666885
2018-12-19 04:44:21,920 - root - INFO - The train loss of epoch75-batch-250:0.14163315296173096
2018-12-19 04:45:25,960 - root - INFO - The train loss of epoch75-batch-300:0.10890746116638184
2018-12-19 04:46:29,641 - root - INFO - The train loss of epoch75-batch-350:0.13599912822246552
2018-12-19 04:46:55,696 - root - INFO - Epoch:75, train PA1:0.9605498157059398, MPA1:0.8581125234437309, MIoU1:0.7833049103705904, FWIoU1:0.9268799863182888
2018-12-19 04:46:55,723 - root - INFO - validating one epoch...
2018-12-19 04:47:27,494 - root - INFO - Epoch:75, validation PA1:0.9543065197586077, MPA1:0.774361983627906, MIoU1:0.6767866515912244, FWIoU1:0.917419925408064
2018-12-19 04:47:27,495 - root - INFO - The average loss of val loss:0.13999862497051557
2018-12-19 04:47:27,503 - root - INFO - => The MIoU of val does't improve.
2018-12-19 04:47:27,505 - root - INFO - Training one epoch...
2018-12-19 04:47:31,545 - root - INFO - The train loss of epoch76-batch-0:0.12370521575212479
2018-12-19 04:48:35,324 - root - INFO - The train loss of epoch76-batch-50:0.12831249833106995
2018-12-19 04:49:38,523 - root - INFO - The train loss of epoch76-batch-100:0.09703141450881958
2018-12-19 04:50:42,082 - root - INFO - The train loss of epoch76-batch-150:0.0929039865732193
2018-12-19 04:51:45,815 - root - INFO - The train loss of epoch76-batch-200:0.13521058857440948
2018-12-19 04:52:49,791 - root - INFO - The train loss of epoch76-batch-250:0.0935131385922432
2018-12-19 04:53:53,752 - root - INFO - The train loss of epoch76-batch-300:0.11416308581829071
2018-12-19 04:54:57,351 - root - INFO - The train loss of epoch76-batch-350:0.11640869826078415
2018-12-19 04:55:23,241 - root - INFO - Epoch:76, train PA1:0.9602588039409209, MPA1:0.858578708108623, MIoU1:0.7844891325780283, FWIoU1:0.9263866111633812
2018-12-19 04:55:23,255 - root - INFO - validating one epoch...
2018-12-19 04:55:53,880 - root - INFO - Epoch:76, validation PA1:0.9550378626876626, MPA1:0.7742656015013969, MIoU1:0.6797237302434807, FWIoU1:0.9187742723689594
2018-12-19 04:55:53,880 - root - INFO - The average loss of val loss:0.13644070190687974
2018-12-19 04:55:53,888 - root - INFO - => The MIoU of val does't improve.
2018-12-19 04:55:53,889 - root - INFO - Training one epoch...
2018-12-19 04:55:57,907 - root - INFO - The train loss of epoch77-batch-0:0.11632166802883148
2018-12-19 04:57:01,961 - root - INFO - The train loss of epoch77-batch-50:0.11097978055477142
2018-12-19 04:58:05,475 - root - INFO - The train loss of epoch77-batch-100:0.1607583463191986
2018-12-19 04:59:08,968 - root - INFO - The train loss of epoch77-batch-150:0.0844198539853096
2018-12-19 05:00:12,416 - root - INFO - The train loss of epoch77-batch-200:0.09921317547559738
2018-12-19 05:01:16,099 - root - INFO - The train loss of epoch77-batch-250:0.13838213682174683
2018-12-19 05:02:19,641 - root - INFO - The train loss of epoch77-batch-300:0.11305578798055649
2018-12-19 05:03:22,876 - root - INFO - The train loss of epoch77-batch-350:0.10044190287590027
2018-12-19 05:03:48,843 - root - INFO - Epoch:77, train PA1:0.9600949634156087, MPA1:0.8589910099443685, MIoU1:0.7840285608998228, FWIoU1:0.926038256986495
2018-12-19 05:03:48,854 - root - INFO - validating one epoch...
2018-12-19 05:04:20,252 - root - INFO - Epoch:77, validation PA1:0.9547711664508097, MPA1:0.7725399808813717, MIoU1:0.6792156956679645, FWIoU1:0.9182365109156605
2018-12-19 05:04:20,252 - root - INFO - The average loss of val loss:0.13845280706882476
2018-12-19 05:04:20,260 - root - INFO - => The MIoU of val does't improve.
2018-12-19 05:04:20,262 - root - INFO - Training one epoch...
2018-12-19 05:04:24,206 - root - INFO - The train loss of epoch78-batch-0:0.10462100803852081
2018-12-19 05:05:27,653 - root - INFO - The train loss of epoch78-batch-50:0.10035853832960129
2018-12-19 05:06:31,240 - root - INFO - The train loss of epoch78-batch-100:0.18023456633090973
2018-12-19 05:07:34,808 - root - INFO - The train loss of epoch78-batch-150:0.11456046253442764
2018-12-19 05:08:38,604 - root - INFO - The train loss of epoch78-batch-200:0.11571931838989258
2018-12-19 05:09:42,585 - root - INFO - The train loss of epoch78-batch-250:0.11687614768743515
2018-12-19 05:10:45,895 - root - INFO - The train loss of epoch78-batch-300:0.11354868113994598
2018-12-19 05:11:49,032 - root - INFO - The train loss of epoch78-batch-350:0.10840679705142975
2018-12-19 05:12:14,250 - root - INFO - Epoch:78, train PA1:0.9589406703268886, MPA1:0.8588771884626292, MIoU1:0.7832676803367603, FWIoU1:0.9241155572369407
2018-12-19 05:12:14,260 - root - INFO - validating one epoch...
2018-12-19 05:12:46,512 - root - INFO - Epoch:78, validation PA1:0.9547140114222741, MPA1:0.7700587393519713, MIoU1:0.6792569386885028, FWIoU1:0.9181279040398898
2018-12-19 05:12:46,513 - root - INFO - The average loss of val loss:0.1369009766727686
2018-12-19 05:12:46,531 - root - INFO - => The MIoU of val does't improve.
2018-12-19 05:12:46,548 - root - INFO - Training one epoch...
2018-12-19 05:12:50,384 - root - INFO - The train loss of epoch79-batch-0:0.13497747480869293
2018-12-19 05:13:54,500 - root - INFO - The train loss of epoch79-batch-50:0.09171408414840698
2018-12-19 05:14:58,246 - root - INFO - The train loss of epoch79-batch-100:0.13470368087291718
2018-12-19 05:16:02,450 - root - INFO - The train loss of epoch79-batch-150:0.09711170196533203
2018-12-19 05:17:06,922 - root - INFO - The train loss of epoch79-batch-200:0.11977243423461914
2018-12-19 05:18:10,854 - root - INFO - The train loss of epoch79-batch-250:0.08900494873523712
2018-12-19 05:19:14,478 - root - INFO - The train loss of epoch79-batch-300:0.1047312319278717
2018-12-19 05:20:18,408 - root - INFO - The train loss of epoch79-batch-350:0.12512324750423431
2018-12-19 05:20:44,008 - root - INFO - Epoch:79, train PA1:0.9600716423509743, MPA1:0.8571271758624979, MIoU1:0.7826783696081059, FWIoU1:0.9260328387598817
2018-12-19 05:20:44,034 - root - INFO - validating one epoch...
2018-12-19 05:21:17,415 - root - INFO - Epoch:79, validation PA1:0.9549441473200858, MPA1:0.7753876229985835, MIoU1:0.6822173651763185, FWIoU1:0.918590013724752
2018-12-19 05:21:17,417 - root - INFO - The average loss of val loss:0.13631731321414312
2018-12-19 05:21:17,452 - root - INFO - => The MIoU of val does't improve.
2018-12-19 05:21:17,455 - root - INFO - Training one epoch...
2018-12-19 05:21:21,885 - root - INFO - The train loss of epoch80-batch-0:0.07470103353261948
2018-12-19 05:22:25,891 - root - INFO - The train loss of epoch80-batch-50:0.11494012922048569
2018-12-19 05:23:29,577 - root - INFO - The train loss of epoch80-batch-100:0.10376144200563431
2018-12-19 05:24:33,137 - root - INFO - The train loss of epoch80-batch-150:0.1282859444618225
2018-12-19 05:25:37,014 - root - INFO - The train loss of epoch80-batch-200:0.09240885823965073
2018-12-19 05:26:40,719 - root - INFO - The train loss of epoch80-batch-250:0.14555643498897552
2018-12-19 05:27:44,130 - root - INFO - The train loss of epoch80-batch-300:0.07796740531921387
2018-12-19 05:28:47,077 - root - INFO - The train loss of epoch80-batch-350:0.1338871866464615
2018-12-19 05:29:12,675 - root - INFO - Epoch:80, train PA1:0.960404015835341, MPA1:0.8597161731054243, MIoU1:0.7849944881184294, FWIoU1:0.926682433172048
2018-12-19 05:29:12,686 - root - INFO - validating one epoch...
2018-12-19 05:29:44,044 - root - INFO - Epoch:80, validation PA1:0.9546706799777404, MPA1:0.7769070199055423, MIoU1:0.6781781201728705, FWIoU1:0.9184429904893292
2018-12-19 05:29:44,045 - root - INFO - The average loss of val loss:0.13656764763096968
2018-12-19 05:29:44,068 - root - INFO - => The MIoU of val does't improve.
2018-12-19 05:29:44,070 - root - INFO - Training one epoch...
2018-12-19 05:29:48,284 - root - INFO - The train loss of epoch81-batch-0:0.12056392431259155
2018-12-19 05:30:52,081 - root - INFO - The train loss of epoch81-batch-50:0.09953363239765167
2018-12-19 05:31:55,741 - root - INFO - The train loss of epoch81-batch-100:0.11027940362691879
2018-12-19 05:32:59,537 - root - INFO - The train loss of epoch81-batch-150:0.13119599223136902
2018-12-19 05:34:03,320 - root - INFO - The train loss of epoch81-batch-200:0.09684700518846512
2018-12-19 05:35:07,159 - root - INFO - The train loss of epoch81-batch-250:0.08915916830301285
2018-12-19 05:36:11,533 - root - INFO - The train loss of epoch81-batch-300:0.1406916081905365
2018-12-19 05:37:15,236 - root - INFO - The train loss of epoch81-batch-350:0.07652170956134796
2018-12-19 05:37:41,163 - root - INFO - Epoch:81, train PA1:0.9604993772257508, MPA1:0.8635728090035956, MIoU1:0.7902056582414273, FWIoU1:0.9267893771543456
2018-12-19 05:37:41,178 - root - INFO - validating one epoch...
2018-12-19 05:38:12,557 - root - INFO - Epoch:81, validation PA1:0.9542873410107443, MPA1:0.7734535640847792, MIoU1:0.6763371567909079, FWIoU1:0.917603532319671
2018-12-19 05:38:12,558 - root - INFO - The average loss of val loss:0.13781540803611278
2018-12-19 05:38:12,564 - root - INFO - => The MIoU of val does't improve.
2018-12-19 05:38:12,565 - root - INFO - Training one epoch...
2018-12-19 05:38:17,110 - root - INFO - The train loss of epoch82-batch-0:0.11019560694694519
2018-12-19 05:39:20,536 - root - INFO - The train loss of epoch82-batch-50:0.11159098893404007
2018-12-19 05:40:24,333 - root - INFO - The train loss of epoch82-batch-100:0.09971534460783005
2018-12-19 05:41:27,983 - root - INFO - The train loss of epoch82-batch-150:0.09462369233369827
2018-12-19 05:42:31,771 - root - INFO - The train loss of epoch82-batch-200:0.07996267825365067
2018-12-19 05:43:35,663 - root - INFO - The train loss of epoch82-batch-250:0.10794143378734589
2018-12-19 05:44:39,534 - root - INFO - The train loss of epoch82-batch-300:0.12561067938804626
2018-12-19 05:45:43,168 - root - INFO - The train loss of epoch82-batch-350:0.12284119427204132
2018-12-19 05:46:09,243 - root - INFO - Epoch:82, train PA1:0.9612795668163888, MPA1:0.8632029919315117, MIoU1:0.7908988500767133, FWIoU1:0.9281515930399309
2018-12-19 05:46:09,255 - root - INFO - validating one epoch...
2018-12-19 05:46:40,713 - root - INFO - Epoch:82, validation PA1:0.9546372873542716, MPA1:0.7801098740277815, MIoU1:0.6813068155065404, FWIoU1:0.9182417488142534
2018-12-19 05:46:40,713 - root - INFO - The average loss of val loss:0.1370621954401334
2018-12-19 05:46:40,733 - root - INFO - => The MIoU of val does't improve.
2018-12-19 05:46:40,734 - root - INFO - Training one epoch...
2018-12-19 05:46:44,761 - root - INFO - The train loss of epoch83-batch-0:0.09788555651903152
2018-12-19 05:47:48,294 - root - INFO - The train loss of epoch83-batch-50:0.11844812333583832
2018-12-19 05:48:52,257 - root - INFO - The train loss of epoch83-batch-100:0.14496257901191711
2018-12-19 05:49:56,379 - root - INFO - The train loss of epoch83-batch-150:0.11570249497890472
2018-12-19 05:51:00,225 - root - INFO - The train loss of epoch83-batch-200:0.11229199916124344
2018-12-19 05:52:04,179 - root - INFO - The train loss of epoch83-batch-250:0.13197249174118042
2018-12-19 05:53:07,954 - root - INFO - The train loss of epoch83-batch-300:0.11583119630813599
2018-12-19 05:54:12,016 - root - INFO - The train loss of epoch83-batch-350:0.1311873495578766
2018-12-19 05:54:37,912 - root - INFO - Epoch:83, train PA1:0.9600803414400334, MPA1:0.8578660071082946, MIoU1:0.784373875337867, FWIoU1:0.9260366156744628
2018-12-19 05:54:37,925 - root - INFO - validating one epoch...
2018-12-19 05:55:10,500 - root - INFO - Epoch:83, validation PA1:0.9550856507177149, MPA1:0.7790146360373147, MIoU1:0.6823831923232895, FWIoU1:0.9189137451375091
2018-12-19 05:55:10,501 - root - INFO - The average loss of val loss:0.13505003626147907
2018-12-19 05:55:10,518 - root - INFO - => The MIoU of val does't improve.
......
2018-12-19 08:27:43,718 - root - INFO - The average loss of val loss:0.13424419648945332
2018-12-19 08:27:43,728 - root - INFO - => The MIoU of val does't improve.
2018-12-19 08:27:43,730 - root - INFO - Training one epoch...
2018-12-19 08:27:48,596 - root - INFO - The train loss of epoch102-batch-0:0.16259847581386566
2018-12-19 08:28:52,317 - root - INFO - The train loss of epoch102-batch-50:0.11844433099031448
2018-12-19 08:29:55,878 - root - INFO - The train loss of epoch102-batch-100:0.08731412142515182
2018-12-19 08:30:59,421 - root - INFO - The train loss of epoch102-batch-150:0.11711011081933975
2018-12-19 08:32:03,165 - root - INFO - The train loss of epoch102-batch-200:0.0976969376206398
2018-12-19 08:33:06,733 - root - INFO - The train loss of epoch102-batch-250:0.1077752485871315
2018-12-19 08:34:10,510 - root - INFO - The train loss of epoch102-batch-300:0.09791499376296997
2018-12-19 08:35:14,329 - root - INFO - The train loss of epoch102-batch-350:0.09872373193502426
2018-12-19 08:35:39,627 - root - INFO - Epoch:102, train PA1:0.9622347269596885, MPA1:0.8652447951618635, MIoU1:0.7937448105097558, FWIoU1:0.9298358482890512
2018-12-19 08:35:39,639 - root - INFO - validating one epoch...
2018-12-19 08:36:12,506 - root - INFO - Epoch:102, validation PA1:0.9558006649352665, MPA1:0.7837943481156863, MIoU1:0.6875621224048521, FWIoU1:0.9200791370373776
2018-12-19 08:36:12,507 - root - INFO - The average loss of val loss:0.13389485627412795
2018-12-19 08:36:12,538 - root - INFO - => The MIoU of val does't improve.
2018-12-19 08:36:12,554 - root - INFO - Training one epoch...
2018-12-19 08:36:16,816 - root - INFO - The train loss of epoch103-batch-0:0.12918095290660858
2018-12-19 08:37:20,479 - root - INFO - The train loss of epoch103-batch-50:0.1138860210776329
2018-12-19 08:38:23,994 - root - INFO - The train loss of epoch103-batch-100:0.09629619121551514
2018-12-19 08:39:27,396 - root - INFO - The train loss of epoch103-batch-150:0.09868337213993073
2018-12-19 08:40:31,371 - root - INFO - The train loss of epoch103-batch-200:0.11681150645017624
2018-12-19 08:41:35,635 - root - INFO - The train loss of epoch103-batch-250:0.11032979935407639
2018-12-19 08:42:39,819 - root - INFO - The train loss of epoch103-batch-300:0.10607818514108658
2018-12-19 08:43:43,449 - root - INFO - The train loss of epoch103-batch-350:0.09646876901388168
2018-12-19 08:44:09,579 - root - INFO - Epoch:103, train PA1:0.9623601082636585, MPA1:0.8691767306474915, MIoU1:0.7980776629097539, FWIoU1:0.9299536353610821
2018-12-19 08:44:09,590 - root - INFO - validating one epoch...
2018-12-19 08:44:41,748 - root - INFO - Epoch:103, validation PA1:0.9558786960263133, MPA1:0.7836577095652535, MIoU1:0.6895454297106763, FWIoU1:0.9201620575892111
2018-12-19 08:44:41,749 - root - INFO - The average loss of val loss:0.13376205302774907
2018-12-19 08:44:41,773 - root - INFO - =>saving a new best checkpoint...
2018-12-19 08:44:44,056 - root - INFO - Training one epoch...
2018-12-19 08:44:48,010 - root - INFO - The train loss of epoch104-batch-0:0.09845012426376343
2018-12-19 08:45:52,322 - root - INFO - The train loss of epoch104-batch-50:0.08688458800315857
2018-12-19 08:46:56,310 - root - INFO - The train loss of epoch104-batch-100:0.1985970288515091
2018-12-19 08:48:00,313 - root - INFO - The train loss of epoch104-batch-150:0.07477502524852753
2018-12-19 08:49:04,149 - root - INFO - The train loss of epoch104-batch-200:0.09265899658203125
2018-12-19 08:50:07,771 - root - INFO - The train loss of epoch104-batch-250:0.10612960904836655
2018-12-19 08:51:12,009 - root - INFO - The train loss of epoch104-batch-300:0.09568526595830917
2018-12-19 08:52:16,450 - root - INFO - The train loss of epoch104-batch-350:0.11180691421031952
2018-12-19 08:52:42,739 - root - INFO - Epoch:104, train PA1:0.9624056338143443, MPA1:0.8700762704721066, MIoU1:0.7988993423020223, FWIoU1:0.9301021481662561
2018-12-19 08:52:42,753 - root - INFO - validating one epoch...
2018-12-19 08:53:14,601 - root - INFO - Epoch:104, validation PA1:0.9559877961443056, MPA1:0.7835694655277888, MIoU1:0.6888431173930902, FWIoU1:0.9203307511045015
2018-12-19 08:53:14,602 - root - INFO - The average loss of val loss:0.13367216947178046
2018-12-19 08:53:14,638 - root - INFO - => The MIoU of val does't improve.
2018-12-19 08:53:14,639 - root - INFO - Training one epoch...
2018-12-19 08:53:19,732 - root - INFO - The train loss of epoch105-batch-0:0.1273394227027893
2018-12-19 08:54:23,566 - root - INFO - The train loss of epoch105-batch-50:0.0918409675359726
2018-12-19 08:55:27,746 - root - INFO - The train loss of epoch105-batch-100:0.08912001550197601
2018-12-19 08:56:31,249 - root - INFO - The train loss of epoch105-batch-150:0.09642244130373001
2018-12-19 08:57:35,236 - root - INFO - The train loss of epoch105-batch-200:0.13284264504909515
2018-12-19 08:58:38,720 - root - INFO - The train loss of epoch105-batch-250:0.11148284375667572
2018-12-19 08:59:42,527 - root - INFO - The train loss of epoch105-batch-300:0.10422176122665405
2018-12-19 09:00:46,650 - root - INFO - The train loss of epoch105-batch-350:0.12179043889045715
2018-12-19 09:01:12,491 - root - INFO - Epoch:105, train PA1:0.962982038166963, MPA1:0.8668956286291922, MIoU1:0.7957721629089416, FWIoU1:0.9311322002779817
2018-12-19 09:01:12,502 - root - INFO - validating one epoch...
2018-12-19 09:01:44,023 - root - INFO - Epoch:105, validation PA1:0.955874293900088, MPA1:0.7806812153330136, MIoU1:0.6873329009502055, FWIoU1:0.9201099681308024
2018-12-19 09:01:44,024 - root - INFO - The average loss of val loss:0.13375133474667866
2018-12-19 09:01:44,036 - root - INFO - => The MIoU of val does't improve.
2018-12-19 09:01:44,049 - root - INFO - Training one epoch...
2018-12-19 09:01:48,614 - root - INFO - The train loss of epoch106-batch-0:0.10505063086748123
2018-12-19 09:02:52,984 - root - INFO - The train loss of epoch106-batch-50:0.11590459197759628
2018-12-19 09:03:57,030 - root - INFO - The train loss of epoch106-batch-100:0.14203916490077972
2018-12-19 09:05:00,940 - root - INFO - The train loss of epoch106-batch-150:0.1017666608095169
2018-12-19 09:06:04,729 - root - INFO - The train loss of epoch106-batch-200:0.14008231461048126
2018-12-19 09:07:08,806 - root - INFO - The train loss of epoch106-batch-250:0.10215319693088531
2018-12-19 09:08:12,291 - root - INFO - The train loss of epoch106-batch-300:0.10551431030035019
2018-12-19 09:09:15,585 - root - INFO - The train loss of epoch106-batch-350:0.07625865936279297
2018-12-19 09:09:41,110 - root - INFO - Epoch:106, train PA1:0.9623827261882071, MPA1:0.8693790235098815, MIoU1:0.7995492384414924, FWIoU1:0.9300167564887022
2018-12-19 09:09:41,124 - root - INFO - validating one epoch...
2018-12-19 09:10:13,319 - root - INFO - Epoch:106, validation PA1:0.9558681036937261, MPA1:0.7817664467309805, MIoU1:0.6875891131510206, FWIoU1:0.9201252156414733
2018-12-19 09:10:13,320 - root - INFO - The average loss of val loss:0.1337807466586431
2018-12-19 09:10:13,329 - root - INFO - => The MIoU of val does't improve.
2018-12-19 09:10:13,331 - root - INFO - Training one epoch...
2018-12-19 09:10:17,378 - root - INFO - The train loss of epoch107-batch-0:0.10920165479183197
2018-12-19 09:11:20,568 - root - INFO - The train loss of epoch107-batch-50:0.11619025468826294
2018-12-19 09:12:23,893 - root - INFO - The train loss of epoch107-batch-100:0.1096523180603981
2018-12-19 09:13:26,924 - root - INFO - The train loss of epoch107-batch-150:0.11286712437868118
2018-12-19 09:14:30,214 - root - INFO - The train loss of epoch107-batch-200:0.0950569212436676
2018-12-19 09:15:33,515 - root - INFO - The train loss of epoch107-batch-250:0.08993569016456604
2018-12-19 09:16:36,876 - root - INFO - The train loss of epoch107-batch-300:0.09000883996486664
2018-12-19 09:16:39,663 - root - INFO - iteration arrive 40000!
2018-12-19 09:16:39,667 - root - INFO - Epoch:107, train PA1:0.9623345949803918, MPA1:0.8676838667949045, MIoU1:0.7958796920592547, FWIoU1:0.929970167313285
2018-12-19 09:16:39,702 - root - INFO - validating one epoch...
2018-12-19 09:17:11,020 - root - INFO - Epoch:107, validation PA1:0.9558596715797815, MPA1:0.7830658518763322, MIoU1:0.6885688670955664, FWIoU1:0.9201202301308964
2018-12-19 09:17:11,021 - root - INFO - The average loss of val loss:0.1333686277270317
2018-12-19 09:17:11,030 - root - INFO - => The MIoU of val does't improve.
2018-12-19 09:17:11,037 - root - INFO - =>saving the final checkpoint...
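For reference, the PA/MPA/MIoU/FWIoU numbers in the logs above can all be computed from a single confusion matrix. This is a minimal sketch using the standard definitions (rows = ground truth, columns = predictions); `segmentation_metrics` is a hypothetical helper name, not a function from this repo:

```python
import numpy as np

def segmentation_metrics(conf):
    """Compute PA, MPA, MIoU and FWIoU from a (num_classes, num_classes)
    confusion matrix whose rows are ground truth and columns predictions."""
    tp = np.diag(conf)
    gt_per_class = conf.sum(axis=1)       # ground-truth pixels per class
    pred_per_class = conf.sum(axis=0)     # predicted pixels per class
    union = gt_per_class + pred_per_class - tp

    pa = tp.sum() / conf.sum()            # pixel accuracy
    with np.errstate(divide="ignore", invalid="ignore"):
        per_class_acc = tp / gt_per_class # nan for classes absent from GT
        iou = tp / union
    mpa = np.nanmean(per_class_acc)       # mean per-class accuracy
    miou = np.nanmean(iou)                # mean IoU
    freq = gt_per_class / conf.sum()      # class frequency in GT
    fwiou = np.nansum(freq * iou)         # frequency-weighted IoU
    return pa, mpa, miou, fwiou
```

A gap like yours (train MIoU ~0.79 vs. val MIoU ~0.69) shows up directly in these per-class IoUs, so printing the raw `iou` vector on the val set can reveal which classes are dragging the mean down.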
- By the way, you can modify this training script, cityscapes_train.sh:
#!/bin/bash
nvidia-smi
# pytorch 0.4
PYTHON="python"
# dataset config
DATASET="cityscapes"
# training settings
BN_MOMENTUM=0.1
GPU="0,1"
LOSS_WEIGHT_FILE='cityscapes_classes_weights_log.npy'
# training settings, stage 1
LEARNING_RATE_STAGE1=0.007
FREEZE_BN_STAGE1=False
OUTPUT_STRIDE_STAGE1=16
# training settings, stage 2
LEARNING_RATE_STAGE2=0.001
FREEZE_BN_STAGE2=True
PRETRAINED_CKPT_FILE="resnet101_16_iamgenet_pre-True_ckpt_file-None_loss_weight_file-None_batch_size-8_base_size-513_crop_size-513_split-train_lr-0.007_iter_max-40000best.pth"
OUTPUT_STRIDE_STAGE2=8
IMAGENET_TRAINED_STAGE2=False
################################################################
# Training
################################################################
# Stage 1 (uncomment to run first):
# $PYTHON -u train_cityscapes.py \
# --gpu $GPU \
# --freeze_bn $FREEZE_BN_STAGE1 \
# --bn_momentum $BN_MOMENTUM \
# --lr $LEARNING_RATE_STAGE1 \
# --output_stride $OUTPUT_STRIDE_STAGE1 \
# --iter_max 40000 \
# --batch_size_per_gpu 4
# Stage 2: fine-tune from the stage-1 checkpoint at output stride 8.
$PYTHON -u train_cityscapes.py \
--gpu $GPU \
--pretrained_ckpt_file $PRETRAINED_CKPT_FILE \
--freeze_bn $FREEZE_BN_STAGE2 \
--bn_momentum $BN_MOMENTUM \
--imagenet_pretrained $IMAGENET_TRAINED_STAGE2 \
--lr $LEARNING_RATE_STAGE2 \
--output_stride $OUTPUT_STRIDE_STAGE2 \
--iter_max 20000 \
--batch_size_per_gpu 4
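The `LOSS_WEIGHT_FILE` above (`cityscapes_classes_weights_log.npy`) suggests log-based class weighting. I can't confirm the exact formula this repo uses, but a common choice (the ENet-style weighting, w = 1 / ln(c + p) with class frequency p and a small constant c) can be sketched like this; `compute_log_class_weights` and the `c=1.02` default are assumptions, not taken from this codebase:

```python
import numpy as np

def compute_log_class_weights(class_pixel_counts, c=1.02):
    """ENet-style log weights: w = 1 / ln(c + p), where p is a class's
    pixel frequency over the training set. Rare classes get larger weights."""
    counts = np.asarray(class_pixel_counts, dtype=np.float64)
    freq = counts / counts.sum()          # per-class pixel frequency
    return 1.0 / np.log(c + freq)

# Hypothetical usage: count pixels per class over the train split, then
# np.save("cityscapes_classes_weights_log.npy", compute_log_class_weights(counts))
```

With weights like these passed to the cross-entropy loss, rare Cityscapes classes (rider, motorcycle, ...) contribute more to the gradient, which often helps the val MIoU more than the pixel accuracy.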
@ranjiewwen Thanks very much! I just ran this code; after 162 epochs, val mIoU is 0.69. How about yours? I am going to test this afternoon; once I get results, I will update here.
@ranjiewwen Sorry to disturb you again. I just want to know: have you run test.py successfully? I ran this code but keep hitting this error:
ImportError: cannot import name 'cfg'
and I got nothing in the Results and Preweights directories. Do you know how to solve this? Hoping for your reply!
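Not the author, but `ImportError: cannot import name 'cfg'` is often just a path problem: the module defining `cfg` is not importable from wherever test.py was launched. One common workaround is to put the project root on `sys.path` before the failing import. The snippet below only demonstrates the mechanism with a throwaway module (`my_config` is a stand-in name, not a module from this repo):

```python
import os
import sys
import tempfile

# Stand-in for the repository root containing the config module.
project_root = tempfile.mkdtemp()
with open(os.path.join(project_root, "my_config.py"), "w") as f:
    f.write("cfg = {'dataset': 'cityscapes'}\n")   # module defining `cfg`

sys.path.insert(0, project_root)                   # make it importable
from my_config import cfg                          # now resolves

print(cfg["dataset"])                              # -> cityscapes
```

In practice, running `python test.py` from the repository root (rather than from a subdirectory) usually avoids the error without any code change.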