chinese_ocr
Question: acc stays at 0
Thanks to the author for sharing such a great project. When training on images generated with text_render, the loss keeps oscillating around a fairly large value and acc stays at 0; I'm not sure which parameters need adjusting. The images are 32*280, with 200,000 training samples, and the goal is to recognize variable-length strings over 0-9 and A-Z.
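For reference, a minimal sketch of what a 0-9 / A-Z character set looks like for a CTC-style recognizer (the variable names are illustrative and not taken from this repository):

```python
# Hypothetical character-set setup for 0-9 and A-Z with a CTC blank class.
import string

charset = string.digits + string.ascii_uppercase   # 36 characters
char_to_id = {c: i for i, c in enumerate(charset)}
num_classes = len(charset) + 1                      # +1 for the CTC blank

def encode_label(text):
    """Map a label string such as 'A3B9' to the integer ids CTC expects."""
    return [char_to_id[c] for c in text]

print(num_classes)           # 37
print(encode_label("A3B9"))  # [10, 3, 11, 9]
```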
Have you solved this yet? I'm running into the same thing, and it puzzles me: when I train on my own variable-length data, acc stays at 0, but when I train on the synthetic Chinese dataset linked by the author, acc looks perfectly normal.
Hi, not yet. The author suspected it might be an image-quality issue and asked me to upload a few samples, but there has been no feedback so far. Someone else hit the same situation and reported that increasing the lr and the steps solved it; you could give that a try.
I ran into this problem during training as well. Any suggestions?
Try increasing the lr or the steps.
I tried that and nothing changed. Could it be related to the size of the training set?
Possibly; my training set isn't large either.
When the training set is small, and especially when training from scratch, keeping the author's original lr does make it easy for acc to stay at 0. With 700 training samples, 300 test samples, and an initial lr of 0.0005, my acc stayed at 0 throughout training. After raising the lr tenfold to an initial value of 0.005, acc was still 0 at first, but as training went on it stopped being 0. Starting from an already-trained model instead of training from scratch also shortens the period where acc is 0. Also keep an eye on the loss: when the loss is above 10, acc is very likely to be 0; once the loss drops below 8, acc stops being 0.
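A minimal sketch of those two adjustments in Keras (raising the initial lr and warm-starting from an existing checkpoint). The function name and checkpoint path are illustrative, and it assumes the CTC loss is computed inside the model by a Lambda output named 'ctc', as in common Keras CRNN examples; the actual training script in this repository may differ:

```python
from keras.optimizers import Adam

def configure_for_small_dataset(model, weights_path=None):
    # 1) Raise the initial learning rate (e.g. 0.0005 -> 0.005) so the CTC
    #    loss can drop below ~8, where acc typically stops being 0.
    model.compile(optimizer=Adam(lr=0.005),
                  loss={'ctc': lambda y_true, y_pred: y_pred})
    # 2) Warm-start from an already-trained checkpoint instead of training
    #    from scratch, which shortens the period where acc stays at 0.
    if weights_path is not None:
        model.load_weights(weights_path, by_name=True)
    return model
```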
D:\develop\Anaconda3\envs\alpr-unconstrained-master\python.exe D:/data/project/CTPN_CTC/train/train_lp.py
Using TensorFlow backend.
2019-04-22 16:06:40.011247: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
2019-04-22 16:06:40.305090: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties: name: GeForce GTX 1060 3GB major: 6 minor: 1 memoryClockRate(GHz): 1.7085 pciBusID: 0000:01:00.0 totalMemory: 3.00GiB freeMemory: 2.42GiB
2019-04-22 16:06:40.305320: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2019-04-22 16:06:40.618893: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-04-22 16:06:40.619024: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0
2019-04-22 16:06:40.619099: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N
2019-04-22 16:06:40.619259: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3072 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 3GB, pci bus id: 0000:01:00.0, compute capability: 6.1)
2019-04-22 16:06:40.656595: E tensorflow/stream_executor/cuda/cuda_driver.cc:806] failed to allocate 3.00G (3221225472 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2019-04-22 16:06:40.656757: E tensorflow/stream_executor/cuda/cuda_driver.cc:806] failed to allocate 2.70G (2899102720 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2019-04-22 16:06:40.656914: E tensorflow/stream_executor/cuda/cuda_driver.cc:806] failed to allocate 2.43G (2609192448 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
Layer (type) Output Shape Param # Connected to
the_input (InputLayer) (None, 128, None, 1) 0
conv2d_1 (Conv2D) (None, 64, None, 64) 1600 the_input[0][0]
batch_normalization_1 (BatchNor (None, 64, None, 64) 256 conv2d_1[0][0]
activation_1 (Activation) (None, 64, None, 64) 0 batch_normalization_1[0][0]
conv2d_2 (Conv2D) (None, 64, None, 8) 4616 activation_1[0][0]
concatenate_1 (Concatenate) (None, 64, None, 72) 0 conv2d_1[0][0]
conv2d_2[0][0]
batch_normalization_2 (BatchNor (None, 64, None, 72) 288 concatenate_1[0][0]
activation_2 (Activation) (None, 64, None, 72) 0 batch_normalization_2[0][0]
conv2d_3 (Conv2D) (None, 64, None, 8) 5192 activation_2[0][0]
concatenate_2 (Concatenate) (None, 64, None, 80) 0 concatenate_1[0][0]
conv2d_3[0][0]
batch_normalization_3 (BatchNor (None, 64, None, 80) 320 concatenate_2[0][0]
activation_3 (Activation) (None, 64, None, 80) 0 batch_normalization_3[0][0]
conv2d_4 (Conv2D) (None, 64, None, 8) 5768 activation_3[0][0]
concatenate_3 (Concatenate) (None, 64, None, 88) 0 concatenate_2[0][0]
conv2d_4[0][0]
batch_normalization_4 (BatchNor (None, 64, None, 88) 352 concatenate_3[0][0]
activation_4 (Activation) (None, 64, None, 88) 0 batch_normalization_4[0][0]
conv2d_5 (Conv2D) (None, 64, None, 8) 6344 activation_4[0][0]
concatenate_4 (Concatenate) (None, 64, None, 96) 0 concatenate_3[0][0]
conv2d_5[0][0]
batch_normalization_5 (BatchNor (None, 64, None, 96) 384 concatenate_4[0][0]
activation_5 (Activation) (None, 64, None, 96) 0 batch_normalization_5[0][0]
conv2d_6 (Conv2D) (None, 64, None, 8) 6920 activation_5[0][0]
concatenate_5 (Concatenate) (None, 64, None, 104 0 concatenate_4[0][0]
conv2d_6[0][0]
batch_normalization_6 (BatchNor (None, 64, None, 104 416 concatenate_5[0][0]
activation_6 (Activation) (None, 64, None, 104 0 batch_normalization_6[0][0]
conv2d_7 (Conv2D) (None, 64, None, 8) 7496 activation_6[0][0]
concatenate_6 (Concatenate) (None, 64, None, 112 0 concatenate_5[0][0]
conv2d_7[0][0]
batch_normalization_7 (BatchNor (None, 64, None, 112 448 concatenate_6[0][0]
activation_7 (Activation) (None, 64, None, 112 0 batch_normalization_7[0][0]
conv2d_8 (Conv2D) (None, 64, None, 8) 8072 activation_7[0][0]
concatenate_7 (Concatenate) (None, 64, None, 120 0 concatenate_6[0][0]
conv2d_8[0][0]
batch_normalization_8 (BatchNor (None, 64, None, 120 480 concatenate_7[0][0]
activation_8 (Activation) (None, 64, None, 120 0 batch_normalization_8[0][0]
conv2d_9 (Conv2D) (None, 64, None, 8) 8648 activation_8[0][0]
concatenate_8 (Concatenate) (None, 64, None, 128 0 concatenate_7[0][0]
conv2d_9[0][0]
batch_normalization_9 (BatchNor (None, 64, None, 128 512 concatenate_8[0][0]
activation_9 (Activation) (None, 64, None, 128 0 batch_normalization_9[0][0]
conv2d_10 (Conv2D) (None, 64, None, 128 16384 activation_9[0][0]
dropout_1 (Dropout) (None, 64, None, 128 0 conv2d_10[0][0]
average_pooling2d_1 (AveragePoo (None, 32, None, 128 0 dropout_1[0][0]
batch_normalization_10 (BatchNo (None, 32, None, 128 512 average_pooling2d_1[0][0]
activation_10 (Activation) (None, 32, None, 128 0 batch_normalization_10[0][0]
conv2d_11 (Conv2D) (None, 32, None, 8) 9224 activation_10[0][0]
concatenate_9 (Concatenate) (None, 32, None, 136 0 average_pooling2d_1[0][0]
conv2d_11[0][0]
batch_normalization_11 (BatchNo (None, 32, None, 136 544 concatenate_9[0][0]
activation_11 (Activation) (None, 32, None, 136 0 batch_normalization_11[0][0]
conv2d_12 (Conv2D) (None, 32, None, 8) 9800 activation_11[0][0]
concatenate_10 (Concatenate) (None, 32, None, 144 0 concatenate_9[0][0]
conv2d_12[0][0]
batch_normalization_12 (BatchNo (None, 32, None, 144 576 concatenate_10[0][0]
activation_12 (Activation) (None, 32, None, 144 0 batch_normalization_12[0][0]
conv2d_13 (Conv2D) (None, 32, None, 8) 10376 activation_12[0][0]
concatenate_11 (Concatenate) (None, 32, None, 152 0 concatenate_10[0][0]
conv2d_13[0][0]
batch_normalization_13 (BatchNo (None, 32, None, 152 608 concatenate_11[0][0]
activation_13 (Activation) (None, 32, None, 152 0 batch_normalization_13[0][0]
conv2d_14 (Conv2D) (None, 32, None, 8) 10952 activation_13[0][0]
concatenate_12 (Concatenate) (None, 32, None, 160 0 concatenate_11[0][0]
conv2d_14[0][0]
batch_normalization_14 (BatchNo (None, 32, None, 160 640 concatenate_12[0][0]
activation_14 (Activation) (None, 32, None, 160 0 batch_normalization_14[0][0]
conv2d_15 (Conv2D) (None, 32, None, 8) 11528 activation_14[0][0]
concatenate_13 (Concatenate) (None, 32, None, 168 0 concatenate_12[0][0]
conv2d_15[0][0]
batch_normalization_15 (BatchNo (None, 32, None, 168 672 concatenate_13[0][0]
activation_15 (Activation) (None, 32, None, 168 0 batch_normalization_15[0][0]
conv2d_16 (Conv2D) (None, 32, None, 8) 12104 activation_15[0][0]
concatenate_14 (Concatenate) (None, 32, None, 176 0 concatenate_13[0][0]
conv2d_16[0][0]
batch_normalization_16 (BatchNo (None, 32, None, 176 704 concatenate_14[0][0]
activation_16 (Activation) (None, 32, None, 176 0 batch_normalization_16[0][0]
conv2d_17 (Conv2D) (None, 32, None, 8) 12680 activation_16[0][0]
concatenate_15 (Concatenate) (None, 32, None, 184 0 concatenate_14[0][0]
conv2d_17[0][0]
batch_normalization_17 (BatchNo (None, 32, None, 184 736 concatenate_15[0][0]
activation_17 (Activation) (None, 32, None, 184 0 batch_normalization_17[0][0]
conv2d_18 (Conv2D) (None, 32, None, 8) 13256 activation_17[0][0]
concatenate_16 (Concatenate) (None, 32, None, 192 0 concatenate_15[0][0]
conv2d_18[0][0]
batch_normalization_18 (BatchNo (None, 32, None, 192 768 concatenate_16[0][0]
activation_18 (Activation) (None, 32, None, 192 0 batch_normalization_18[0][0]
conv2d_19 (Conv2D) (None, 32, None, 128 24576 activation_18[0][0]
dropout_2 (Dropout) (None, 32, None, 128 0 conv2d_19[0][0]
average_pooling2d_2 (AveragePoo (None, 16, None, 128 0 dropout_2[0][0]
batch_normalization_19 (BatchNo (None, 16, None, 128 512 average_pooling2d_2[0][0]
activation_19 (Activation) (None, 16, None, 128 0 batch_normalization_19[0][0]
conv2d_20 (Conv2D) (None, 16, None, 8) 9224 activation_19[0][0]
concatenate_17 (Concatenate) (None, 16, None, 136 0 average_pooling2d_2[0][0]
conv2d_20[0][0]
batch_normalization_20 (BatchNo (None, 16, None, 136 544 concatenate_17[0][0]
activation_20 (Activation) (None, 16, None, 136 0 batch_normalization_20[0][0]
conv2d_21 (Conv2D) (None, 16, None, 8) 9800 activation_20[0][0]
concatenate_18 (Concatenate) (None, 16, None, 144 0 concatenate_17[0][0]
conv2d_21[0][0]
batch_normalization_21 (BatchNo (None, 16, None, 144 576 concatenate_18[0][0]
activation_21 (Activation) (None, 16, None, 144 0 batch_normalization_21[0][0]
conv2d_22 (Conv2D) (None, 16, None, 8) 10376 activation_21[0][0]
concatenate_19 (Concatenate) (None, 16, None, 152 0 concatenate_18[0][0]
conv2d_22[0][0]
batch_normalization_22 (BatchNo (None, 16, None, 152 608 concatenate_19[0][0]
activation_22 (Activation) (None, 16, None, 152 0 batch_normalization_22[0][0]
conv2d_23 (Conv2D) (None, 16, None, 8) 10952 activation_22[0][0]
concatenate_20 (Concatenate) (None, 16, None, 160 0 concatenate_19[0][0]
conv2d_23[0][0]
batch_normalization_23 (BatchNo (None, 16, None, 160 640 concatenate_20[0][0]
activation_23 (Activation) (None, 16, None, 160 0 batch_normalization_23[0][0]
conv2d_24 (Conv2D) (None, 16, None, 8) 11528 activation_23[0][0]
concatenate_21 (Concatenate) (None, 16, None, 168 0 concatenate_20[0][0]
conv2d_24[0][0]
batch_normalization_24 (BatchNo (None, 16, None, 168 672 concatenate_21[0][0]
activation_24 (Activation) (None, 16, None, 168 0 batch_normalization_24[0][0]
conv2d_25 (Conv2D) (None, 16, None, 8) 12104 activation_24[0][0]
concatenate_22 (Concatenate) (None, 16, None, 176 0 concatenate_21[0][0]
conv2d_25[0][0]
batch_normalization_25 (BatchNo (None, 16, None, 176 704 concatenate_22[0][0]
activation_25 (Activation) (None, 16, None, 176 0 batch_normalization_25[0][0]
conv2d_26 (Conv2D) (None, 16, None, 8) 12680 activation_25[0][0]
concatenate_23 (Concatenate) (None, 16, None, 184 0 concatenate_22[0][0]
conv2d_26[0][0]
batch_normalization_26 (BatchNo (None, 16, None, 184 736 concatenate_23[0][0]
activation_26 (Activation) (None, 16, None, 184 0 batch_normalization_26[0][0]
conv2d_27 (Conv2D) (None, 16, None, 8) 13256 activation_26[0][0]
concatenate_24 (Concatenate) (None, 16, None, 192 0 concatenate_23[0][0]
conv2d_27[0][0]
batch_normalization_27 (BatchNo (None, 16, None, 192 768 concatenate_24[0][0]
activation_27 (Activation) (None, 16, None, 192 0 batch_normalization_27[0][0]
permute (Permute) (None, None, 16, 192 0 activation_27[0][0]
flatten (TimeDistributed) (None, None, 3072) 0 permute[0][0]
out (Dense) (None, None, 95) 291935 flatten[0][0]
Total params: 582,367
Trainable params: 574,879
Non-trainable params: 7,488
Loading model weights... done!
-----------Start training-----------
Epoch 1/10
2019-04-22 16:07:03.991155: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.24GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-04-22 16:07:04.094331: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.05GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-04-22 16:07:04.097108: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.17GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
1/87 [..............................] - ETA: 7:37 - loss: 10.8860 - acc: 0.0000e+00 2/87 [..............................] - ETA: 3:52 - loss: 19.2407 - acc: 0.0000e+00 3/87 [>.............................] - ETA: 2:37 - loss: 19.0398 - acc: 0.0000e+00 4/87 [>.............................] - ETA: 1:59 - loss: 19.6011 - acc: 0.0000e+00 5/87 [>.............................] - ETA: 1:36 - loss: 19.1960 - acc: 0.0000e+00 6/87 [=>............................] - ETA: 1:21 - loss: 19.1496 - acc: 0.0000e+00 7/87 [=>............................] - ETA: 1:10 - loss: 18.6608 - acc: 0.0000e+00 8/87 [=>............................] - ETA: 1:02 - loss: 18.6730 - acc: 0.0000e+00 9/87 [==>...........................] - ETA: 56s - loss: 18.7139 - acc: 0.0000e+00 10/87 [==>...........................] - ETA: 51s - loss: 18.4229 - acc: 0.0000e+00 11/87 [==>...........................] - ETA: 46s - loss: 18.0890 - acc: 0.0000e+00 12/87 [===>..........................] - ETA: 43s - loss: 18.0737 - acc: 0.0000e+00 13/87 [===>..........................] - ETA: 40s - loss: 17.9026 - acc: 0.0000e+00 14/87 [===>..........................] - ETA: 37s - loss: 17.9883 - acc: 0.0000e+00 15/87 [====>.........................] - ETA: 35s - loss: 17.7085 - acc: 0.0000e+00 16/87 [====>.........................] - ETA: 33s - loss: 17.4988 - acc: 0.0000e+00 17/87 [====>.........................] - ETA: 31s - loss: 17.2275 - acc: 0.0000e+00 18/87 [=====>........................] - ETA: 29s - loss: 17.0234 - acc: 0.0000e+00 19/87 [=====>........................] - ETA: 28s - loss: 16.9930 - acc: 0.0000e+00 20/87 [=====>........................] - ETA: 27s - loss: 17.0790 - acc: 0.0000e+00 21/87 [======>.......................] - ETA: 25s - loss: 17.1779 - acc: 0.0000e+00 22/87 [======>.......................] - ETA: 24s - loss: 17.1385 - acc: 0.0000e+00 23/87 [======>.......................] - ETA: 23s - loss: 16.8586 - acc: 0.0000e+00 24/87 [=======>......................] - ETA: 22s - loss: 16.8368 - acc: 0.0000e+00 25/87 [=======>......................] - ETA: 21s - loss: 16.7841 - acc: 0.0000e+00 26/87 [=======>......................] - ETA: 21s - loss: 16.8199 - acc: 0.0000e+00 27/87 [========>.....................] - ETA: 20s - loss: 16.7552 - acc: 0.0000e+00 28/87 [========>.....................] - ETA: 19s - loss: 16.7901 - acc: 0.0000e+00 29/87 [=========>....................] - ETA: 18s - loss: 16.7129 - acc: 0.0000e+00 30/87 [=========>....................] - ETA: 18s - loss: 16.7074 - acc: 0.0000e+00 31/87 [=========>....................] - ETA: 17s - loss: 16.6222 - acc: 0.0000e+00 32/87 [==========>...................] - ETA: 16s - loss: 16.4786 - acc: 0.0000e+00 33/87 [==========>...................] - ETA: 16s - loss: 16.3815 - acc: 0.0000e+00 34/87 [==========>...................] - ETA: 15s - loss: 16.3516 - acc: 0.0000e+00 35/87 [===========>..................] - ETA: 15s - loss: 16.2946 - acc: 0.0000e+00 36/87 [===========>..................] - ETA: 14s - loss: 16.2149 - acc: 0.0000e+00 37/87 [===========>..................] - ETA: 14s - loss: 16.1365 - acc: 0.0000e+00 38/87 [============>.................] - ETA: 13s - loss: 16.1037 - acc: 0.0000e+00 39/87 [============>.................] - ETA: 13s - loss: 16.0052 - acc: 0.0000e+00 40/87 [============>.................] - ETA: 12s - loss: 15.9292 - acc: 0.0000e+00 41/87 [=============>................] - ETA: 12s - loss: 16.0010 - acc: 0.0000e+00 42/87 [=============>................] 
- ETA: 12s - loss: 15.9663 - acc: 0.0000e+00 43/87 [=============>................] - ETA: 11s - loss: 16.0228 - acc: 0.0000e+00 44/87 [==============>...............] - ETA: 11s - loss: 15.8755 - acc: 0.0000e+00 45/87 [==============>...............] - ETA: 10s - loss: 15.7836 - acc: 0.0000e+00 46/87 [==============>...............] - ETA: 10s - loss: 15.6972 - acc: 0.0000e+00 47/87 [===============>..............] - ETA: 10s - loss: 15.5532 - acc: 0.0000e+00 48/87 [===============>..............] - ETA: 9s - loss: 15.5204 - acc: 0.0000e+00 49/87 [===============>..............] - ETA: 9s - loss: 15.4718 - acc: 0.0000e+00 50/87 [================>.............] - ETA: 9s - loss: 15.3952 - acc: 0.0000e+00 51/87 [================>.............] - ETA: 8s - loss: 15.2977 - acc: 0.0000e+00 52/87 [================>.............] - ETA: 8s - loss: 15.1540 - acc: 0.0000e+00 53/87 [=================>............] - ETA: 8s - loss: 15.0993 - acc: 0.0000e+00 54/87 [=================>............] - ETA: 7s - loss: 15.0204 - acc: 0.0000e+00 55/87 [=================>............] - ETA: 7s - loss: 15.1036 - acc: 0.0000e+00 56/87 [==================>...........] - ETA: 7s - loss: 15.1317 - acc: 0.0000e+00 57/87 [==================>...........] - ETA: 7s - loss: 15.0634 - acc: 0.0000e+00 58/87 [===================>..........] - ETA: 6s - loss: 15.0228 - acc: 0.0000e+00 59/87 [===================>..........] - ETA: 6s - loss: 14.9588 - acc: 0.0000e+00 60/87 [===================>..........] - ETA: 6s - loss: 14.9471 - acc: 0.0000e+00 61/87 [====================>.........] - ETA: 5s - loss: 14.9063 - acc: 0.0000e+00 62/87 [====================>.........] - ETA: 5s - loss: 14.9460 - acc: 0.0000e+00 63/87 [====================>.........] - ETA: 5s - loss: 14.8353 - acc: 0.0000e+00 64/87 [=====================>........] - ETA: 5s - loss: 14.7554 - acc: 0.0000e+00 65/87 [=====================>........] - ETA: 4s - loss: 14.6927 - acc: 0.0000e+00 66/87 [=====================>........] - ETA: 4s - loss: 14.6676 - acc: 0.0000e+00 67/87 [======================>.......] - ETA: 4s - loss: 14.6354 - acc: 0.0000e+00 68/87 [======================>.......] - ETA: 4s - loss: 14.5597 - acc: 0.0000e+00 69/87 [======================>.......] - ETA: 3s - loss: 14.4895 - acc: 0.0000e+00 70/87 [=======================>......] - ETA: 3s - loss: 14.4133 - acc: 0.0000e+00 71/87 [=======================>......] - ETA: 3s - loss: 14.4057 - acc: 0.0000e+00 72/87 [=======================>......] - ETA: 3s - loss: 14.3700 - acc: 0.0000e+00 73/87 [========================>.....] - ETA: 3s - loss: 14.3082 - acc: 0.0000e+00 74/87 [========================>.....] - ETA: 2s - loss: 14.2159 - acc: 0.0000e+00 75/87 [========================>.....] - ETA: 2s - loss: 14.1612 - acc: 0.0000e+00 76/87 [=========================>....] - ETA: 2s - loss: 14.1065 - acc: 0.0000e+00 77/87 [=========================>....] - ETA: 2s - loss: 14.0383 - acc: 0.0000e+00 78/87 [=========================>....] - ETA: 1s - loss: 13.9342 - acc: 0.0000e+00 79/87 [==========================>...] - ETA: 1s - loss: 13.9503 - acc: 0.0000e+00 80/87 [==========================>...] - ETA: 1s - loss: 13.9030 - acc: 0.0000e+00 81/87 [==========================>...] - ETA: 1s - loss: 13.8041 - acc: 0.0000e+00 82/87 [===========================>..] - ETA: 1s - loss: 13.7370 - acc: 0.0000e+00 83/87 [===========================>..] - ETA: 0s - loss: 13.6369 - acc: 0.0000e+00 84/87 [===========================>..] 
- ETA: 0s - loss: 13.5511 - acc: 0.0000e+00 85/87 [============================>.] - ETA: 0s - loss: 13.4810 - acc: 0.0000e+00 86/87 [============================>.] - ETA: 0s - loss: 13.3974 - acc: 0.0000e+00 87/87 [==============================] - 20s 227ms/step - loss: 13.3463 - acc: 0.0000e+00 - val_loss: 13.8527 - val_acc: 0.0000e+00 Epoch 2/10
1/87 [..............................] - ETA: 13s - loss: 6.8483 - acc: 0.0000e+00
2/87 [..............................] - ETA: 13s - loss: 7.6435 - acc: 0.0000e+00
3/87 [>.............................] - ETA: 12s - loss: 8.7858 - acc: 0.0000e+00
4/87 [>.............................] - ETA: 12s - loss: 8.9345 - acc: 0.0000e+00
5/87 [>.............................] - ETA: 12s - loss: 9.7619 - acc: 0.0000e+00
6/87 [=>............................] - ETA: 12s - loss: 9.7888 - acc: 0.0000e+00
7/87 [=>............................] - ETA: 11s - loss: 9.6434 - acc: 0.0000e+00
8/87 [=>............................] - ETA: 11s - loss: 9.5977 - acc: 0.0000e+00
9/87 [==>...........................] - ETA: 11s - loss: 9.4199 - acc: 0.0000e+00
10/87 [==>...........................] - ETA: 11s - loss: 9.0438 - acc: 0.0000e+00
11/87 [==>...........................] - ETA: 11s - loss: 8.8961 - acc: 0.0000e+00
12/87 [===>..........................] - ETA: 11s - loss: 8.6482 - acc: 0.0000e+00
13/87 [===>..........................] - ETA: 11s - loss: 8.6559 - acc: 0.0096
14/87 [===>..........................] - ETA: 10s - loss: 8.2588 - acc: 0.0089
15/87 [====>.........................] - ETA: 10s - loss: 8.5910 - acc: 0.0083
16/87 [====>.........................] - ETA: 10s - loss: 8.4910 - acc: 0.0156
17/87 [====>.........................] - ETA: 10s - loss: 8.4993 - acc: 0.0147
18/87 [=====>........................] - ETA: 10s - loss: 8.5145 - acc: 0.0139
19/87 [=====>........................] - ETA: 10s - loss: 8.5446 - acc: 0.0132
20/87 [=====>........................] - ETA: 9s - loss: 8.4430 - acc: 0.0125
21/87 [======>.......................] - ETA: 9s - loss: 8.4307 - acc: 0.0119
22/87 [======>.......................] - ETA: 9s - loss: 8.2887 - acc: 0.0114
23/87 [======>.......................] - ETA: 9s - loss: 8.2329 - acc: 0.0109
24/87 [=======>......................] - ETA: 9s - loss: 8.3079 - acc: 0.0104
25/87 [=======>......................] - ETA: 9s - loss: 8.3876 - acc: 0.0100
26/87 [=======>......................] - ETA: 8s - loss: 8.3679 - acc: 0.0096
27/87 [========>.....................] - ETA: 8s - loss: 8.2373 - acc: 0.0093
28/87 [========>.....................] - ETA: 8s - loss: 8.2694 - acc: 0.0089
29/87 [=========>....................] - ETA: 8s - loss: 8.2608 - acc: 0.0086
30/87 [=========>....................] - ETA: 8s - loss: 8.2433 - acc: 0.0083
31/87 [=========>....................] - ETA: 8s - loss: 8.2817 - acc: 0.0081
32/87 [==========>...................] - ETA: 8s - loss: 8.3073 - acc: 0.0078
33/87 [==========>...................] - ETA: 7s - loss: 8.3707 - acc: 0.0076
34/87 [==========>...................] - ETA: 7s - loss: 8.3822 - acc: 0.0074
35/87 [===========>..................] - ETA: 7s - loss: 8.3820 - acc: 0.0071
36/87 [===========>..................] - ETA: 7s - loss: 8.5554 - acc: 0.0069
37/87 [===========>..................] - ETA: 7s - loss: 8.5137 - acc: 0.0068
38/87 [============>.................] - ETA: 7s - loss: 8.4466 - acc: 0.0066
39/87 [============>.................] - ETA: 7s - loss: 8.3349 - acc: 0.0096
40/87 [============>.................] - ETA: 6s - loss: 8.2341 - acc: 0.0094
41/87 [=============>................] - ETA: 6s - loss: 8.2771 - acc: 0.0091
42/87 [=============>................] - ETA: 6s - loss: 8.2464 - acc: 0.0089
43/87 [=============>................] - ETA: 6s - loss: 8.1427 - acc: 0.0087
44/87 [==============>...............] - ETA: 6s - loss: 8.0317 - acc: 0.0085
45/87 [==============>...............] - ETA: 6s - loss: 7.9877 - acc: 0.0111
46/87 [==============>...............] - ETA: 6s - loss: 7.9022 - acc: 0.0109
47/87 [===============>..............] - ETA: 5s - loss: 8.0111 - acc: 0.0106
48/87 [===============>..............] - ETA: 5s - loss: 8.0120 - acc: 0.0104
49/87 [===============>..............] - ETA: 5s - loss: 7.9244 - acc: 0.0102
50/87 [================>.............] - ETA: 5s - loss: 7.9096 - acc: 0.0100
51/87 [================>.............] - ETA: 5s - loss: 7.8483 - acc: 0.0123
52/87 [================>.............] - ETA: 5s - loss: 7.7958 - acc: 0.0144
53/87 [=================>............] - ETA: 4s - loss: 7.8319 - acc: 0.0142
54/87 [=================>............] - ETA: 4s - loss: 7.7792 - acc: 0.0185
55/87 [=================>............] - ETA: 4s - loss: 7.7461 - acc: 0.0182
56/87 [==================>...........] - ETA: 4s - loss: 7.7413 - acc: 0.0179
57/87 [==================>...........] - ETA: 4s - loss: 7.6873 - acc: 0.0175
58/87 [===================>..........] - ETA: 4s - loss: 7.6436 - acc: 0.0172
59/87 [===================>..........] - ETA: 4s - loss: 7.5916 - acc: 0.0169
60/87 [===================>..........] - ETA: 3s - loss: 7.5304 - acc: 0.0187
61/87 [====================>.........] - ETA: 3s - loss: 7.5422 - acc: 0.0184
62/87 [====================>.........] - ETA: 3s - loss: 7.5197 - acc: 0.0181
63/87 [====================>.........] - ETA: 3s - loss: 7.5267 - acc: 0.0179
64/87 [=====================>........] - ETA: 3s - loss: 7.4983 - acc: 0.0176
65/87 [=====================>........] - ETA: 3s - loss: 7.4640 - acc: 0.0173
66/87 [=====================>........] - ETA: 3s - loss: 7.4274 - acc: 0.0170
67/87 [======================>.......] - ETA: 2s - loss: 7.4695 - acc: 0.0168
68/87 [======================>.......] - ETA: 2s - loss: 7.5368 - acc: 0.0165
69/87 [======================>.......] - ETA: 2s - loss: 7.4981 - acc: 0.0163
70/87 [=======================>......] - ETA: 2s - loss: 7.4848 - acc: 0.0161
71/87 [=======================>......] - ETA: 2s - loss: 7.4809 - acc: 0.0158
72/87 [=======================>......] - ETA: 2s - loss: 7.5204 - acc: 0.0174
73/87 [========================>.....] - ETA: 2s - loss: 7.5331 - acc: 0.0171
74/87 [========================>.....] - ETA: 1s - loss: 7.5570 - acc: 0.0169
75/87 [========================>.....] - ETA: 1s - loss: 7.5066 - acc: 0.0167
76/87 [=========================>....] - ETA: 1s - loss: 7.4511 - acc: 0.0164
77/87 [=========================>....] - ETA: 1s - loss: 7.4028 - acc: 0.0162
78/87 [=========================>....] - ETA: 1s - loss: 7.3546 - acc: 0.0176
79/87 [==========================>...] - ETA: 1s - loss: 7.2821 - acc: 0.0174
80/87 [==========================>...] - ETA: 1s - loss: 7.3695 - acc: 0.0172
81/87 [==========================>...] - ETA: 0s - loss: 7.3234 - acc: 0.0185
82/87 [===========================>..] - ETA: 0s - loss: 7.2999 - acc: 0.0213
83/87 [===========================>..] - ETA: 0s - loss: 7.2791 - acc: 0.0226
84/87 [===========================>..] - ETA: 0s - loss: 7.2587 - acc: 0.0238
85/87 [============================>.] - ETA: 0s - loss: 7.2078 - acc: 0.0265
86/87 [============================>.] - ETA: 0s - loss: 7.1642 - acc: 0.0262
Process finished with exit code -1
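As an aside on the CUDA_ERROR_OUT_OF_MEMORY lines in the log above: on a 3 GB card it often helps to let TensorFlow allocate GPU memory on demand instead of grabbing it all up front. A minimal TF 1.x / Keras sketch, unrelated to the acc question itself:

```python
import tensorflow as tf
from keras import backend as K

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # grow GPU memory as needed
# config.gpu_options.per_process_gpu_memory_fraction = 0.8  # or cap the share
K.set_session(tf.Session(config=config))
```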
Raising the lr speeds up learning, while increasing the steps extends the training time; either one can resolve the acc-stuck-at-0 situation.
@Programmerwyl Does your 700-sample dataset cover the whole dictionary, or is it a small training set containing only certain specific characters? If it's a small dataset with only specific characters, a model trained on it can presumably only recognize samples containing those characters, right?
I'm training on strings of 1, 2, 3 and 10 characters, and I first generated 1,000,000 images. The loss was dropping too slowly, so I set the initial learning rate to 0.05 with 100,000 steps. Right now 5 epochs take 1-2 hours (the more characters, the longer training takes). After 12 hours of training, accuracy is roughly 0.62 on single-character strings and 0.12 on 10-character strings, so raising the lr to about 0.05 really isn't too aggressive: the loss keeps dropping, and by the 11th epoch the accuracy was no longer zero. Incidentally, after those 12 hours I lowered the lr to 0.005; you could also turn the lr down further once it starts bouncing up and down.
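A minimal sketch of that schedule as Keras callbacks (start at 0.05, drop to 0.005 after about 10 epochs; the values are the commenter's, not a recommendation). ReduceLROnPlateau is shown as an alternative way to lower the lr once the loss stops improving:

```python
from keras.callbacks import LearningRateScheduler, ReduceLROnPlateau

def schedule(epoch):
    # Keep a large lr for the first ~10 epochs, then drop it tenfold.
    return 0.05 if epoch < 10 else 0.005

callbacks = [
    LearningRateScheduler(schedule),
    # Alternative: lower the lr automatically when val_loss plateaus.
    # ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=2),
]
# model.fit_generator(..., callbacks=callbacks)
```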
Is the number of characters per image the same across all your training images? Have you tried a training set where different images contain different numbers of characters?
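For anyone hitting the variable-length case: the usual approach with a CTC loss is to pad all label sequences in a batch to a common maximum length and pass each sample's true length separately. A minimal sketch (the names are illustrative, not the generator used in this repository):

```python
import numpy as np

def pack_labels(encoded_labels, max_len, pad_value=-1):
    """encoded_labels: list of integer id lists with differing lengths."""
    batch = len(encoded_labels)
    labels = np.full((batch, max_len), pad_value, dtype=np.int32)
    label_length = np.zeros((batch, 1), dtype=np.int32)
    for i, seq in enumerate(encoded_labels):
        labels[i, :len(seq)] = seq        # copy the real ids
        label_length[i, 0] = len(seq)     # record the true length for CTC
    return labels, label_length

labels, label_length = pack_labels([[1, 2, 3], [4, 5]], max_len=10)
print(labels.shape, label_length.ravel())  # (2, 10) [3 2]
```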