
Yolo4tiny nBatches = 0 while running demo

Open Tacokeet opened this issue 4 years ago • 4 comments

After exporting weights using: ./darknet export yolo4tiny.cfg yolo4-tiny_best.weights layers

mini_batch = 8, batch = 64, time_steps = 1, train = 1
   layer   filters  size/strd(dil)      input                output
   0 conv     32       3 x 3/ 2   1280 x 736 x   3 ->  640 x 368 x  32 0.407 BF
   1 conv     64       3 x 3/ 2    640 x 368 x  32 ->  320 x 184 x  64 2.171 BF
   2 conv     64       3 x 3/ 1    320 x 184 x  64 ->  320 x 184 x  64 4.341 BF
   3 route  2                                  1/2 ->  320 x 184 x  32
   4 conv     32       3 x 3/ 1    320 x 184 x  32 ->  320 x 184 x  32 1.085 BF
   5 conv     32       3 x 3/ 1    320 x 184 x  32 ->  320 x 184 x  32 1.085 BF
   6 route  5 4                                    ->  320 x 184 x  64
   7 conv     64       1 x 1/ 1    320 x 184 x  64 ->  320 x 184 x  64 0.482 BF
   8 route  2 7                                    ->  320 x 184 x 128
   9 max                2x 2/ 2    320 x 184 x 128 ->  160 x  92 x 128 0.008 BF
  10 conv    128       3 x 3/ 1    160 x  92 x 128 ->  160 x  92 x 128 4.341 BF
  11 route  10                                 1/2 ->  160 x  92 x  64
  12 conv     64       3 x 3/ 1    160 x  92 x  64 ->  160 x  92 x  64 1.085 BF
  13 conv     64       3 x 3/ 1    160 x  92 x  64 ->  160 x  92 x  64 1.085 BF
  14 route  13 12                                  ->  160 x  92 x 128
  15 conv    128       1 x 1/ 1    160 x  92 x 128 ->  160 x  92 x 128 0.482 BF
  16 route  10 15                                  ->  160 x  92 x 256
  17 max                2x 2/ 2    160 x  92 x 256 ->   80 x  46 x 256 0.004 BF
  18 conv    256       3 x 3/ 1     80 x  46 x 256 ->   80 x  46 x 256 4.341 BF
  19 route  18                                 1/2 ->   80 x  46 x 128
  20 conv    128       3 x 3/ 1     80 x  46 x 128 ->   80 x  46 x 128 1.085 BF
  21 conv    128       3 x 3/ 1     80 x  46 x 128 ->   80 x  46 x 128 1.085 BF
  22 route  21 20                                  ->   80 x  46 x 256
  23 conv    256       1 x 1/ 1     80 x  46 x 256 ->   80 x  46 x 256 0.482 BF
  24 route  18 23                                  ->   80 x  46 x 512
  25 max                2x 2/ 2     80 x  46 x 512 ->   40 x  23 x 512 0.002 BF
  26 conv    512       3 x 3/ 1     40 x  23 x 512 ->   40 x  23 x 512 4.341 BF
  27 conv    256       1 x 1/ 1     40 x  23 x 512 ->   40 x  23 x 256 0.241 BF
  28 conv    512       3 x 3/ 1     40 x  23 x 256 ->   40 x  23 x 512 2.171 BF
  29 conv   1221       1 x 1/ 1     40 x  23 x 512 ->   40 x  23 x1221 1.150 BF
  30 yolo
[yolo] params: iou loss: ciou (4), iou_norm: 0.07, cls_norm: 1.00, scale_x_y: 1.05
nms_kind: greedynms (1), beta = 0.600000
  31 route  27                                     ->   40 x  23 x 256
  32 conv    128       1 x 1/ 1     40 x  23 x 256 ->   40 x  23 x 128 0.060 BF
  33 upsample                 2x    40 x  23 x 128 ->   80 x  46 x 128
  34 route  33 23                                  ->   80 x  46 x 384
  35 conv    256       3 x 3/ 1     80 x  46 x 384 ->   80 x  46 x 256 6.512 BF
  36 conv   1221       1 x 1/ 1     80 x  46 x 256 ->   80 x  46 x1221 2.301 BF
  37 yolo
[yolo] params: iou loss: ciou (4), iou_norm: 0.07, cls_norm: 1.00, scale_x_y: 1.05
nms_kind: greedynms (1), beta = 0.600000
Total BFLOPS 40.348
avg_outputs = 1922557
Loading weights from /home/tacokeet/yolov4-tiny_best.weights...
 seen 64, trained: 49865 K-images (779 Kilo-batches_64)
Done! Loaded 38 layers from weights-file
n: 0, type 0
Convolutional
weights: 864, biases: 32, batch_normalize: 1, groups: 1
write binary layers/c0.bin

n: 1, type 0
Convolutional
weights: 18432, biases: 64, batch_normalize: 1, groups: 1
write binary layers/c1.bin

n: 2, type 0
Convolutional
weights: 36864, biases: 64, batch_normalize: 1, groups: 1
write binary layers/c2.bin

n: 3, type 9
export ROUTE

n: 4, type 0
Convolutional
weights: 9216, biases: 32, batch_normalize: 1, groups: 1
write binary layers/c4.bin

n: 5, type 0
Convolutional
weights: 9216, biases: 32, batch_normalize: 1, groups: 1
write binary layers/c5.bin

n: 6, type 9
export ROUTE

n: 7, type 0
Convolutional
weights: 4096, biases: 64, batch_normalize: 1, groups: 1
write binary layers/c7.bin

n: 8, type 9
export ROUTE

n: 9, type 3
export MAXPOOL

n: 10, type 0
Convolutional
weights: 147456, biases: 128, batch_normalize: 1, groups: 1
write binary layers/c10.bin

n: 11, type 9
export ROUTE

n: 12, type 0
Convolutional
weights: 36864, biases: 64, batch_normalize: 1, groups: 1
write binary layers/c12.bin

n: 13, type 0
Convolutional
weights: 36864, biases: 64, batch_normalize: 1, groups: 1
write binary layers/c13.bin

n: 14, type 9
export ROUTE

n: 15, type 0
Convolutional
weights: 16384, biases: 128, batch_normalize: 1, groups: 1
write binary layers/c15.bin

n: 16, type 9
export ROUTE

n: 17, type 3
export MAXPOOL

n: 18, type 0
Convolutional
weights: 589824, biases: 256, batch_normalize: 1, groups: 1
write binary layers/c18.bin

n: 19, type 9
export ROUTE

n: 20, type 0
Convolutional
weights: 147456, biases: 128, batch_normalize: 1, groups: 1
write binary layers/c20.bin

n: 21, type 0
Convolutional
weights: 147456, biases: 128, batch_normalize: 1, groups: 1
write binary layers/c21.bin

n: 22, type 9
export ROUTE

n: 23, type 0
Convolutional
weights: 65536, biases: 256, batch_normalize: 1, groups: 1
write binary layers/c23.bin

n: 24, type 9
export ROUTE

n: 25, type 3
export MAXPOOL

n: 26, type 0
Convolutional
weights: 2359296, biases: 512, batch_normalize: 1, groups: 1
write binary layers/c26.bin

n: 27, type 0
Convolutional
weights: 131072, biases: 256, batch_normalize: 1, groups: 1
write binary layers/c27.bin

n: 28, type 0
Convolutional
weights: 1179648, biases: 512, batch_normalize: 1, groups: 1
write binary layers/c28.bin

n: 29, type 0
Convolutional
weights: 625152, biases: 1221, batch_normalize: 0, groups: 1
write binary layers/c29.bin

n: 30, type 27
export YOLO
mask: 3
biases: 12
mask 3.000000
mask 4.000000
mask 5.000000
anchor 10.000000
anchor 14.000000
anchor 23.000000
anchor 27.000000
anchor 37.000000
anchor 58.000000
anchor 81.000000
anchor 82.000000
anchor 135.000000
anchor 169.000000
anchor 344.000000
anchor 319.000000
write binary layers/g30.bin

n: 31, type 9
export ROUTE

n: 32, type 0
Convolutional
weights: 32768, biases: 128, batch_normalize: 1, groups: 1
write binary layers/c32.bin

n: 33, type 32
export UPSAMPLE

n: 34, type 9
export ROUTE

n: 35, type 0
Convolutional
weights: 884736, biases: 256, batch_normalize: 1, groups: 1
write binary layers/c35.bin

n: 36, type 0
Convolutional
weights: 312576, biases: 1221, batch_normalize: 0, groups: 1
write binary layers/c36.bin

n: 37, type 27
export YOLO
mask: 3
biases: 12
mask 1.000000
mask 2.000000
mask 3.000000
anchor 10.000000
anchor 14.000000
anchor 23.000000
anchor 27.000000
anchor 37.000000
anchor 58.000000
anchor 81.000000
anchor 82.000000
anchor 135.000000
anchor 169.000000
anchor 344.000000
anchor 319.000000
write binary layers/g37.bin


network input size: 2826240
Predicted in 5.118877 seconds.

networks output size: 4493280
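As a sanity check, the sizes in the log are consistent with the modified cfg (classes=402, three masks per [yolo] head). A minimal sketch of the arithmetic:

```python
# filters in the conv layer before each [yolo] block follows the darknet rule:
# (classes + 5) * number_of_masks
classes = 402
masks = 3                          # mask = 3,4,5 and mask = 1,2,3 -> 3 anchors each
filters = (classes + 5) * masks    # -> 1221, matching filters=1221 in the cfg

# network input size: width * height * channels
input_size = 1280 * 736 * 3        # -> 2826240, matching the log

# output size of the 46x80 YOLO head: filters * H * W
output_size = filters * 46 * 80    # -> 4493280, matching the log
```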

Then I ran ./test_yolo4tiny:

Not supported field: batch=64
Not supported field: subdivisions=8
Not supported field: momentum=0.9
Not supported field: decay=0.0005
Not supported field: angle=0
Not supported field: saturation = 1.5
Not supported field: exposure = 1.5
Not supported field: hue=.1
Not supported field: learning_rate=0.00261
Not supported field: burn_in=1000
Not supported field: max_batches = 500200
Not supported field: policy=steps
Not supported field: steps=400000,450000
Not supported field: scales=.1,.1
New NETWORK (tkDNN v0.5, CUDNN v8)
Reading weights: I=3 O=32 KERNEL=3x3x1
Reading weights: I=32 O=64 KERNEL=3x3x1
Reading weights: I=64 O=64 KERNEL=3x3x1
Reading weights: I=32 O=32 KERNEL=3x3x1
Reading weights: I=32 O=32 KERNEL=3x3x1
Reading weights: I=64 O=64 KERNEL=1x1x1
Reading weights: I=128 O=128 KERNEL=3x3x1
Reading weights: I=64 O=64 KERNEL=3x3x1
Reading weights: I=64 O=64 KERNEL=3x3x1
Reading weights: I=128 O=128 KERNEL=1x1x1
Reading weights: I=256 O=256 KERNEL=3x3x1
Reading weights: I=128 O=128 KERNEL=3x3x1
Reading weights: I=128 O=128 KERNEL=3x3x1
Reading weights: I=256 O=256 KERNEL=1x1x1
Reading weights: I=512 O=512 KERNEL=3x3x1
Reading weights: I=512 O=256 KERNEL=1x1x1
Reading weights: I=256 O=512 KERNEL=3x3x1
Reading weights: I=512 O=1221 KERNEL=1x1x1
Not supported field: anchors = 10,14,  23,27,  37,58,  81,82,  135,169,  344,319
Not supported field: jitter=.3
Not supported field: cls_normalizer=1.0
Not supported field: iou_normalizer=0.07
Not supported field: iou_loss=ciou
Not supported field: ignore_thresh = .7
Not supported field: truth_thresh = 1
Not supported field: random=0
Not supported field: nms_kind=greedynms
Not supported field: beta_nms=0.6
Reading weights: I=256 O=128 KERNEL=1x1x1
Reading weights: I=384 O=256 KERNEL=3x3x1
Reading weights: I=256 O=1221 KERNEL=1x1x1
Not supported field: anchors = 10,14,  23,27,  37,58,  81,82,  135,169,  344,319
Not supported field: jitter=.3
Not supported field: cls_normalizer=1.0
Not supported field: iou_normalizer=0.07
Not supported field: iou_loss=ciou
Not supported field: ignore_thresh = .7
Not supported field: truth_thresh = 1
Not supported field: random=0
Not supported field: nms_kind=greedynms
Not supported field: beta_nms=0.6

====================== NETWORK MODEL ======================
N.  Layer type       input (H*W,CH)        output (H*W,CH)
  0 Conv2d           736 x 1280,    3  ->  368 x  640,   32
  1 ActivationLeaky  368 x  640,   32  ->  368 x  640,   32
  2 Conv2d           368 x  640,   32  ->  184 x  320,   64
  3 ActivationLeaky  184 x  320,   64  ->  184 x  320,   64
  4 Conv2d           184 x  320,   64  ->  184 x  320,   64
  5 ActivationLeaky  184 x  320,   64  ->  184 x  320,   64
  6 Route            184 x  320,   32  ->  184 x  320,   32
  7 Conv2d           184 x  320,   32  ->  184 x  320,   32
  8 ActivationLeaky  184 x  320,   32  ->  184 x  320,   32
  9 Conv2d           184 x  320,   32  ->  184 x  320,   32
 10 ActivationLeaky  184 x  320,   32  ->  184 x  320,   32
 11 Route            184 x  320,   64  ->  184 x  320,   64
 12 Conv2d           184 x  320,   64  ->  184 x  320,   64
 13 ActivationLeaky  184 x  320,   64  ->  184 x  320,   64
 14 Route            184 x  320,  128  ->  184 x  320,  128
 15 Pooling          184 x  320,  128  ->   92 x  160,  128
 16 Conv2d            92 x  160,  128  ->   92 x  160,  128
 17 ActivationLeaky   92 x  160,  128  ->   92 x  160,  128
 18 Route             92 x  160,   64  ->   92 x  160,   64
 19 Conv2d            92 x  160,   64  ->   92 x  160,   64
 20 ActivationLeaky   92 x  160,   64  ->   92 x  160,   64
 21 Conv2d            92 x  160,   64  ->   92 x  160,   64
 22 ActivationLeaky   92 x  160,   64  ->   92 x  160,   64
 23 Route             92 x  160,  128  ->   92 x  160,  128
 24 Conv2d            92 x  160,  128  ->   92 x  160,  128
 25 ActivationLeaky   92 x  160,  128  ->   92 x  160,  128
 26 Route             92 x  160,  256  ->   92 x  160,  256
 27 Pooling           92 x  160,  256  ->   46 x   80,  256
 28 Conv2d            46 x   80,  256  ->   46 x   80,  256
 29 ActivationLeaky   46 x   80,  256  ->   46 x   80,  256
 30 Route             46 x   80,  128  ->   46 x   80,  128
 31 Conv2d            46 x   80,  128  ->   46 x   80,  128
 32 ActivationLeaky   46 x   80,  128  ->   46 x   80,  128
 33 Conv2d            46 x   80,  128  ->   46 x   80,  128
 34 ActivationLeaky   46 x   80,  128  ->   46 x   80,  128
 35 Route             46 x   80,  256  ->   46 x   80,  256
 36 Conv2d            46 x   80,  256  ->   46 x   80,  256
 37 ActivationLeaky   46 x   80,  256  ->   46 x   80,  256
 38 Route             46 x   80,  512  ->   46 x   80,  512
 39 Pooling           46 x   80,  512  ->   23 x   40,  512
 40 Conv2d            23 x   40,  512  ->   23 x   40,  512
 41 ActivationLeaky   23 x   40,  512  ->   23 x   40,  512
 42 Conv2d            23 x   40,  512  ->   23 x   40,  256
 43 ActivationLeaky   23 x   40,  256  ->   23 x   40,  256
 44 Conv2d            23 x   40,  256  ->   23 x   40,  512
 45 ActivationLeaky   23 x   40,  512  ->   23 x   40,  512
 46 Conv2d            23 x   40,  512  ->   23 x   40, 1221
 47 Yolo              23 x   40, 1221  ->   23 x   40, 1221
 48 Route             23 x   40,  256  ->   23 x   40,  256
 49 Conv2d            23 x   40,  256  ->   23 x   40,  128
 50 ActivationLeaky   23 x   40,  128  ->   23 x   40,  128
 51 Upsample          23 x   40,  128  ->   46 x   80,  128
 52 Route             46 x   80,  384  ->   46 x   80,  384
 53 Conv2d            46 x   80,  384  ->   46 x   80,  256
 54 ActivationLeaky   46 x   80,  256  ->   46 x   80,  256
 55 Conv2d            46 x   80,  256  ->   46 x   80, 1221
 56 Yolo              46 x   80, 1221  ->   46 x   80, 1221
===========================================================

GPU free memory: 1296.98 mb.
New NetworkRT (TensorRT v7.1)
Float16 support: 1
Int8 support: 0
DLAs: 0
create execution context
Input/outputs numbers: 3
input index = 0 -> output index = 2
Data dim: 1 3 736 1280 1
Data dim: 1 1221 46 80 1
RtBuffer 0   dim: Data dim: 1 3 736 1280 1
RtBuffer 1   dim: Data dim: 1 1221 23 40 1
RtBuffer 2   dim: Data dim: 1 1221 46 80 1

====== CUDNN inference ======
Data dim: 1 3 736 1280 1
Data dim: 1 1221 46 80 1

===== TENSORRT inference ====
Data dim: 1 3 736 1280 1
Data dim: 1 1221 46 80 1

=== OUTPUT 0 CHECK RESULTS ==
CUDNN vs correct | OK ~0.02
TRT   vs correct | OK ~0.02
CUDNN vs TRT     | OK ~0.02

=== OUTPUT 1 CHECK RESULTS ==
CUDNN vs correct | OK ~0.02
TRT   vs correct | OK ~0.02
CUDNN vs TRT     | OK ~0.02

Then I wanted to test it with the demo, ./demo yolo4tiny_fp32.rt test_video.mp4 y, and got this result:

detection
yolo4tiny_fp32.rt
New NetworkRT (TensorRT v7.1)
Float16 support: 1
Int8 support: 0
DLAs: 0
create execution context
Input/outputs numbers: 3
input index = 0 -> output index = 2
Data dim: 1 3 736 1280 1
Data dim: 1 1221 46 80 1
RtBuffer 0   dim: Data dim: 1 3 736 1280 1
RtBuffer 1   dim: Data dim: 1 1221 23 40 1
RtBuffer 2   dim: Data dim: 1 1221 46 80 1
camera started

A batch size greater than nBatches cannot be used
tkDNN/include/tkDNN/DetectionNN.h:104
Aborting...

After some looking around I added a cout in DetectionNN.h, and it seems that nBatches = 0. I could really use some help fixing this issue.
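For reference, the abort comes from the batch-size guard around tkDNN/include/tkDNN/DetectionNN.h:104. The logic boils down to something like the following (a Python sketch of the C++ check; the function and parameter names here are hypothetical). With nBatches stuck at 0, even a single frame trips it:

```python
def check_batchsize(cur_batches, n_batches):
    # Sketch of the guard at tkDNN/include/tkDNN/DetectionNN.h:104
    # (the real code is C++; names here are illustrative only)
    if cur_batches > n_batches:
        raise RuntimeError("A batch size greater than nBatches cannot be used")

# With nBatches = 0 even one image fails, which explains the abort in the demo:
try:
    check_batchsize(1, 0)
    aborted = False
except RuntimeError:
    aborted = True
```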

Tacokeet avatar Oct 05 '20 10:10 Tacokeet

Hi @Tacokeet, that seems weird to me. I have tried the exact same thing as you and everything worked fine. It seems that n_batch is set to 0, but the default value is 1 in the code. Have you modified any part of the code, even by mistake? Can you check with git status?

mive93 avatar Oct 09 '20 09:10 mive93

my git status:

git status
On branch master
Your branch is up to date with 'origin/master'.

Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)

        modified:   tests/darknet/cfg/yolo4tiny.cfg
        modified:   tests/darknet/names/coco.names

no changes added to commit (use "git add" and/or "git commit -a")

my modified yolo4tiny.cfg:

[net]
# Testing
#batch=1
#subdivisions=1
# Training
batch=64
subdivisions=8
width=1280
height=736
channels=3
momentum=0.9
decay=0.0005
angle=0
saturation = 1.5
exposure = 1.5
hue=.1
flip=0

learning_rate=0.00261
burn_in=1000
max_batches = 804000
policy=steps
steps=643200,723600
scales=.1,.1

[convolutional]
batch_normalize=1
filters=32
size=3
stride=2
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=64
size=3
stride=2
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=leaky

[route]
layers=-1
groups=2
group_id=1

[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=leaky

[route]
layers = -1,-2

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=leaky

[route]
layers = -6,-1

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky

[route]
layers=-1
groups=2
group_id=1

[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=leaky

[route]
layers = -1,-2

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[route]
layers = -6,-1

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[route]
layers=-1
groups=2
group_id=1

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky

[route]
layers = -1,-2

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[route]
layers = -6,-1

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

##################################

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[convolutional]
size=1
stride=1
pad=1
filters=1221
activation=linear



[yolo]
mask = 3,4,5
anchors = 10,14,  23,27,  37,58,  81,82,  135,169,  344,319
classes=402
num=6
jitter=.3
scale_x_y = 1.05
cls_normalizer=1.0
iou_normalizer=0.07
iou_loss=ciou
ignore_thresh = .7
truth_thresh = 1
random=0
resize=1.5
nms_kind=greedynms
beta_nms=0.6

[route]
layers = -4

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[upsample]
stride=2

[route]
layers = -1, 23

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[convolutional]
size=1
stride=1
pad=1
filters=1221
activation=linear

[yolo]
mask = 1,2,3
anchors = 10,14,  23,27,  37,58,  81,82,  135,169,  344,319
classes=402
num=6
jitter=.3
scale_x_y = 1.05
cls_normalizer=1.0
iou_normalizer=0.07
iou_loss=ciou
ignore_thresh = .7
truth_thresh = 1
random=0
resize=1.5
nms_kind=greedynms
beta_nms=0.6

my modified coco.names:

A01100_OB201PS
A01100S
A01120_OB201PS
A01120S
A01130_OB201PS
A01130S
A0150S
A0180S
A01_005
A01_015
A01_020
A01_030
A01_030ZB
A01_050
A01_050F
A01_060
A01_070
A01_080
A01_080F
A01_100
A01_120
A01_130
A02_030
A02_030ZE
A02_050
A04_030
A05_030
B01
B02
B03
B04
B05
B06
B06_OB401
B07
BB02L
BB02R
BB05
BB100L
BB100R
BB12L
BB12R
BE01
BE01A
BE02A
BE04A
BE04A2
BE04C1
BE04C2
BE04C3
BE04D1
BE04D2
BE04E
BE04F
BE04FA
BE04G
BE04H
BE04I
BE06
BE06A
BE07
BE08
BE08A
BT01
BT01A
BT03
BT04
BT05
BT05A
BT07
BT08
BT10
BT11
BT12
BT13
BT15L1
BT15L2
BT15R1
BT15R2
BT16A_EN
BT16A_NL
BT16B_EN
BT16B_NL
BT17
BT19
BT19A
BT21
BT21A
BT23
BT24
BT25
BT25A
BT27
BT29
BT31
BT32
BW101
BW101S104
BW101SP03
BW101SP18
BW111
BW111P
BW112
BW112P
BW201
BW201B
BW201R
BW202
BW203L
BW204A
BW204B
BW204E
BW205A
BW205B
BW205E
BW210
BW210A
BW501
BW501B
BW501L
BW501LH
BW501R
C01
C01A
C01F1
C01F2
C02
C02_OB54F
C02_OB705F
C03
C03_OB54F
C04L
C04R
C05
C06
C07
C07_OB
C07A
C07B
C08
C09
C10
C11
C12
C13
C14
C15
C15_OB
C16
C17
C18
C19
C20
C21
C22
C22A
C22B
D01
D01F
D02FL
D02FR
D02L
D02R
D03
D04
D04_OB11
D05L
D05R
D06L
D06R
D07
D102L
D102R
D103L
D103R
E01
E01_OB
E01ZB
E01ZBE
E02
E03
E03_OB
E04
E05
E06
E07
E07B
E07C
E08AH
E08
E08_OB
E08A
E08B
E08C
E08C_OB105
E08D
E08E
E08G
E08H
E08J
E08K
E08L
E08M
E08N
E08N_OB201P
E08O
E08R2
E08R1
E09
E09_OB304
E09_OB308
E10
E105
E11ZE
E12K
E12P
E13
E201
E201ZB
F01
F02
F03
F03_OB206P2S
F03_OB206PS
F03_OB411
F04
F05
F06
F07
F08
F10
F11
F12
F13
F14
F15
F16
F17
F18
F19
F20
F21
F22
G01
G02
G03
G04
G05
G06
G07
G07ZB
G07ZE
G08
G09
G10
G11
G11F_OB505
G12
G12A
G12B
G13
G14
H01
H01A1
H01A2
H01A3
H01A4
H01A5
H01A6
H02
H02A1
H02A2
H02A3
H02A4
H02A5
J01
J02
J03
J04
J05
J07D
J07S
J08
J09
J10
J11
J14
J15
J16
J17
J18
J19
J20
J20_OB612F
J21
J22
J23
J24
J25
J26
J27
J28
J29
J30
J31
J32
J33
J34
J35
J36
J37
J38
J39
L01
L02
L02F
L03A
L03B
L03C
L08
L08_OB054
L09L1
L09L2
L09L3
L09L4
L09ML
L09MR
L09R1
L09R2
L09R3
L09R4
L14
L20
L205
L207A
L21
L213
L301A
L301B
L304A
L304B
L305A
L305B
L306A
L306B
L307A
L307B
L403
LL51E
L51
L52E
L52
L54B
OB01
OB02
OB03
OB04
OB05
OB06
OB07
OB08
OB09
OB10
OB11
OB12
OB13
OB14
OB15
OB17R
OB17L
OB18R
OB18L
OB19
OB25
OB256P
OB301
OB303
OB304A
OB304C
OB304D
OB501L
OB501R
OB502
OB503
OB503_OB01
OB503_OB02
OB503_OB04
OB504
OB505
OB711L
OB711R
OB712L
OB712R
OB713L
OB713R
VHB21B
VHB21R
VHB22
VR09_01
VR09_02
VR09_03
VR09_04
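One consistency check worth running on a modified setup like this: classes=402 in the cfg must equal the number of entries in the names file. A minimal sketch (the tiny demo file below is hypothetical; the real check would point at tests/darknet/names/coco.names):

```python
import os
import tempfile

def names_count(path):
    """Count non-empty lines in a darknet .names file."""
    with open(path) as f:
        return sum(1 for line in f if line.strip())

# Demo with a tiny throwaway file; with the real file you would verify
# names_count("tests/darknet/names/coco.names") == 402  (classes= in the cfg)
with tempfile.NamedTemporaryFile("w", suffix=".names", delete=False) as f:
    f.write("A01100S\nB01\nC01\n")
    demo_path = f.name

count = names_count(demo_path)
os.remove(demo_path)
```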

Tacokeet avatar Oct 14 '20 14:10 Tacokeet

If you haven't solved it yet, have you tried setting the batch size from the command line?

mive93 avatar Jan 04 '21 14:01 mive93

@Tacokeet @mive93 : I am also facing the same problem. Did you fix it?

Sudhakar17 avatar Mar 17 '21 10:03 Sudhakar17