ZuoXiang
@mayidu The Keras framework is inherently slower than Caffe or Darknet.
I get the same error @shenglih
I built a model to load the pretrained model's weights like this: `model = NASNetLarge((img_rows, img_cols, img_channels), use_auxiliary_branch=True, include_top=True)`, but I get this error: Traceback (most recent call last): File...
Here are all my variables:
```
weights_file = 'NASNet-CIFAR-10.h5'
lr_reducer = ReduceLROnPlateau(factor=np.sqrt(0.5), cooldown=0, patience=5, min_lr=0.5e-5)
csv_logger = CSVLogger('NASNet-objction-classfication.csv')
model_checkpoint = ModelCheckpoint(weights_file, monitor='val_predictions_acc', save_best_only=True,
                                   save_weights_only=True, mode='max')
batch_size = 128
nb_classes =...
```
When I set `use_auxiliary_branch=False, include_top=False` and add some code to my script, the model trains successfully. But another problem is that I can only set the batch size to 16, otherwise...
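For reference, attaching a custom classification head after loading a model with `include_top=False` can be sketched as below. This is a minimal illustration only: it uses a small stand-in convolutional base instead of the actual NASNetLarge (which is too heavy to run here), and the layer sizes are arbitrary assumptions, not values from this thread.

```python
import numpy as np
from tensorflow.keras import layers, models

# Stand-in for a headless base model; in practice this would be
# something like NASNetLarge(..., include_top=False).
inputs = layers.Input(shape=(32, 32, 3))
x = layers.Conv2D(8, 3, padding='same', activation='relu')(inputs)

# Custom top: pool the spatial feature map to one vector per image,
# then classify with a softmax layer (10 classes as in CIFAR-10).
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(10, activation='softmax', name='predictions')(x)

model = models.Model(inputs, outputs)
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

preds = model.predict(np.zeros((2, 32, 32, 3)), verbose=0)
print(preds.shape)
```

The `name='predictions'` on the final layer matters if callbacks monitor a metric like `val_predictions_acc`, since Keras derives per-output metric names from layer names.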
Yes, I am using the generator function in imagenet_validation.py.
Hello @titu1994, how big was your batch size when you trained the large NASNet?
@chief7 I used a different dataset too, and I also see low GPU usage. Did you solve this problem?
I think the quality of your dataset is very important. You can also use shengmu and yunmu (Mandarin initials and finals) as the symbols; it can make the model converge more quickly.
@willswong11 Yes, the quality of the data used to train the pre-trained model is not very good.