EfficientFormer

Training log

LMMMEng opened this issue 3 years ago · 1 comment

Thanks for your excellent work!

Could you please provide the training log for reference?

LMMMEng — Jan 17 '23

Hello, here are the training script and log from my reproduction, based on the repository, the paper, and the author's replies in the issues.

I ran the code on 8 GPUs with a per-GPU batch size of 256 (2048 in total). Training efficientformerv2_s0 for 300 epochs, I got 75.57% top-1 accuracy on the 50,000 ImageNet validation images, only about 0.13% below the number reported in the paper (75.7%). Hope this helps.
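The numbers above can be sanity-checked with a line of arithmetic (not part of the original scripts; just the global batch size and the accuracy gap restated):

```shell
#!/usr/bin/env bash
# Global batch size: 8 GPUs x 256 samples per GPU.
gpus=8
per_gpu_batch=256
effective_batch=$((gpus * per_gpu_batch))
echo "$effective_batch"   # 2048

# Gap between the paper's 75.7% and the reproduced 75.57% top-1.
gap=$(awk 'BEGIN { printf "%.2f", 75.7 - 75.57 }')
echo "$gap"               # 0.13
```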

This is not official.

Script from the repo:

#!/usr/bin/env bash
# Usage: <script> <model-name> <num-gpus>
MODEL=$1
nGPUs=$2

python -m torch.distributed.launch --nproc_per_node=$nGPUs --use_env main.py --model $MODEL \
    --data-path /mnt/Data/ILSVRC2012 --batch-size 256 --num_workers 16
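For reference, the script takes the model name and GPU count as positional arguments. A minimal sketch of the invocation, assuming the script above is saved as dist_train.sh (a hypothetical filename; the repo may name it differently), echoing the command it would launch:

```shell
#!/usr/bin/env bash
# Hypothetical wrapper using the same positional-argument convention as the
# script above ($1 = model name, $2 = GPU count); defaults match the run
# described in this comment (efficientformerv2_s0 on 8 GPUs).
MODEL=${1:-efficientformerv2_s0}
nGPUs=${2:-8}
echo "python -m torch.distributed.launch --nproc_per_node=$nGPUs --use_env main.py --model $MODEL"
```

The actual run would then be `bash dist_train.sh efficientformerv2_s0 8`.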

Log file: log.txt

hychiang-git — Sep 19 '23