
training details

Open lsm140 opened this issue 1 year ago • 2 comments

The paper says SparseInst is built on Detectron2 [42] and trained on 8 GPUs with a total of 64 images per mini-batch. Q1: Does it use FP16 or not?

Q2: Do the small and large backbones use the same batch size per GPU?

Q3: Testing is done on the 2080Ti, but the training hardware is unclear. Was it an RTX 3090?

lsm140 · Aug 10 '23

Hi, thanks for your interest in SparseInst! We'd like to clarify your questions as follows:

Q1: SparseInst supports both FP32 and FP16. The results and performance reported in our paper are based on FP32.

Q2: All models, including those with larger backbones, are trained with the same batch size and schedule.

Q3: We used the 2080Ti for all the experiments in our paper; the 3090 is only reported for later experiments.
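For readers who want to try the FP16 path mentioned in Q1: Detectron2-based projects typically toggle mixed precision through the solver's AMP flag, and the 64-image mini-batch from the paper maps to 8 images per GPU on 8 GPUs. A hedged launch sketch follows; the script and config paths are assumptions based on the typical SparseInst repository layout and may differ in your checkout.

```shell
# Sketch only: train_net.py and the config path follow the usual SparseInst
# repo layout (assumption); adjust to your checkout.
# 64 images per mini-batch over 8 GPUs = 8 images per GPU.
# SOLVER.AMP.ENABLED True turns on mixed precision (FP16);
# the paper's reported results use FP32, i.e. leave it False to reproduce them.
python tools/train_net.py \
    --config-file configs/sparse_inst_r50_giam.yaml \
    --num-gpus 8 \
    SOLVER.IMS_PER_BATCH 64 SOLVER.AMP.ENABLED True
```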

wondervictor · Aug 13 '23

Dear author, when training SparseInst on my own clothing dataset recently, I found that the predicted mask edges often show wavy lines (the training-set annotations do not have this). What might cause this? To track down the cause, I also ran inference on my own data with the COCO pre-trained model and found the same wavy lines. Is this phenomenon inherent to the algorithm?

lsm140 · Aug 24 '23