cornettoyu

8 comments by cornettoyu

Hi, you can refer to the [training log here](https://github.com/bytedance/fc-clip/issues/6#issuecomment-1789601736). We kept the best checkpoint in terms of the ADE20K PQ metric, as we also note that the last checkpoint usually tends...

Hi, you can check our provided demo script for how to insert user classes, like [here](https://github.com/bytedance/fc-clip/blob/main/demo/predictor.py#L124)
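For illustration only, a minimal sketch of the kind of parsing such a demo might do to turn a user-supplied vocabulary string into class names (the function name and behavior are assumptions, not the actual `predictor.py` code):

```python
def parse_user_classes(spec):
    """Split a comma-separated vocabulary string into clean class names.

    Example input: "cat, dog, traffic light". Empty entries and
    surrounding whitespace are discarded.
    """
    return [name.strip() for name in spec.split(",") if name.strip()]
```

The resulting list could then be handed to the model's text encoder as the open vocabulary for that run.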

> Hi, thanks for your great work. Could you provide the training log on the COCO dataset please? I'd like to compare my reproduction results to find out what went...

> Hi, I'm also reproducing the results. Can you show your training logs here? Here is mine: ![image](https://user-images.githubusercontent.com/77483744/274715496-e52e0aee-66e6-483d-bb2b-c11396921d5a.png) I use two GPUs with batch size=4 for training, and I'm unsure...

> Cannot reproduce the results of convnext on ADE20k. Has anybody successfully reproduced the result?

Can you provide your results here so we can look into what is wrong? We...

> > Cannot reproduce the results of convnext on ADE20k. Has anybody successfully reproduced the result?
>
> Can you provide your results here so...

Please see the attached [log file](https://github.com/bytedance/fc-clip/files/13231285/metrics.json) for a reference training/validation log of FC-CLIP with ConvNeXt-L. The provided checkpoint is at step 309999, with "panoptic_seg/PQ": 26.788164947280208. Furthermore, we ran the experiments...
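Picking the best checkpoint by ADE20K PQ from a metrics log can be sketched as below. This assumes a Detectron2-style `metrics.json` with one JSON object per line, where evaluation lines carry the `panoptic_seg/PQ` field quoted above; the exact log layout is an assumption here:

```python
import json

def best_checkpoint_step(metrics_path, key="panoptic_seg/PQ"):
    """Scan a JSON-lines metrics log; return (iteration, PQ) of the best eval."""
    best = None
    with open(metrics_path) as f:
        for line in f:
            record = json.loads(line)
            if key not in record:
                continue  # plain training-loss lines carry no eval metrics
            if best is None or record[key] > best[1]:
                best = (record.get("iteration"), record[key])
    return best
```

Running this over the attached log would show which evaluation step produced the released checkpoint.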

Hi, the missing parameters belong to the frozen CNN CLIP part, which should be loaded from the pre-trained OpenCLIP weights. That's why the provided checkpoint does not contain them...
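The situation described above (a released checkpoint that omits frozen backbone weights, which get filled in from a separately downloaded CLIP state dict) can be sketched with plain dictionaries; the `backbone.clip_model.` prefix is a hypothetical name for illustration, not FC-CLIP's actual key layout:

```python
def fill_missing_params(checkpoint, clip_weights, prefix="backbone.clip_model."):
    """Return a complete state dict: trained params from `checkpoint`,
    plus frozen CLIP params (namespaced under `prefix`) from `clip_weights`.
    """
    merged = dict(checkpoint)
    for name, tensor in clip_weights.items():
        full_name = prefix + name
        if full_name not in merged:  # only fill keys the release omitted
            merged[full_name] = tensor
    return merged
```

Loading the merged dict with `strict=True` would then succeed, since the "missing" frozen keys are supplied by the CLIP download rather than the release checkpoint.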