Renrui Zhang
Same question. The paper says that the depth maps are transformed into disparity maps. Will this matter? @softmurata
@gordonhu608 @liyaowei-stu The quantitative results of CLIP-Adapter have been updated [here](https://github.com/gaopengcuhk/Tip-Adapter/blob/main/exp.log), listed as `CLIP-A`.
@June01 Thanks for this question. We follow the data pre-processing code in [CoOp](https://github.com/KaiyangZhou/CoOp). Their reproduced zero-shot CLIP may differ from the results in the original CLIP paper.
Thanks for your interest. In line 111, we run inference on the test set after each epoch of training.
@waleedgondal Thanks for pointing this out, and sorry for the mistake. Model selection after each epoch's training should instead be done on the validation set. In our experiments, this would not affect the...
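For clarity, here is a minimal sketch of that correction, i.e. selecting the best checkpoint on the validation set and touching the test set only once; all names here (`train_one_epoch`, `evaluate`, the loaders) are hypothetical placeholders, not identifiers from the repo:

```python
import copy

# Select the best model on the validation set, evaluate on test once.
best_acc, best_state = 0.0, None
for epoch in range(num_epochs):
    train_one_epoch(model, train_loader)
    val_acc = evaluate(model, val_loader)   # select on validation, not test
    if val_acc > best_acc:
        best_acc = val_acc
        best_state = copy.deepcopy(model.state_dict())

model.load_state_dict(best_state)
test_acc = evaluate(model, test_loader)     # report test accuracy only once
```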
Hi, thanks for your interest. Alpha and beta are both set to 1 as the tuning baseline. Alpha weighs the importance between CLIP's pre-trained knowledge and the few-shot knowledge. If the...
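As a minimal sketch of how the two hyperparameters enter the prediction, following the fusion described in the Tip-Adapter paper (the tensor names below are illustrative, not the repo's own variables):

```python
import torch

# test_feat:   [N, C]  L2-normalized CLIP features of test images
# cache_keys:  [C, NK] L2-normalized few-shot training features
# cache_vals:  [NK, K] one-hot labels of the few-shot samples
# clip_w:      [C, K]  CLIP's zero-shot classifier (text features)
alpha, beta = 1.0, 1.0                            # tuning baseline
affinity = test_feat @ cache_keys                 # cosine similarities
cache_logits = torch.exp(-beta * (1.0 - affinity)) @ cache_vals  # beta sharpens affinities
clip_logits = 100.0 * test_feat @ clip_w          # CLIP's pre-trained knowledge
logits = clip_logits + alpha * cache_logits       # alpha balances the two terms
```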
Thanks for your interest! We will complete this in a few days.
Thanks for your interest in our work. We have conducted zero-shot part segmentation on the ShapeNetPart dataset in [PointCLIP V2](https://arxiv.org/abs/2211.11682) by back-projecting CLIP's predictions from multi-view images. We will soon release...
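A rough, hypothetical sketch of the back-projection idea (`project_points`, `clip_segment_2d`, and `render_view` are placeholder helpers, not the released PointCLIP V2 API):

```python
import numpy as np

def zero_shot_part_seg(points, views, num_parts):
    votes = np.zeros((len(points), num_parts))
    for view in views:
        pix, vis = project_points(points, view)           # per-point pixel coords + visibility
        seg = clip_segment_2d(render_view(points, view))  # [H, W, num_parts] CLIP logits
        votes[vis] += seg[pix[vis, 1], pix[vis, 0]]       # back-project 2D logits to 3D points
    return votes.argmax(-1)                               # aggregate votes across views
```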
Thanks for your interest. We only require one GPU for efficient few-shot training.
Thanks for your interest! Yes, the queries predict 3 categories according to model.num_classes, but training uses only car samples, and the car dimension is selected for the final outputs. You...
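As an illustrative example of keeping only the car dimension at inference (the class index and shapes below are assumptions, not the repo's code):

```python
import torch

CAR_IDX = 0                                # assumed index of the car category
logits = torch.randn(50, 3)                # [num_queries, num_classes], dummy values
car_scores = logits.sigmoid()[:, CAR_IDX]  # keep only the car dimension
keep = car_scores.topk(10).indices         # e.g. take the top-scoring car queries
```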