Karndeep Singh

Results: 21 comments of Karndeep Singh

> > Can CLIP help generate embeddings for products and combine them into one fixed-length vector that represents the product embedding? > > The answer is yes. CLIP...
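One way to read "combine into one fixed-length vector" is to fuse the image and text embeddings of a product. A minimal sketch of that idea, with random numpy vectors standing in for real CLIP encoder outputs (the averaging fusion strategy is an assumption, not something stated in the thread):

```python
import numpy as np

def l2_normalize(v, eps=1e-12):
    # Scale a vector to unit length so image/text embeddings are comparable.
    return v / (np.linalg.norm(v) + eps)

def product_embedding(image_emb, text_emb):
    """Average the L2-normalized image and text embeddings, then
    re-normalize, so every product maps to one fixed-length vector."""
    fused = (l2_normalize(image_emb) + l2_normalize(text_emb)) / 2.0
    return l2_normalize(fused)

rng = np.random.default_rng(0)
img_vec = rng.standard_normal(512)  # stand-in for a CLIP image-encoder output
txt_vec = rng.standard_normal(512)  # stand-in for a CLIP text-encoder output

emb = product_embedding(img_vec, txt_vec)
print(emb.shape)  # (512,) – one fixed-length product vector
```

Concatenation (giving a 1024-d vector) is an equally valid choice; averaging keeps the dimensionality of the original CLIP space.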

> > How can I evaluate it on my already trained CLIP model? > > You can evaluate your model with a downstream task, e.g., classification or retrieval, based on...
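For the retrieval route mentioned above, a common metric is recall@k: the fraction of queries whose top-k nearest gallery items share the query's label. A self-contained sketch with toy vectors standing in for embeddings from the trained model (the data and function names are illustrative):

```python
import numpy as np

def recall_at_k(query_emb, gallery_emb, query_labels, gallery_labels, k=1):
    """Fraction of queries whose k most cosine-similar gallery items
    contain at least one item with the same label."""
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    g = gallery_emb / np.linalg.norm(gallery_emb, axis=1, keepdims=True)
    sims = q @ g.T                           # cosine similarity matrix
    topk = np.argsort(-sims, axis=1)[:, :k]  # indices of the k best matches
    hits = [(gallery_labels[idx] == lbl).any()
            for idx, lbl in zip(topk, query_labels)]
    return float(np.mean(hits))

# Toy data: two classes, gallery items near their class direction.
gallery = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
g_labels = np.array([0, 0, 1, 1])
queries = np.array([[0.95, 0.05], [0.05, 0.95]])
q_labels = np.array([0, 1])

score = recall_at_k(queries, gallery, q_labels, g_labels, k=1)
print(score)  # 1.0 – every query retrieves a same-class item first
```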

Hello @MokhtarOuardi @Amrou7 Are you both able to get the inference for the new images?

> I've found the key to the problem!!!
> 1. When generating, change `output_tensor = F.log_softmax(output_tensor[0], dim=1)` to `output_tensor = F.softmax(output_tensor[0], dim=1)`.
> 2. The generated `output_tensor` has shape 1/4/H/W. The white region (the high-probability values) in `output_tensor[0,0,:,:]` represents the background, while I had initialized everything to `-float(inf)` at the start, which breaks the final fusion step. The detail-handling code is as follows:
>
> ```
> def ClothSegMultiGen(self,img_cv,size=-1,):
>
>     img = Image.fromarray(cv2.cvtColor(img_cv, cv2.COLOR_BGR2RGB))
>     w,h =...
> ```
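The fix above hinges on the difference between the two activations: `log_softmax` returns log-probabilities (all ≤ 0), while `softmax` returns probabilities in [0, 1], which is what the "white region = high probability" reading of the 1×4×H×W output assumes. A numpy analogue of the PyTorch `F.softmax` / `F.log_softmax` calls (not the repo's code):

```python
import numpy as np

def softmax(x, axis):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def log_softmax(x, axis):
    shifted = x - x.max(axis=axis, keepdims=True)
    return shifted - np.log(np.exp(shifted).sum(axis=axis, keepdims=True))

logits = np.random.default_rng(0).standard_normal((1, 4, 2, 2))  # 1/C/H/W

probs = softmax(logits, axis=1)      # usable as per-pixel class probabilities
logp = log_softmax(logits, axis=1)   # always <= 0, not directly "white"

assert np.all((probs >= 0) & (probs <= 1))
assert np.all(logp <= 0)
assert np.allclose(probs.sum(axis=1), 1.0)  # channels sum to 1 per pixel
```

Fusing masks that were initialized to `-float(inf)` then fails because those sentinel values never behave like the [0, 1] probabilities the rest of the pipeline expects.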

> What limits you from doing so? You can simply calculate it using scikit-learn; just put it inside the training loop. But I wonder what you mean by precision...

Hi, thanks for the answers @PsVenom. I would like to ask a few more questions: 1. I have images of the product and its attributes like gender,...

> Hi! I've developed a parser to transform from brat standoff to SpERT format; it loses some data due to the complexity of brat standoff and the simplicity of the...

> Hi, please execute `bash ./scripts/fetch_datasets.sh` to download the preprocessed datasets. The datasets are then placed under `data/datasets`. You should follow the format used in the preprocessed datasets. Each sample...

> @karndeepsingh did you find any way to tackle this problem? I'm also stuck in the same issue about how to join these results. Nope! Still figuring it out. If...