Results: 9 issues

If I have a lot of scanned 3D face data captured with a 3D camera, how do I train PRNet? What is the pipeline, and how do I build the new...
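Not an official recipe, but PRNet regresses a UV position map from a cropped face image, so the usual pipeline is: register each scan to a common 3DMM topology, render a ground-truth UV position map per scan, crop the matching RGB image, and train with a weighted MSE loss. Below is a minimal sketch of the training side under that assumption; the file layout, the `FaceUVDataset` naming convention, and the `model` you pass in are placeholders, not part of the released PRNet code.

```python
# Minimal sketch: train a PRNet-style network on (image, UV position map) pairs.
# Paths, file naming, and the model class are hypothetical placeholders.
import glob
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class FaceUVDataset(Dataset):
    """Loads 256x256 face crops and their 256x256x3 UV position maps."""
    def __init__(self, root):
        self.items = sorted(glob.glob(f"{root}/*_img.npy"))  # assumed naming scheme

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        img = np.load(self.items[idx])                         # HxWx3, float in [0, 1]
        uv = np.load(self.items[idx].replace("_img", "_uv"))   # HxWx3 position map
        return (torch.from_numpy(img).permute(2, 0, 1).float(),
                torch.from_numpy(uv).permute(2, 0, 1).float())

def train(model, root, weight_mask, epochs=10, lr=1e-4, device="cuda"):
    """weight_mask: 1x1xHxW tensor that up-weights eyes/nose/mouth regions."""
    loader = DataLoader(FaceUVDataset(root), batch_size=16, shuffle=True, num_workers=4)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.to(device).train()
    weight_mask = weight_mask.to(device)
    for epoch in range(epochs):
        for img, uv in loader:
            img, uv = img.to(device), uv.to(device)
            loss = ((model(img) - uv) ** 2 * weight_mask).mean()  # weighted MSE
            opt.zero_grad()
            loss.backward()
            opt.step()
        print(f"epoch {epoch}: loss {loss.item():.4f}")
```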

Once we have trained the quantized model, how do we deploy it on a CPU backend?


Once we have trained the quantized model, how do we deploy it?
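Not a repository-specific answer, but here is a minimal sketch of the stock PyTorch path for running an int8 model on an x86 CPU with the fbgemm backend (eager-mode static quantization plus TorchScript serialization). `MyModel` and the dummy calibration data are placeholders, and converting an LSQ/QAT checkpoint's learned step sizes into these kernels may require extra, repo-specific steps.

```python
# Minimal sketch: eager-mode static quantization and CPU deployment with fbgemm.
import torch
import torch.nn as nn

class MyModel(nn.Module):  # placeholder float model
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.conv = nn.Conv2d(3, 8, 3)
        self.relu = nn.ReLU()
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.relu(self.conv(self.quant(x))))

torch.backends.quantized.engine = "fbgemm"                 # x86 CPU backend
model = MyModel().eval()
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")
prepared = torch.quantization.prepare(model)
with torch.no_grad():                                      # calibration pass (dummy data here)
    prepared(torch.randn(8, 3, 32, 32))
quantized = torch.quantization.convert(prepared)           # int8 weights + quantized kernels

scripted = torch.jit.script(quantized)                     # serialize for deployment
scripted.save("model_int8_cpu.pt")

# At inference time on the CPU host:
loaded = torch.jit.load("model_int8_cpu.pt", map_location="cpu")
out = loaded(torch.randn(1, 3, 32, 32))
```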

torchlsq/csrc/ops/lsq.cpp:137:26: error: expected constructor, destructor, or type conversion before ‘(’ token
    SO_TORCH_LIBRARY_FRAGMENT(torchlsq, m)

I can't find the training data files "BLIVA/bliva/data/llava/bliva_llava_150k.json" and "BLIVA/bliva/data/ocrVQA/cleaned_train_dataset.json". Can you tell me how to download them? Thanks!

In your paper, in the table "Results of adding individual techniques of our framework in text-rich VQA benchmarks", I see improvement values of +7.40%, +31.72%, etc. I want to know...

Can you provide the run scripts for InstructBLIP so that we can train and evaluate it?

Can you provide BoxSup-C2_Lbase_CLIP_R5021k_640b64_4x_mlr.pth, Detic_LbaseI_CLIP_R5021k_640b64_4x_ft4x_max-size_mlr.pth, and Detic_OVCOCO_CLIP_R50_1x_max-size_caption_mlr.pth so that we can evaluate directly?

I want to know where you got the file resource/coco_there_is_a_cls_vitb32.pt. If you created it yourself, please tell me how to do that. Thanks a lot!
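Not the authors' script, but judging from the file name it looks like stacked CLIP ViT-B/32 text embeddings of a "there is a {class}" prompt over the COCO categories, which is how Detic-style zero-shot classifier weights are typically built. A minimal sketch with the openai `clip` package follows; the prompt template, class list, and normalization are guesses inferred from the name.

```python
# Sketch: encode one prompt per COCO class with CLIP ViT-B/32 and save the
# stacked, L2-normalized text embeddings. Prompt and class list are assumptions.
import clip
import torch

coco_classes = ["person", "bicycle", "car"]  # ... fill in the full 80-class COCO list
device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

prompts = [f"there is a {c}" for c in coco_classes]
tokens = clip.tokenize(prompts).to(device)

with torch.no_grad():
    emb = model.encode_text(tokens).float()
emb = emb / emb.norm(dim=-1, keepdim=True)   # L2-normalize, shape (num_classes, 512)

torch.save(emb.cpu(), "coco_there_is_a_cls_vitb32.pt")
```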