Justin Kai
Very interesting work. I wonder when the code will be released?
Hi, I am facing a problem with the installation. I ran the following commands: git clone https://github.com/Shilin-LU/MACE.git, conda create -n mace python=3.10, conda activate mace, conda install pytorch==2.0.1 torchvision==0.15.2 pytorch-cuda=11.7 -c...
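For context, a minimal sketch of the setup sequence that comment describes; the `cd MACE` step and the conda channels at the end are assumptions, since the original comment is truncated after `-c`:

```
# clone the MACE repo and create a fresh conda environment
git clone https://github.com/Shilin-LU/MACE.git
cd MACE
conda create -n mace python=3.10
conda activate mace
# the channels below are an assumption; the comment is cut off after "-c"
conda install pytorch==2.0.1 torchvision==0.15.2 pytorch-cuda=11.7 -c pytorch -c nvidia
```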
GIPHY Celebrity Detector installation guide: after installing, you may encounter the errors 'APP_DATA_DIR' is None and 'APP_RECOGNITION_WEIGHTS_FILE' is None. All you have to do is export APP_DATA_DIR=celeb-detection-oss/examples/resources and APP_RECOGNITION_WEIGHTS_FILE=face_recognition/best_model_states.pkl, as in the snippet below.
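A minimal sketch of that fix; the paths come from the comment above and are assumed to be relative to the directory the detector is run from:

```
# point the detector at its bundled resources and recognition weights
export APP_DATA_DIR=celeb-detection-oss/examples/resources
export APP_RECOGNITION_WEIGHTS_FILE=face_recognition/best_model_states.pkl
```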
Such amazing work! I have a question about the paper. In the figure, the interpolated embedding uses a ratio obtained by dividing by EVmax, but in the loss in Section 3.3...
I have been debugging for almost three days... and sadly gradio_box still does not install correctly, since the version is not specified.
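A hedged sketch of the usual workaround for this kind of gradio mismatch; the pinned versions below are an assumption (versions commonly used by LLaVA-style demos), not something stated in these comments, so check the repo's own dependency file for the exact ones:

```
# gradio_box errors are often caused by a gradio/gradio_client version mismatch;
# the versions below are an assumption, not confirmed by this repo
pip uninstall -y gradio gradio_client
pip install "gradio==3.35.2" "gradio_client==0.2.9"
```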
### Question
Can't use the demo.
### Describe the issue
Issue: pip installs different versions from the ones mentioned in [issue#24](https://github.com/WisconsinAIVision/ViP-LLaVA/issues/24#issuecomment-2256470278). Command:
```
git clone https://github.com/WisconsinAIVision/ViP-LLaVA.git
conda create -n vip-llava python=3.10 -y
conda activate vip-llava...
```
### Describe the issue
After installing, I ran it in the terminal and this problem occurred. I fixed it by downgrading gradio, but got another error...
`(vip-llava) kai@user:~/project/ViP-LLaVA$...`
I really want to try your model. Could you provide a demo or any inference code guidelines so we can use your code?