
Macaw-LLM: Multi-Modal Language Modeling with Image, Video, Audio, and Text Integration

13 Macaw-LLM issues

Dear Author, I would like to express my sincere gratitude for your open-source contributions. Your neural network model has left a deep impression on me. It seems that your model...

Thank you for your great work! Could you share the code used to generate the instruction data? Looking forward to your reply!

Hi, the README mentions several different LLM backbones, but the paper seems to reference only LLaMA, and a brief code search didn't turn up any mention of Vicuna or Bloom...

Remove the duplicated `clip_config.projection_dim`. This is a potential bug if the code is expected to use a different variable.
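The kind of duplication described here often looks like the minimal sketch below; the class and variable names are hypothetical illustrations, not the repository's actual code. The same config value ends up as both the input and output dimension of a projection, where the second occurrence was probably meant to be a different variable.

```
import torch.nn as nn

# Hypothetical sketch of the duplication described above; MultimodalProjector,
# clip_config, and llm_config are illustrative names, not Macaw-LLM's code.
class MultimodalProjector(nn.Module):
    def __init__(self, clip_config, llm_config):
        super().__init__()
        # Suspect pattern: the same config value used twice.
        # self.proj = nn.Linear(clip_config.projection_dim, clip_config.projection_dim)
        # Likely intent: project from the CLIP embedding space into the LLM's.
        self.proj = nn.Linear(clip_config.projection_dim, llm_config.hidden_size)

    def forward(self, x):
        return self.proj(x)
```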

Multiple requirement versions are not specified. This is leading to problems during install.

```
protobuf
scikit-learn
moviepy
ffmpeg-python
tqdm
pandas
opencv-python
clip
openai-whisper
appdirs
loralib
bitsandbytes
black
black[jupyter]
fire
gradio
...
```
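One common remedy is to pin the packages that are known to work. The version numbers below are illustrative placeholders (real releases of each package, but not versions verified against Macaw-LLM):

```
protobuf==3.20.3
scikit-learn==1.3.2
moviepy==1.0.3
openai-whisper==20231117
...
```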

preprocess_data_unsupervised.py", line 105, in preprocess_alpaca_to_tensor_dataset texts = PROMPT_DICT['prompt_input'].format(e['instruction'], e['input']) if e['input'] != "" else PROMPT_DICT['prompt_no_input'].format(e['instruction'])

Hi, do you have any example of deploying Macaw-LLM using Docker?

Hi, dear authors: Thanks for sharing this great work. I noticed that you have uploaded the training and evaluation code, but there is no demo code, such as for VQA. It would be...

Thank you very much for your outstanding work. I encountered the following problem when loading model weights. When I used torch.load to load pytorch_model.bin, I found that this part of...
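For debugging cases like this, one way to inspect what `torch.load` actually returns is sketched below. It assumes a single-file checkpoint named `pytorch_model.bin`; sharded checkpoints ship an index JSON instead and are better reassembled via transformers' `from_pretrained`.

```
import torch

# Load the raw checkpoint on CPU and list a few tensors to see what is present.
state_dict = torch.load("pytorch_model.bin", map_location="cpu")
print(type(state_dict), "with", len(state_dict), "entries")
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape), tensor.dtype)
```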

I used the following code to get the pretrained models:

```
from transformers import CLIPModel, LlamaModel
clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch16")

from transformers import WhisperForConditionalGeneration
whisper_model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-base")

llama7b_model = LlamaModel.from_pretrained("decapoda-research/llama-7b-hf")

clip_model.save_pretrained('trained_models/clip_model/')
whisper_model.save_pretrained('trained_models/whisper_model/')
...
```
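As a sanity check (my addition, not part of the issue), the saved checkpoints can be reloaded from the local directories. The LLaMA path below is a hypothetical guess by analogy with the two saves shown above:

```
from transformers import CLIPModel, WhisperForConditionalGeneration, LlamaModel

# Reload from the local save directories to confirm the files were written
# completely; 'trained_models/llama_model/' is a hypothetical path.
clip_model = CLIPModel.from_pretrained("trained_models/clip_model/")
whisper_model = WhisperForConditionalGeneration.from_pretrained("trained_models/whisper_model/")
llama7b_model = LlamaModel.from_pretrained("trained_models/llama_model/")
```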