Jiaming Han
Hi @manushree635, please check this page: https://zenodo.org/records/4783391
The converted annotation file is at: https://huggingface.co/datasets/csuhan/OneLLM_Eval/blob/main/audio/clothov2/eval_clothocap_ann.json Sorry for the confusion about LTU. We will update the table soon.
For inference, you can refer to our demo code: https://github.com/csuhan/OneLLM/blob/66cecedaee92e7e2b6d1afc390c466b58e7a7e58/demos/multi_turn_mm.py#L43-L144 For the training code and data, we plan to release them within the next 1-2 months.
Hi @yan9qu, we have just released the training code. Feel free to let us know if you need any help.
numerical data. Please check our appendix for details.
We will release the training code within one month.
Hi @Yanllan, we have just released the training code. Feel free to let us know if you need any help.
We have implemented LoRA tuning for plain LLaMA at: https://github.com/Alpha-VLLM/LLaMA2-Accessory/blob/main/accessory/model/LLM/llama_peft.py You can 1. add LoRA layers to [onellm.py](https://github.com/csuhan/OneLLM/blob/main/model/LLM/onellm.py), and 2. freeze the LLM and turn on the LoRA layers in its `__init__`...
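A minimal sketch of the two steps above, assuming a standard PyTorch setup; `LoRALinear` and `mark_only_lora_trainable` are hypothetical names for illustration, not the OneLLM/LLaMA2-Accessory implementation, and should be adapted to the attention/FFN linears inside onellm.py:

```python
# Sketch only: wrap a frozen nn.Linear with a trainable low-rank (LoRA) update,
# then freeze the rest of the model and re-enable the LoRA parameters.
import math
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen base linear plus a trainable low-rank update: y = Wx + scale * B A x."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad = False          # freeze the original weight
        if self.base.bias is not None:
            self.base.bias.requires_grad = False
        self.lora_a = nn.Parameter(torch.zeros(r, base.in_features))
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))
        nn.init.kaiming_uniform_(self.lora_a, a=math.sqrt(5))  # B stays zero, so the update starts at 0
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scale


def mark_only_lora_trainable(model: nn.Module) -> None:
    """Step 2: freeze the LLM, then turn the LoRA parameters back on."""
    for name, param in model.named_parameters():
        param.requires_grad = "lora_" in name


if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(4096, 4096), r=8, alpha=16)
    mark_only_lora_trainable(layer)
    print([n for n, p in layer.named_parameters() if p.requires_grad])  # ['lora_a', 'lora_b']
```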
@eugenelet Thanks for pointing that out! The current code was adapted from the inference code, so we missed some functions such as parameter freezing. We will fix it soon. For...
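For reference, parameter freezing of this kind is usually just a loop over `named_parameters`; a minimal sketch follows, where the prefix names (`llma`, `projector`) are stand-ins for illustration, not the actual OneLLM training configuration:

```python
# Sketch only: freeze every parameter, then re-enable the modules you want to train.
import torch.nn as nn


def freeze_by_prefix(model: nn.Module, trainable_prefixes: tuple[str, ...]) -> None:
    """Disable gradients for all parameters except those whose name starts with a trainable prefix."""
    for name, param in model.named_parameters():
        param.requires_grad = name.startswith(trainable_prefixes)


# Toy usage with placeholder module names:
model = nn.ModuleDict({"llma": nn.Linear(8, 8), "projector": nn.Linear(8, 8)})
freeze_by_prefix(model, trainable_prefixes=("projector",))
```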
Hi @fredfyyang, we use the pre-processed dataset at: https://huggingface.co/datasets/pscotti/naturalscenesdataset Each fMRI sample is converted into a 1-D tensor, e.g. a tensor with shape [1, 1536]. For details, you may...
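As a rough illustration of the expected input shape, here is a small sketch; the random data and the reshape are assumptions for demonstration, not the NSD preprocessing pipeline from the linked dataset:

```python
# Sketch only: one preprocessed fMRI sample as a vector of 1536 values,
# reshaped to the [1, 1536] layout mentioned above.
import torch

voxels = torch.randn(1536)             # stand-in for one preprocessed fMRI sample
fmri_input = voxels.reshape(1, 1536)   # leading dim of 1, as in the reply
print(fmri_input.shape)                # torch.Size([1, 1536])
```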