185 comments of Jiaming Han

Thanks for your suggestion! We will add the inference demo soon~

Thanks for your suggestion. We use the [LLaMA2 LICENSE](https://github.com/csuhan/OneLLM/blob/main/LICENSE_llama2)

The parts of the codebase and the models derived from LLaMA2 are under LLaMA2's license, while the rest does not yet have an explicit license.

I think OneLLM can work with Mistral, but at the moment we only provide LLaMA-pretrained models.

Thanks for your suggestion. Supporting both English and Chinese with Qwen or ChatGLM is part of our future plan.

It is an internal package for reading data from our data server; it is not required for this repo.

Hi @vakadanaveen @imartinf @weiqingxin913 @GitJacobFrye This bug is caused by an update of the open_clip_torch package. Please pin open_clip_torch==2.23.0 in your environment. Refer to an email from Dr Stephen Hausler: >...
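A minimal check, assuming you can run Python in the same environment, that the pin took effect after reinstalling:

```python
from importlib.metadata import version

# After `pip install open_clip_torch==2.23.0`, the installed
# version should match the pin exactly.
installed = version("open_clip_torch")
assert installed == "2.23.0", f"unexpected open_clip_torch version: {installed}"
print("open_clip_torch", installed, "is installed")
```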

Hi @lixinghe1999 , our model is mainly trained on natural sounds such as bird chirping, dog barking, and trains passing, so it struggles to distinguish human speech. Here are two...

It may also be related to the sampling length. We sample 1024 frames in total. https://github.com/csuhan/OneLLM/blob/913638c0d385ff706aaed945ec87ee42bab4debb/data/data_utils.py#L81-L86
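For reference, here is a minimal sketch of fixed-length frame sampling, assuming the input is a `[T, n_mels]` filterbank tensor. The function name and the pad/crop strategy are assumptions for illustration only; see the linked `data_utils.py` lines for the actual code.

```python
import torch

def pad_or_crop_frames(fbank: torch.Tensor, target_frames: int = 1024) -> torch.Tensor:
    """Pad with zeros or crop a [T, n_mels] filterbank to a fixed number of frames."""
    t, n_mels = fbank.shape
    if t < target_frames:
        # zero-pad at the end to reach the target length
        pad = fbank.new_zeros(target_frames - t, n_mels)
        fbank = torch.cat([fbank, pad], dim=0)
    elif t > target_frames:
        # center crop; a random crop is another common choice
        start = (t - target_frames) // 2
        fbank = fbank[start:start + target_frames]
    return fbank

# e.g. pad_or_crop_frames(torch.randn(900, 128)).shape == (1024, 128)
```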

1. Yes. The model can only find unknowns that are similar to the foreground. 2. Positive samples: known classes. Negative samples: background. Computing a contrastive loss between known classes and the background pushes different classes further apart.
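A minimal sketch of what such a contrastive loss could look like, assuming same-class known samples act as positives and background features as extra negatives; the function name, tensor layout, and exact formulation are assumptions, not the repo's code:

```python
import torch
import torch.nn.functional as F

def known_vs_background_contrastive(known_feats, known_labels, bg_feats, tau=0.1):
    """known_feats: [N, D] embeddings of known-class samples,
    known_labels: [N] long tensor of class ids,
    bg_feats:     [M, D] embeddings of background samples."""
    feats = F.normalize(torch.cat([known_feats, bg_feats], dim=0), dim=-1)  # [N+M, D]
    sim = feats @ feats.t() / tau                                           # scaled cosine sims
    n = known_feats.shape[0]

    losses = []
    for i in range(n):  # anchors are known-class samples only
        pos_mask = (known_labels == known_labels[i]).clone()
        pos_mask[i] = False                      # exclude the anchor itself
        if pos_mask.sum() == 0:
            continue
        logits = sim[i].clone()
        logits[i] = float("-inf")                # drop self-similarity from the denominator
        log_prob = logits - torch.logsumexp(logits, dim=0)
        losses.append(-log_prob[:n][pos_mask].mean())
    return torch.stack(losses).mean() if losses else known_feats.new_zeros(())
```

Because background embeddings only ever appear in the denominator, minimizing this loss pulls same-class known samples together while pushing them away from both other classes and the background.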