
Support our open source music pretrained Transformer

a43992899 opened this issue 1 year ago

Hi, we are researchers from the MAP (Music Audio Pre-train) project, where we pre-train transformer language models on large-scale music audio datasets. Our model, MERT, uses a method similar to HuBERT, and its performance has been verified on downstream music information retrieval tasks (see the link below). It has been released on Hugging Face and can be loaded interchangeably with the standard HuBERT loading code:

```python
from transformers import HubertModel

model = HubertModel.from_pretrained("m-a-p/MERT-v0")
```

We are currently training a better base model and scaling up to a large model on more music and speech data. Using our weights as an initialization should be a better starting point than speech-only HuBERT. Improved checkpoints will be released soon.

https://huggingface.co/m-a-p/MERT-v0

a43992899 · Jan 28 '23 19:01