Yusong Wu

Results: 16 comments by Yusong Wu

Hi @phinate! Are you planning to generate MIDI that is similar to your MIDI files, or are you planning to generate audio renderings of MIDI files similar to your...

Hi! Could you please describe how you installed the package or what command you entered?

Hi Megan, I saw you opened a new issue for the new problem. I will close this issue if you have no further problems with training the model. :)

Huge thanks for your interest in our work! First, sorry that training on custom datasets is still hacky. For your question: 1. Yes, you can refer to https://github.com/magenta/midi-ddsp/issues/46 for how...

Hi! Thanks so much for your interest! Sorry for the late reply. 1. Only the keys used here (https://github.com/magenta/ddsp/blob/d1e9b555bf7ef6541d6c9a820b2e3941777c35c8/ddsp/training/data.py#L495) are useful. 2. Unfortunately I also do not have the dataset...

Hi! Thanks for your interest! Yes, the latter. MIDI-DDSP takes a monophonic MIDI melody as input and outputs the generated audio.
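Since the input must be a monophonic melody, a quick sanity check before synthesis is to verify that no two notes overlap in time. The helper below is purely illustrative and not part of the MIDI-DDSP API; it assumes notes are given as `(start_sec, end_sec, pitch)` tuples.

```python
def is_monophonic(notes, tol=1e-6):
    """Return True if no two notes overlap in time.

    `notes` is a list of (start_sec, end_sec, pitch) tuples. This helper
    and its note format are illustrative assumptions, not MIDI-DDSP API.
    """
    ordered = sorted(notes, key=lambda n: n[0])
    for (_, prev_end, _), (next_start, _, _) in zip(ordered, ordered[1:]):
        if next_start < prev_end - tol:  # next note starts before previous ends
            return False
    return True

# Sequential notes form a valid monophonic melody...
melody = [(0.0, 0.5, 60), (0.5, 1.0, 62), (1.0, 1.5, 64)]
# ...while simultaneous notes (a chord) are polyphonic.
chord = [(0.0, 1.0, 60), (0.0, 1.0, 64)]
```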

Yes. You need paired MIDI and audio data to train MIDI-DDSP. MIDI-DDSP currently does not support training on datasets other than URMP, so you might need some hack to do...

I don't have a metric for the alignment quality, but the MIDI (note boundaries) in the URMP dataset is manually labeled. So I manually checked the MIDI alignment with the...
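If you did want a rough programmatic spot-check instead of a manual one, you could measure what fraction of labeled note onsets fall within a small tolerance of onsets detected from the audio. This is an illustrative heuristic, not an official URMP or MIDI-DDSP metric:

```python
def onset_alignment_rate(labeled_onsets, detected_onsets, tol=0.05):
    """Fraction of labeled onsets within `tol` seconds of any detected onset.

    Rough, illustrative alignment spot-check (all names here are
    assumptions); onset times are in seconds.
    """
    if not labeled_onsets:
        return 1.0
    hits = sum(
        any(abs(l - d) <= tol for d in detected_onsets) for l in labeled_onsets
    )
    return hits / len(labeled_onsets)
```

A rate well below 1.0 at a 50 ms tolerance would suggest the labels and audio are misaligned enough to hurt training.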

Well... I have to confess that this codebase (written by me) is not well organized, so you will need some hacks. Here are the steps you should follow: 1. Write data preprocessing code...

Hi, if you are referring to the input to the Synthesis Generator and DDSP Inference, the input format is the one used in the training data. Please see https://github.com/magenta/midi-ddsp/issues/52 for more...