
Inference code

anhquannguyen21 opened this issue on Nov 14 '23 · 3 comments

Thanks for your great work. How do I run inference with a new text caption after training?

anhquannguyen21 · Nov 14 '23 16:11

The project is still under active development. I'll add that later.

haoheliu · Nov 14 '23 16:11

The inference code is ready now. Please check out the main branch.

haoheliu · Nov 17 '23 16:11

@haoheliu I got this error. How do I fix it?

```
python3 audioldm_train/infer.py --config_yaml audioldm_train/config/2023_08_23_reproduce_audioldm/audioldm_original_medium.yaml --list_inference tests/captionlist/inference_test.lst
```

Error:

```
/home/datnt114/Videos/AudioLDM-training-finetuning/audioldm_train/infer.py:125: SyntaxWarning: "is not" with a literal. Did you mean "!="?
  if "reload_from_ckpt" is not None:
SEED EVERYTHING TO 0
Global seed set to 0
Add-ons: []
Dataset initialize finished
Reload ckpt specified in the config file audioldm_train/config/2023_08_23_reproduce_audioldm/audioldm_original_medium.yaml
LatentDiffusion: Running in eps-prediction mode
/home/datnt114/anaconda3/lib/python3.11/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3483.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
Some weights of RobertaModel were not initialized from the model checkpoint at roberta-base and are newly initialized: ['roberta.pooler.dense.weight', 'roberta.pooler.dense.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Traceback (most recent call last):
  File "/home/datnt114/Videos/AudioLDM-training-finetuning/audioldm_train/infer.py", line 128, in <module>
    infer(dataset_json, config_yaml, config_yaml_path, exp_group_name, exp_name)
  File "/home/datnt114/Videos/AudioLDM-training-finetuning/audioldm_train/infer.py", line 67, in infer
    latent_diffusion = instantiate_from_config(configs["model"])
  File "/home/datnt114/Videos/AudioLDM-training-finetuning/audioldm_train/utilities/model_util.py", line 102, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "/home/datnt114/Videos/AudioLDM-training-finetuning/audioldm_train/modules/latent_diffusion/ddpm.py", line 1014, in __init__
    super().__init__(conditioning_key=conditioning_key, *args, **kwargs)
  File "/home/datnt114/Videos/AudioLDM-training-finetuning/audioldm_train/modules/latent_diffusion/ddpm.py", line 112, in __init__
    self.clap = CLAPAudioEmbeddingClassifierFreev2(
  File "/home/datnt114/Videos/AudioLDM-training-finetuning/audioldm_train/conditional_models.py", line 1172, in __init__
    self.model, self.model_cfg = create_model(
  File "/home/datnt114/Videos/AudioLDM-training-finetuning/audioldm_train/modules/clap/open_clip/factory.py", line 153, in create_model
    model.load_state_dict(ckpt)
  File "/home/datnt114/anaconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 2041, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for CLAP:
	Unexpected key(s) in state_dict: "text_branch.embeddings.pos
```

manhdoan291 · Nov 18 '23 10:11
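
A side note on that log: the SyntaxWarning at infer.py:125 is unrelated to the crash. Comparing a string literal with `is not None` is always true, so that branch always runs; the check was presumably meant to look the key up in the parsed config. A minimal sketch, assuming the YAML config is loaded into a dict (the variable name `configs` here is an assumption):

```python
# Sketch only; not the repo's exact code.
configs = {"reload_from_ckpt": "path/to/checkpoint.ckpt"}  # hypothetical config entry

# Original form: a string literal is never None, so this is always True and
# recent Python versions emit: SyntaxWarning: "is not" with a literal.
if "reload_from_ckpt" is not None:
    pass

# Probable intent: only reload when the key is actually set in the config.
if configs.get("reload_from_ckpt") is not None:
    ckpt_path = configs["reload_from_ckpt"]
```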
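
The crash itself comes from the CLAP checkpoint load in audioldm_train/modules/clap/open_clip/factory.py: the checkpoint contains a key the instantiated model does not expect (the key name is cut off in the log above). A common cause is a `transformers` version mismatch between the environment and the one the checkpoint was saved with, so installing the version pinned in the repo's requirements is worth trying first. Failing that, here is a workaround sketch using a hypothetical helper (not the repo's own code), assuming the offending key is a non-weight buffer that is safe to skip:

```python
import torch


def load_clap_checkpoint(model: torch.nn.Module, ckpt_path: str) -> torch.nn.Module:
    """Hypothetical helper mirroring the load done in create_model().

    Assumes the unexpected key is a non-weight buffer (e.g. an embeddings
    buffer registered by an older `transformers` release), so skipping it
    should not change the model's behaviour.
    """
    ckpt = torch.load(ckpt_path, map_location="cpu")
    state_dict = ckpt.get("state_dict", ckpt)  # some checkpoints nest the weights

    # strict=False skips unexpected keys instead of raising a RuntimeError;
    # always inspect what was skipped or missing before trusting the result.
    missing, unexpected = model.load_state_dict(state_dict, strict=False)
    print("missing keys:", missing)
    print("unexpected keys:", unexpected)
    return model
```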