
inference_instruct2: passing zero_shot_spk_id during natural-language instruct inference garbles the generated speech

Open · Glenming opened this issue 6 months ago · 1 comment

Cause

When using `inference_instruct2` for natural-language (instruct) inference, `instruct_text + '<|endofprompt|>'` is passed to `frontend_zero_shot` as its `prompt_text`. If `cosyvoice.inference_instruct2` is called with a `zero_shot_spk_id`, the current logic never applies the passed-in `prompt_text`, so the synthesized speech ends up mixed with the `prompt_text` that was recorded when `zero_shot_spk_id` was registered.

```python
def frontend_instruct2(self, tts_text, instruct_text, prompt_speech_16k, resample_rate, zero_shot_spk_id):
    model_input = self.frontend_zero_shot(tts_text, instruct_text + '<|endofprompt|>', prompt_speech_16k, resample_rate, zero_shot_spk_id)
    # del model_input['llm_prompt_speech_token']
    # del model_input['llm_prompt_speech_token_len']
    model_input.pop('llm_prompt_speech_token', None)
    model_input.pop('llm_prompt_speech_token_len', None)
    return model_input
```
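To make the failure mode concrete, here is a minimal, dependency-free sketch; plain strings and dicts stand in for token tensors and `spk2info`, and all names are illustrative, not the real CosyVoice internals:

```python
# Illustrative sketch: when zero_shot_spk_id is set, the current
# frontend_zero_shot returns the cached spk2info entry as-is, so the
# instruct prompt built from instruct_text + '<|endofprompt|>' is dropped.

spk2info = {'my_spk': {'prompt_text': 'REGISTERED_PROMPT_TOKENS'}}

def frontend_zero_shot_current(prompt_text, zero_shot_spk_id):
    if zero_shot_spk_id == '':
        return {'prompt_text': prompt_text}
    # Current behavior: the passed-in prompt_text is never written into
    # model_input, so the registered prompt leaks into synthesis.
    return spk2info[zero_shot_spk_id]

instruct_prompt = 'speak happily<|endofprompt|>'
result = frontend_zero_shot_current(instruct_prompt, 'my_spk')
print(result['prompt_text'])  # 'REGISTERED_PROMPT_TOKENS', not the instruct prompt
```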

I think `frontend_zero_shot` could be adjusted as follows:

```python
def frontend_zero_shot(self, tts_text, prompt_text, prompt_speech_16k, resample_rate, zero_shot_spk_id):
    tts_text_token, tts_text_token_len = self._extract_text_token(tts_text)
    prompt_text_token, prompt_text_token_len = self._extract_text_token(prompt_text)
    if zero_shot_spk_id == '':
        prompt_speech_resample = torchaudio.transforms.Resample(orig_freq=16000, new_freq=resample_rate)(prompt_speech_16k)
        speech_feat, speech_feat_len = self._extract_speech_feat(prompt_speech_resample)
        speech_token, speech_token_len = self._extract_speech_token(prompt_speech_16k)
        if resample_rate == 24000:
            # cosyvoice2, force speech_feat % speech_token = 2
            token_len = min(int(speech_feat.shape[1] / 2), speech_token.shape[1])
            speech_feat, speech_feat_len[:] = speech_feat[:, :2 * token_len], 2 * token_len
            speech_token, speech_token_len[:] = speech_token[:, :token_len], token_len
        embedding = self._extract_spk_embedding(prompt_speech_16k)
        model_input = {'prompt_text': prompt_text_token, 'prompt_text_len': prompt_text_token_len,
                       'llm_prompt_speech_token': speech_token, 'llm_prompt_speech_token_len': speech_token_len,
                       'flow_prompt_speech_token': speech_token, 'flow_prompt_speech_token_len': speech_token_len,
                       'prompt_speech_feat': speech_feat, 'prompt_speech_feat_len': speech_feat_len,
                       'llm_embedding': embedding, 'flow_embedding': embedding}
    else:
        model_input = self.spk2info[zero_shot_spk_id]
        model_input['prompt_text'] = prompt_text_token
        model_input['prompt_text_len'] = prompt_text_token_len
    model_input['text'] = tts_text_token
    model_input['text_len'] = tts_text_token_len
    return model_input
```
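One caveat with this adjustment (my observation, not part of the original report): `model_input = self.spk2info[zero_shot_spk_id]` aliases the cached dict, so writing `prompt_text` / `text` into it also mutates the registered speaker entry across calls. A shallow copy sidesteps that. A sketch with plain dicts and illustrative names:

```python
# Sketch of the aliasing side effect of looking up a cached speaker entry
# and mutating it in place, versus copying it first.

spk2info = {'my_spk': {'flow_embedding': 'EMB', 'prompt_text': 'ORIGINAL'}}

def patched_lookup(prompt_text, spk_id):
    model_input = spk2info[spk_id]        # aliases the cached entry
    model_input['prompt_text'] = prompt_text
    return model_input

patched_lookup('NEW_PROMPT', 'my_spk')
print(spk2info['my_spk']['prompt_text'])  # 'NEW_PROMPT': the cache was rewritten

def copied_lookup(prompt_text, spk_id):
    model_input = dict(spk2info[spk_id])  # shallow copy leaves the cache intact
    model_input['prompt_text'] = prompt_text
    return model_input

copied_lookup('ANOTHER_PROMPT', 'my_spk')
print(spk2info['my_spk']['prompt_text'])  # still 'NEW_PROMPT', unchanged by the copy
```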

Glenming avatar Jun 25 '25 10:06 Glenming

Thanks for the tip — I've finally fixed the garbled-timbre problem in instruct mode; with multi-text generation I can at last control the emotion properly 😥😥

Imxxoo avatar Nov 26 '25 06:11 Imxxoo