pixi-live2d-display
Live2D with Lipsync (using audio file/link)
Solves the issues mentioned in https://github.com/guansss/pixi-live2d-display/pull/117
@guansss After a long time (sorry, was busy with exams) I finally finished it... Fixed all the issues you mentioned (except the volume, the ones you said were unused, and the resolveURL thing, because without them this seems unresponsive).
Kindly spare some of your time to check it and let me know if any more changes are required.
No worries about the time since I'm being late most of the time 😅
Thanks for your effort, but it looks like the data URL check and cache buster are left unchanged due to a commit that reverted the changes? https://github.com/guansss/pixi-live2d-display/pull/122/commits/9fc5a21fa7c67d61df481fb9fbdd6bb003fe5cc9
Others look good to me, I'll merge it when these two changes are made.
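For context, a minimal sketch of what the requested check could look like; the function name and query key here are hypothetical, not the PR's actual code:

```ts
// Hypothetical sketch, not the PR's actual code: skip the cache buster
// for data: URLs, since appending a query string would corrupt them.
function withCacheBuster(url: string): string {
  if (url.startsWith("data:")) return url;
  const sep = url.includes("?") ? "&" : "?";
  return `${url}${sep}cache-buster=${Date.now()}`;
}
```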
Maybe something weird happened (branch issue i guess). Thanks for noticing. I'll update the code asap.
Is this PR ready to merge?
It was ready, but it shows some errors on npm test (though I didn't see any issue while running it as a script). Testing... (sorry for taking time)
@guansss Yep, it's done now, thanks for your patience
I used this command to install the package:
yarn add https://github.com/RaSan147/pixi-live2d-display.git#for_PR
But it raised this error:
[vite] Internal server error: Failed to resolve entry for package "pixi-live2d-display". The package may have incorrect main/module/exports specified in its package.json. Plugin: vite:import-analysis
Unfortunately I have no idea what this is or how to respond to it; it works as it should for me (sorry, I'm a noob at Node/TS/yarn stuff). Let's wait till the site admin says something.
Sorry for the delay! There are still some changes I would like to see, but I'm going to do it myself to speed things up since this PR has been waiting too long. Thanks for all your hard work!
@954-Ivory I believe the repo itself is not a ready-to-use package so it cannot be installed like this. Before this PR is merged, you may need to install it via CDN URLs.
A new behavior is breaking some existing tests, that is, when a motion with a sound is playing, the model does not allow another motion to start even if it has a higher priority, because there's already a playing audio.
I tried removing that audio check but just got some other errors. Anyway, I think this is mainly because we didn't have a thorough design for how to reconcile motions with sounds (model.motion()) and lipsync audios (model.speak()).
So here we go! My intuitive idea is, motions shouldn't be disallowed to play because of a playing audio, and motion sounds should have a higher priority than lipsync audios. So if a motion is going to play and it has a sound, the current lipsync audio should be canceled; and if it doesn't have a sound, the lipsync audio should keep playing along with the motion.
Could you share your thoughts on this?
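A minimal sketch of that rule, with all names hypothetical rather than taken from the codebase:

```ts
// Hypothetical sketch of the proposed rule; no names here come from
// the actual codebase.
let lipsyncAudio: HTMLAudioElement | null = null;

function startMotion(motionSound?: HTMLAudioElement): void {
  // A playing lipsync audio never blocks a motion from starting.
  if (motionSound) {
    // Motion sounds take priority: cancel the current lipsync audio.
    lipsyncAudio?.pause();
    lipsyncAudio = null;
    void motionSound.play();
  }
  // If the motion has no sound, the lipsync audio keeps playing
  // alongside the motion.
}
```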
Well, this is a double-edged sword. Sometimes you need the audio to keep playing (an unskippable voice line: the model moves based on touch but still talks), and sometimes you need to end the audio and go to the next motion state (skipping a part of the game). We should focus more on how priority works so that coders can specify the behavior themselves.
Yeah, it's important to think about flexibility. Yet priority may not work well for this, because the current motion state management is already very complicated, and adding another priority factor would make it a real mess. It would also heavily break the APIs.
How about separating the state into two layers? Like, one for lipsync audios and one for motion sounds, where each state controls an <audio> element. They can be independent and are allowed to play simultaneously, so it's up to the developer to decide which one to play, or both.
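A sketch of the two-layer idea, assuming a hypothetical AudioLayer wrapper:

```ts
// Hypothetical sketch: each layer owns its own <audio> element, so
// motion sounds and lipsync audio can play independently or together.
class AudioLayer {
  private readonly audio = new Audio();

  play(url: string): void {
    this.audio.src = url;
    void this.audio.play();
  }

  stop(): void {
    this.audio.pause();
    this.audio.currentTime = 0;
  }
}

const motionSoundLayer = new AudioLayer();
const lipsyncLayer = new AudioLayer();
// It's up to the developer which one to start, or both:
// motionSoundLayer.play("tap_motion.mp3");
// lipsyncLayer.play("speech.mp3");
```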
Yeah, let's give them options (and specify them in the docs) and let them pick.
Why can't both speaking audio and motion sound be played together?
@xumx Yeah that's exactly what my latest comment was talking about.
Hey @RaSan147, is there a reason why these two calculations are different? I wonder if they can be consistent so I can move them into InternalModel.
https://github.com/guansss/pixi-live2d-display/blob/b00b64b001fa0e9c64768facf0426fc30b827a3a/src/cubism2/Cubism2InternalModel.ts#L250-L266
https://github.com/guansss/pixi-live2d-display/blob/b00b64b001fa0e9c64768facf0426fc30b827a3a/src/cubism4/Cubism4InternalModel.ts#L225-L236
Sorry, I might have forgotten to update the other one. I would recommend adding the bias_power and the weighted one (otherwise the lips don't move well).
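For reference, a hedged sketch of what a biased, weighted mouth-open value could look like; biasPower, weight, and the exact formula are assumptions, not the code at the linked lines:

```ts
// Hypothetical sketch; biasPower, weight, and the formula are
// assumptions, not the code at the linked lines.
let previousMouthOpen = 0;

function mouthOpenFromVolume(volume: number, biasPower = 0.5, weight = 0.8): number {
  const clamped = Math.min(1, Math.max(0, volume));
  // A power below 1 boosts quiet samples so the lips still move on soft speech.
  const biased = Math.pow(clamped, biasPower);
  // Weighted blend with the previous frame to avoid jitter.
  previousMouthOpen = weight * biased + (1 - weight) * previousMouthOpen;
  return previousMouthOpen;
}
```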
Finally it's ready to merge! Before I merge it, are there any changes you would like to make or suggest?
Sorry, didn't notice. Gimme a bit of time, testing...
BTW, can you please check the PR I've sent you on the cubism folder repo? That should fix the process-not-found error (or you may tweak the results a bit).
Well, I'm gonna miss motion(...., {sound}). It was a great option (since it's an optional feature, removing it feels like a bad idea) that helped retain a certain posture and motion while speaking. Also, the expression and many more things are missing from the PR version... 😥
Looking forward to the update!
Gotta re-test and look for a compatible way to shift from the patch to the official version.
#150
The demonstration video of the model on the Live2D official website can flexibly display mouth movements, and the lip-syncing looks quite natural (Demo Video).
In this example model, not only can the mouth opening be set based on audio information, but vowel mouth shapes can also be set by adjusting 'ParamA', 'ParamE', 'ParamI', 'ParamO', 'ParamU'.
model.internalModel.coreModel.setParameterValueById('ParamMouthOpenY', mouthY)
model.internalModel.coreModel.setParameterValueById('ParamA', 0.3)
I feel there might be better methods to achieve lip-syncing. Can the model be made to match the mouth shape to the audio?
Also, Alibaba Cloud's TTS can output the time position of each Chinese character/English word in the audio. How can the model play the audio, and can it set the corresponding mouth shape based on the phonetic information?
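A sketch of how timed phoneme/word data from a TTS service could drive the vowel parameters; the TimedVowel shape and the per-frame loop are assumptions:

```ts
// Hypothetical sketch: drive vowel shapes from TTS timing output.
// The TimedVowel shape and the update loop are assumptions.
interface TimedVowel { start: number; end: number; vowel: "A" | "E" | "I" | "O" | "U"; }

function playWithVowels(model: any, audio: HTMLAudioElement, timings: TimedVowel[]): void {
  const vowels = ["A", "E", "I", "O", "U"] as const;
  const tick = () => {
    const t = audio.currentTime;
    const current = timings.find((v) => t >= v.start && t < v.end);
    for (const v of vowels) {
      // Same call as above: 1 for the active vowel, 0 for the rest.
      model.internalModel.coreModel.setParameterValueById("Param" + v, current?.vowel === v ? 1 : 0);
    }
    if (!audio.ended) requestAnimationFrame(tick);
  };
  void audio.play();
  requestAnimationFrame(tick);
}
```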
Seeking guidance from the experts! 🙏
> Finally it's ready to merge! Before I merge it, are there any changes you would like to make or suggest?
Whenever you're ready. Thanks for all your hard work!
@guansss Please merge it soon, looking forward to the package release ❤️
Eagerly awaiting merge!