
UE4 text to lip sync: phoneme morph targets?

Posted By macw0lf 6 Years Ago
macw0lf
Posted 6 Years Ago
Senior Member


Group: Forum Members
Last Active: 4 Years Ago
Posts: 3, Visits: 21
Hi,
I have been looking at an interesting UE4 Marketplace plugin for text-driven lip sync - https://www.unrealengine.com/marketplace/text-to-lip-sync
The issue I am having is that the plugin expects phoneme-based morph targets (AE, AH, EE, ER, etc.) and lets you map those to its phonemes. The problem is that I don't see anything like this coming out of the CC2 FBX when exporting to UE4; it has a lot of morph targets, but they look composite, or more generic.

Is there any way to get the AE, AH, EE morph targets that CC2 and iClone let you use into the exported FBX, so that they are available to tools like these?
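Here is roughly what I am hoping to drive on the UE4 side (a minimal sketch only: SetMorphTarget() is a real USkeletalMeshComponent call, but the phoneme-to-morph-target name mapping below is invented, since the CC2 export does not seem to provide targets named like this):

```cpp
// Minimal sketch. SetMorphTarget() exists on USkeletalMeshComponent, but the
// morph target names on the right-hand side are placeholders, not CC2 output.
#include "Components/SkeletalMeshComponent.h"

void ApplyPhoneme(USkeletalMeshComponent* Mesh, FName Phoneme, float Weight)
{
    // Hypothetical remap: plugin phoneme name -> exported morph target name.
    static const TMap<FName, FName> PhonemeToMorph = {
        { TEXT("AE"), TEXT("Mouth_AE") }, // right-hand names are invented
        { TEXT("AH"), TEXT("Mouth_AH") },
        { TEXT("EE"), TEXT("Mouth_EE") },
        { TEXT("ER"), TEXT("Mouth_ER") },
    };

    if (!Mesh) return;
    if (const FName* Morph = PhonemeToMorph.Find(Phoneme))
    {
        Mesh->SetMorphTarget(*Morph, Weight); // drive the blend shape weight
    }
}
```

Something like this would only work if the FBX actually carried per-phoneme targets, which is exactly what appears to be missing.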

More info here: https://forums.unrealengine.com/unreal-engine/marketplace/1439993-subtitles-based-lip-sync

Thanks
Stuart

Edited 6 Years Ago by macw0lf
animagic
Posted 6 Years Ago
Distinguished Member


Group: Forum Members
Last Active: Yesterday
Posts: 15.7K, Visits: 30.5K
I'm not familiar with UE4, but from the description the idea seems to be to use text to assist in creating the lip-sync for a speech file? Otherwise you would just have a moving mouth with no sound.

The phoneme morph targets in iClone would be composites. I believe there are 60 or more morph components for the face, which are combined to create the mouth shapes (visemes) for speech.

In part because there is blending between visemes to get fluid animation, it would not be easy to separate them out into distinct morph targets. So what happens in iClone is dynamic, whereas you are looking for static morph targets that represent specific phonemes, if I understand correctly.
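As a rough illustration only (the data layout, component names, and weights here are assumptions, not iClone's internals), a static viseme could in principle be baked by summing each facial component's vertex deltas scaled by its weight for that viseme:

```cpp
// Illustrative sketch, not iClone's actual data layout: baking one static
// viseme morph target as a weighted sum of facial morph-component deltas.
#include <cstddef>
#include <vector>

struct MorphComponent
{
    std::vector<float> Deltas; // per-vertex xyz offsets, flattened
    float Weight;              // this component's contribution to the viseme
};

// Accumulate weighted deltas into a single static morph target (e.g. "AE").
std::vector<float> BakeViseme(const std::vector<MorphComponent>& Components,
                              std::size_t NumFloats)
{
    std::vector<float> Baked(NumFloats, 0.0f);
    for (const MorphComponent& C : Components)
        for (std::size_t i = 0; i < NumFloats && i < C.Deltas.size(); ++i)
            Baked[i] += C.Weight * C.Deltas[i];
    return Baked;
}
```

The catch, as noted above, is that iClone blends these weights dynamically from frame to frame, so there is no single fixed weight set per phoneme to bake out.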

Also, is the aim of what you are doing to do the lip-syncing in UE4 using the method you describe?

