Remove / Replace / Mask Out LiveFace lip data (how to blend facial performance with AccuLips AFTER...

Posted By Parallel World Labs 2 Years Ago
Parallel World Labs
Posted 2 Years Ago
Yes, I have been playing with the Talking Style editor - primarily to pick the "Enunciating" profile. 

Unfortunately it does the OPPOSITE of what I need - it allows you to exaggerate or reduce the blend of your various Viseme morph targets, instead of blending the mouth deformations of the motion capture data.

But can it be keyframed? I also couldn't figure out how to keyframe the "Lip Options" track.
Rampa
Posted 2 Years Ago
Changing the talking and smoothing settings is done per clip, so right-click and break your viseme clip at the places where you need them to change, then set each clip separately.

For blending, use both the expression and viseme together. You will need to reduce the strength of both, because they are driving the same mesh; otherwise you will see the overdrive you saw before. So try setting both expression and viseme to 50%.
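
For anyone who wants to see why the overdrive happens, here is a minimal Python sketch. It is purely an illustration of the arithmetic, not iClone's actual internals: it assumes the expression (mocap) layer and the viseme (AccuLips) layer are simply added per blendshape channel, and the channel names are made up.

```python
# Illustration only: assumes the expression (mocap) layer and the viseme
# (AccuLips) layer are added per blendshape channel, which is enough to show
# the overdrive arithmetic. Channel names are hypothetical.

def blend(expression, viseme, expr_strength=1.0, vis_strength=1.0):
    """Combine two layers of blendshape weights (dicts of channel -> 0..1)."""
    channels = set(expression) | set(viseme)
    return {
        ch: expr_strength * expression.get(ch, 0.0)
            + vis_strength * viseme.get(ch, 0.0)
        for ch in channels
    }

# One frame where both layers push the same lip channels hard.
mocap_frame  = {"Mouth_Pucker": 0.8, "Jaw_Open": 0.6, "Brow_Raise": 0.5}
viseme_frame = {"Mouth_Pucker": 0.7, "Jaw_Open": 0.5}

print(blend(mocap_frame, viseme_frame))            # Mouth_Pucker -> 1.5, overdriven
print(blend(mocap_frame, viseme_frame, 0.5, 0.5))  # Mouth_Pucker -> 0.75, plausible
```

With both layers at 100%, the shared lip channels stack well past 100% (the 1.5 above), which is the overdrive; at 50/50 the same frame lands back around 0.75.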
Parallel World Labs
Posted 2 Years Ago
Thanks for the continued tips and advice. The problem with setting viseme and expression to 50% is that while it does blend the lip animations, it also dampens the rest of the facial animations.

But your comment has made me wonder whether I could exaggerate the morph targets that are not dealing with the lips and then globally tone everything down with the expression slider...
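
A toy version of that compensation idea, in case it helps to see the arithmetic (the channel names and the simple multiply-then-add model are assumptions, not how iClone works internally):

```python
# Toy arithmetic for "exaggerate the non-lip morphs, then tone everything
# down globally". Channel names and the simple multiplicative model are
# made up for illustration.

expr_slider = 0.5                      # global Expression strength

non_lip = {"Brow_Raise": 0.5, "Cheek_Raise": 0.3}
lips    = {"Mouth_Pucker": 0.8}

# Boost only the non-lip channels so the later global cut cancels out on them.
boosted = {ch: min(v / expr_slider, 1.0) for ch, v in non_lip.items()}
boosted.update(lips)                   # lip channels left as captured

final = {ch: v * expr_slider for ch, v in boosted.items()}
print(final)
# Brow_Raise -> 0.5 and Cheek_Raise -> 0.3 (unchanged),
# Mouth_Pucker -> 0.4 (halved, leaving headroom for the viseme layer)
```

The obvious limit is the 1.0 clamp: any non-lip channel already above 50% cannot be boosted enough to fully cancel the global reduction.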
mrtobycook
Posted 2 Years Ago
This thread has three of the most knowledgeable people responding - Mike, Rampa and animagic. I wish I could have all your knowledge somehow imprinted on my brain lol. (Plus Bassline's!) :)

- - - - - - - - - - - - - - - - - - - - - - - - - - - - 
virtualfilmer.com | youtube

Parallel World Labs
Posted 2 Years Ago
I just thought I'd come back to this thread for posterity as I have found a solution that seems to really work in my use case, which I'll document below:

1. I capture all facial animation using the LiveMotion / LiveFace plugin. If I know I'll be blending viseme and facial expression, then I turn off facial mocap for just the JAW section prior to recording. This is done in the settings pane when you select the facial mocap and associate it with the character. By default all facial muscle groups are selected for capture, but you can click on different sections to deactivate capture for just those sections. This is not strictly necessary, but it saves a step later on.

2. After capture is complete, I perform the usual steps of adding an AccuLips animation layer to the character's animation, using the written script and captured audio track. I temporarily set the Expression value to 0 using the Modify panel, with Viseme set to 100, so I can fully concentrate on getting the generated lip sync as close as possible. This generates a highly accurate, if somewhat expressionless, lip performance.

3. After that, I set expression to 100 and viseme to 100, and scrub through my animation looking for trouble spots. This usually involves areas where the lips pucker or stretch, because the captured facial mocap is added to the viseme morph targets, often resulting in > 100% morphing, which looks, well, horrendous in most cases.

4. Next, using the techniques helpfully provided by experts in the post above, I sample the expression layer, so that I have access to the curve editor. It's also a good idea to sample optimized here, to slightly speed up the next step.

5. The next step is to open the curve editor and determine which morph targets are causing the greatest issues, and reduce them dramatically. You can do this for the entire performance or for just a section, as needed and depending on the amount of time you want to spend. Typically I'll scrub to find the most offensive section and try to isolate which expression tracks are responsible for the issue by first examining the curves and trying to visually identify a curve that peaks at that exact frame. If that is inconclusive, I'll systematically delete curves until I find the tracks responsible.

6. Now that I have found the tracks that interfere the most with the viseme animation, I activate them, select all the points and first perform an Optimize to reduce the number of control points - this step is critical to speed up the processing, which can become unbearably slow.

7. With the offending curve tracks optimized, I then Simplify them as a preliminary step. I usually use a fairly high value here, as I will be discarding a lot of data anyway. Typically a value above 20.

8. As a last step then, and with the optimized and simplified control points selected, I use the Scale tool and scale everything down to about 25%. I find this allows me to still keep some of the subtleties of the performance, while eliminating the exaggerated effect caused by the additive animation layers.
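
To make the arithmetic behind steps 3, 5 and 8 concrete, here is a small Python sketch of the "find the offenders, then scale them down" idea. It is purely illustrative: the curve data and channel names are made up, and iClone's curve editor of course works on control points in the GUI rather than through code.

```python
# Illustration of steps 3, 5 and 8 above, not of iClone's internals.
# Each layer is modelled as per-channel lists of frame values; frames where
# the additive result passes 100% mark an offending channel, and offending
# expression curves are then scaled down to ~25%.

OVERDRIVE_LIMIT = 1.0   # 100% morph
SCALE_FACTOR    = 0.25  # the ~25% from step 8

expression_curves = {                       # hypothetical sampled mocap layer
    "Mouth_Pucker": [0.1, 0.6, 0.9, 0.7],
    "Brow_Raise":   [0.2, 0.3, 0.4, 0.3],
}
viseme_curves = {                           # hypothetical AccuLips layer
    "Mouth_Pucker": [0.0, 0.5, 0.8, 0.6],
    "Brow_Raise":   [0.0, 0.0, 0.0, 0.0],
}

# Step 3: scrub for trouble spots, i.e. frames where the layers add past 100%.
offenders = set()
for ch, expr_vals in expression_curves.items():
    vis_vals = viseme_curves.get(ch, [0.0] * len(expr_vals))
    if any(e + v > OVERDRIVE_LIMIT for e, v in zip(expr_vals, vis_vals)):
        offenders.add(ch)

# Step 8: scale only the offending expression curves, keeping their shape.
for ch in offenders:
    expression_curves[ch] = [v * SCALE_FACTOR for v in expression_curves[ch]]

print(offenders)                          # {'Mouth_Pucker'}
print(expression_curves["Mouth_Pucker"])  # a subtle residue of the performance
```

Scaling rather than deleting is what keeps the timing of the captured performance while removing the fight with the viseme layer.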

I hope this helps others. If you have a workflow you think could be even more effective, by all means please share!
