Remove / Replace / Mask Out LiveFace lip data (how to blend facial performance with AccuLips AFTER...

Posted By Parallel World Labs 2 Years Ago
Parallel World Labs
Posted 2 Years Ago
I just thought I'd come back to this thread for posterity as I have found a solution that seems to really work in my use case, which I'll document below:

1. I capture all facial animation using the LiveMotion / LiveFace plugin. If I know I'll be blending viseme and facial expression, then I turn off facial mocap for just the JAW section prior to recording. This is done in the settings pane when you select the facial mocap and associate it with the character. By default all facial muscle groups are selected for capture, but you can click on different sections to deactivate capture for just those sections. This is not strictly necessary, but it saves a step later on.

2. After capture is complete, I perform the usual steps of adding an AccuLips animation layer to the character's animation, using the written script and captured audio track. I temporarily set the Expression value to 0 using the Modify panel, with Viseme set to 100, so I can fully concentrate on getting the generated lip sync as close to the audio as possible. This generates a highly accurate, if somewhat expressionless, lip performance.

3. After that, I set expression to 100 and viseme to 100, and scrub through my animation looking for trouble spots. This usually involves areas where the lips pucker or stretch, because the captured facial mocap is added to the viseme morph targets, often resulting in > 100% morphing, which looks, well, horrendous in most cases.

4. Next, using the techniques helpfully provided by the experts in the posts above, I sample the expression layer so that I have access to the curve editor. It's also a good idea to use optimized sampling here, to slightly speed up the next step.

5. The next step is to open the curve editor and determine which morph targets are causing the greatest issues, and reduce them dramatically. You can do this for the entire performance or for just a section, as needed and depending on the amount of time you want to spend. Typically I'll scrub to find the most offensive section and try to isolate which expression tracks are responsible for the issue by first examining the curves and trying to visually identify a curve that peaks at that exact frame. If that is inconclusive, I'll systematically delete curves until I find the tracks responsible.

6. Now that I have found the tracks that interfere the most with the viseme animation, I activate them, select all the points and first perform an Optimize to reduce the number of control points - this step is critical to speed up the processing, which can become unbearably slow.

7. With the offending curve tracks optimized, I then Simplify them as a preliminary step. I usually use a fairly high value here - typically above 20 - as I will be discarding a lot of data anyway.

8. As a last step, with the optimized and simplified control points selected, I use the Scale tool and scale everything down to about 25%. I find this allows me to keep some of the subtleties of the performance while eliminating the exaggerated effect caused by the additive animation layers (a rough code sketch of steps 5-8 follows below).
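For anyone who thinks better in code, here is a minimal plain-Python sketch of the logic behind steps 5-8. It is not the iClone scripting API - the Key and ExpressionTrack types and every function name are hypothetical stand-ins for what the curve editor does interactively.

```python
# Minimal sketch of the logic behind steps 5-8, in plain Python.
# Not the iClone scripting API: Key, ExpressionTrack and all function
# names are hypothetical stand-ins for what the curve editor does.

from dataclasses import dataclass
from typing import List


@dataclass
class Key:
    frame: int
    value: float           # morph weight, where 1.0 = 100%


@dataclass
class ExpressionTrack:
    name: str               # e.g. "Mouth_Pucker" (assumed morph name)
    keys: List[Key]         # sorted by frame


def value_at(track: ExpressionTrack, frame: int) -> float:
    """Nearest-key lookup; the real curve editor interpolates between keys."""
    return min(track.keys, key=lambda k: abs(k.frame - frame)).value


def find_offenders(tracks: List[ExpressionTrack], trouble_frame: int,
                   threshold: float = 0.5) -> List[ExpressionTrack]:
    """Step 5: tracks that spike at the frame where the blended lips look wrong."""
    return [t for t in tracks if value_at(t, trouble_frame) > threshold]


def thin_out(track: ExpressionTrack, tolerance: float = 0.2) -> None:
    """Steps 6-7 (rough stand-in for Optimize/Simplify): drop keys whose value
    is within `tolerance` of the last key that was kept."""
    if len(track.keys) < 3:
        return
    kept = [track.keys[0]]
    for key in track.keys[1:-1]:
        if abs(key.value - kept[-1].value) >= tolerance:
            kept.append(key)
    kept.append(track.keys[-1])
    track.keys = kept


def scale_down(track: ExpressionTrack, factor: float = 0.25) -> None:
    """Step 8: keep the shape of the captured performance but damp its
    amplitude so it no longer overdrives the viseme morphs it is added to."""
    for key in track.keys:
        key.value *= factor
```

The only reason thin_out comes before scale_down is speed: fewer control points means far less for the editor to process when you scale.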

I hope this helps others. If you have a workflow you think could be even more effective, by all means please share!
mrtobycook
Posted 2 Years Ago
This thread has three of the most knowledgeable people responding - Mike, Rampa and animagic. I wish I could have all your knowledge somehow imprinted on my brain lol. (Plus Bassline’s!) :)

- - - - - - - - - - - - - - - - - - - - - - - - - - - - 
virtualfilmer.com | youtube

Parallel World Labs
Posted 2 Years Ago
Thanks for the continued tips and advice. The problem with setting viseme and expression to 50% is that while it does blend the lip animations, it also dampens the rest of the facial animations.

But your comment has made me wonder whether I can't exaggerate the morph targets that are not dealing with the lips and then globally tone everything down with the expression slider...
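If it helps to see the arithmetic behind that idea, here is a rough sketch with made-up channel names. Note the catch: morph values clamp at 100%, so any non-lip channel already above 50% cannot be boosted enough to fully survive a 50% global weight.

```python
# Sketch of the compensation idea: pre-boost the non-lip expression channels,
# then let the global 50% expression weight bring everything back down.
# Channel names are made up; real morph sliders clamp, so channels already
# above 0.5 cannot be fully recovered this way.

LIP_CHANNELS = {"Jaw_Open", "Mouth_Pucker", "Mouth_Stretch"}   # assumed names


def compensate(expression: dict, global_weight: float = 0.5) -> dict:
    result = {}
    for channel, value in expression.items():
        if channel in LIP_CHANNELS:
            boosted = value                                # let the global weight damp the lips
        else:
            boosted = min(value / global_weight, 1.0)      # pre-boost, clamped at 100%
        result[channel] = boosted * global_weight
    return result


# {"Brow_Raise": 0.4, "Jaw_Open": 0.8} -> {"Brow_Raise": 0.4, "Jaw_Open": 0.4}
```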
Rampa
Posted 2 Years Ago
The talking and smoothing settings are set per clip, so go ahead and right-click to break your viseme clip at the places where you need them to change, then set each clip separately.

For blending, use both the expression and viseme together. You will need to reduce the strength of both, because they are driving the same mesh. Otherwise you will see the overdrive as you have before. So try setting both expression and viseme to 50%.
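A quick back-of-the-envelope illustration of why both strengths need reducing (plain Python, illustrative numbers only):

```python
# Expression and viseme layers drive the same morphs additively, so full
# strength on both can push a morph well past 100%. Numbers are illustrative.

def blended_morph(expr_value: float, viseme_value: float,
                  expr_weight: float, viseme_weight: float) -> float:
    return expr_weight * expr_value + viseme_weight * viseme_value


# Both at 100%: 1.0 * 0.8 + 1.0 * 0.7 = 1.50  -> overdriven mouth
# Both at 50%:  0.5 * 0.8 + 0.5 * 0.7 = 0.75  -> stays in range
print(blended_morph(0.8, 0.7, 1.0, 1.0))   # 1.5
print(blended_morph(0.8, 0.7, 0.5, 0.5))   # 0.75
```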
Parallel World Labs
Posted 2 Years Ago
Yes, I have been playing with the Talking Style editor - primarily to pick the "Enunciating" profile. 

Unfortunately it does the OPPOSITE of what I need - it allows you to exaggerate or reduce the blend of your various Viseme morph targets, instead of blending the mouth deformations of the motion capture data.

But can it be keyframed? I also couldn't figure out how to keyframe the "Lip Options" track.
Rampa
Posted 2 Years Ago
Visemes can be adjusted with the global strength slider in the animation tab of the modify panel, or by right-clicking the viseme track and selecting "Talking Style Editor" to adjust the different viseme strengths, or by double-clicking each viseme separately to edit them one by one.
[Screenshot: https://forum.reallusion.com/uploads/images/256a9cf7-2551-4d32-9193-c049.png]
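In case it helps to see how those three levels of control might stack, here is a tiny sketch with hypothetical viseme names and strengths. How iClone actually combines them internally is not stated here, so the simple multiply is an assumption.

```python
# Three levels of viseme strength, roughly as described above. All names and
# values are hypothetical, and the multiplicative combination is an assumption.

from typing import Optional

GLOBAL_STRENGTH = 0.8                                   # the Modify-panel slider
TALKING_STYLE = {"AE": 1.0, "Oh": 0.7, "W_OO": 0.6}     # per-viseme strengths


def viseme_weight(viseme: str, key_override: Optional[float] = None) -> float:
    """key_override stands in for editing an individual viseme key by hand."""
    base = key_override if key_override is not None else TALKING_STYLE.get(viseme, 1.0)
    return base * GLOBAL_STRENGTH
```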
Parallel World Labs
Posted 2 Years Ago
@Rampa - oh wow! Okay, now we're talking. I'm going to try that as soon as I get back to my workstation. The key is to Sample the clip first, which I guess converts the data into keyframes that you can then edit in the curve editor. If that works, it's exactly what I need!

I've been trying all day to use either the face puppet or the LiveFace mocap to zero out just the lips using masking. The results have been inconsistent! In one clip I was able to completely remove all lip expressions using Replace. But in my most recent clip (captured exactly the same way) I can't get rid of the expressions completely - I can reduce them but they remain.

The problem really is that the AccuLips visemes are additive to the expression capture - not blended. So if you're talking in your face capture and your mouth is open, it opens twice as wide when the visemes are layered on top!

I wonder whether the sampling trick could also work for Viseme tracks - flattening the result so you can then edit it in the curve editor. It would be very powerful if you could.


Rampa
Posted 2 Years Ago
You can use the curve editor. Right-click on the track (expression in my picture) and sample. Right-click a second time and select "Curve Editor".
[Screenshot: https://forum.reallusion.com/uploads/images/c6302d6d-22fc-454b-a4c5-661b.png]

Note there is also a smoothing option in the right-click menu. Scroll down the list in its window. You can basically smooth elements of the expression to whatever percentage you want. Probably quicker than using the curve editor.
[Screenshot: https://forum.reallusion.com/uploads/images/e59b7be6-6845-403c-9b5b-3172.png]
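Conceptually, that kind of percentage smoothing is something like a weighted moving average over the sampled keys. A rough plain-Python illustration of the idea, not the actual implementation:

```python
# Rough idea of percentage-based smoothing on sampled keys: blend each value
# with a local moving average. An assumption about the behaviour only.

def smooth(values: list, amount: float = 0.5, window: int = 5) -> list:
    """values: one morph value per sampled frame; amount: 0..1 smoothing strength."""
    half = window // 2
    out = []
    for i, v in enumerate(values):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        local_avg = sum(values[lo:hi]) / (hi - lo)
        out.append((1.0 - amount) * v + amount * local_avg)
    return out


# smooth([0.0, 1.0, 0.0, 1.0, 0.0], amount=1.0) flattens the jitter toward ~0.4
```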
animagic
Posted 2 Years Ago
One difference between body animation and facial animation is that one is bone-based and the other morph-based, so obviously they are separate systems.

Making them one system would just make things more convoluted and unworkable, in my opinion.

If you have specific requests, you can always add them to the Feedback Tracker. There is little point in questioning why things are the way they are.
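For readers unfamiliar with the distinction, a bare-bones sketch of the two kinds of key data involved (illustrative types only, not iClone's internals):

```python
# Body animation keys carry bone transforms; facial animation keys carry
# per-morph weights. Illustrative types only, not iClone's internal format.

from dataclasses import dataclass
from typing import Tuple


@dataclass
class BoneKey:
    bone: str                                      # e.g. "upperarm_r"
    rotation: Tuple[float, float, float, float]    # quaternion
    translation: Tuple[float, float, float]


@dataclass
class MorphKey:
    morph: str                                     # e.g. "Jaw_Open"
    weight: float                                  # 0.0 - 1.0, blended into the face mesh
```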





Parallel World Labs
Posted 2 Years Ago
One obvious way to achieve what I want is to use Animation Layers and weights along with multiple passes of masked-out motion capture or face puppeting, BUT so far I have not found a way to get facial animation into the Animation Layers.

I don't for the life of me understand why iClone makes this distinction between body part motion and facial motion - it's all animation, why not use the same system?

Similarly, what about curve editing for captured facial motion data? Why can I not unlock the base Layer, and edit facial animation curves just like you can with motion data?
