Hello everyone!
For our VR application we use "Motion Live" to capture most of our character performances. I wanted to try the alternative workflow of letting Audio2Face handle the basic movements and then adding to them, just like in this video:
https://www.youtube.com/watch?v=cYexMlvGHlE

Getting everything to work in Audio2Face was pretty straightforward, but unfortunately the exported blendshape conversion looks nothing like the generated capture in the Omniverse software...
The most common issue is that the mouth won't open at all, even when I check everything in the blendshape solver and set the "Jaw Open" value to 200% in the JSON re-import.
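To illustrate what I mean by scaling "Jaw Open": the equivalent edit done directly on the exported JSON would look roughly like this (just a sketch, assuming the export uses the facsNames/weightMat layout; the key and pose names may differ in your version, and the file names here are placeholders):

```python
# Rough sketch: scale the "JawOpen" channel in an Audio2Face
# blendshape-weight JSON export before re-importing it.
# Assumes a "facsNames" list of pose names plus a "weightMat"
# matrix of per-frame weights (frames x poses).
import json

INPUT_PATH = "a2f_export.json"          # hypothetical file name
OUTPUT_PATH = "a2f_export_scaled.json"  # hypothetical file name
SCALE = 2.0                             # 200%, same as the re-import setting

with open(INPUT_PATH, "r") as f:
    data = json.load(f)

# Find the column index of the JawOpen pose (name is an assumption).
jaw_idx = data["facsNames"].index("JawOpen")

# Scale that column for every frame, clamping to the usual 0..1 range.
for frame in data["weightMat"]:
    frame[jaw_idx] = min(frame[jaw_idx] * SCALE, 1.0)

with open(OUTPUT_PATH, "w") as f:
    json.dump(data, f, indent=2)
```

Even with that kind of scaling applied, the jaw barely moves on the imported character.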
I tried multiple blendshape presets from the "Rigged 3D Characters Pack" as well as the ones delivered with the Audio2Face plugin. I also tried checking and unchecking different options and setting all parameters, like "Temporal Smoothing", manually. The outcome is more or less the same.
The character type is a CC3_Base_Plus model, but I have tried another one as well.
Is there any way I can get the blendshape solve to look more like it does in the Audio2Face software, or is this just the way it is for now?
Thank you for any input you might have!
Best regards,