
How to Drive a Model in Unity Using Audio2Face's BlendShape Data

Posted By coco.li_317067 Last Year
Rated 5 stars based on 1 vote.


coco.li_317067
New Member (79 reputation)

Group: Forum Members
Last Active: Last Year
Posts: 2, Visits: 64

Help Needed! How to Drive a Model in Unity Using Audio2Face's BlendShape Data

I've successfully imported a model from Character Creator (CC) into Audio2Face (A2F), exported the BlendShape (BS) files generated from audio, imported them into iClone (IC), and finally produced a model and animation usable in Unity by following this online guide: https://docs.google.com/document/d/14741Y6htnBK70FhMobQmaMdhhhycSTeeOqLGHzSVfak/edit#heading=h.7ga5z83rsrb8. The results in Unity look great. However, this workflow is entirely offline and doesn't support real-time, audio-driven animation of the model.

Next, I tried using the three BS files exported directly from A2F to drive the CC model in Unity. Although these BS files do drive the lip movement, the results are noticeably worse than the animations exported from IC. Inspecting the animation content shows that, beyond the original BS data, a lot of additional animation data is added during the IC export, and that data plays a crucial role in the final result. However, the IC export plugin does not appear to be open source. Does anyone have suggestions for using the BS files exported from A2F to drive the CC model naturally and directly in Unity?
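One common reason raw A2F weights look worse than the IC-exported animation is the lack of temporal filtering. As a rough stand-in for whatever curve processing IC adds (this is a guess, not a description of the actual plugin), a simple exponential smoothing pass over the per-frame weights often helps; the same loop ports directly to a Unity C# `Update()` that feeds `SkinnedMeshRenderer.SetBlendShapeWeight`:

```python
def smooth_weights(frames, alpha=0.6):
    """Exponentially smooth per-frame blendshape weights.

    frames: list of per-frame weight lists (values assumed in 0..1).
    alpha:  blend factor toward the incoming frame; lower values give
            smoother but laggier motion. Tune per model and frame rate.
    """
    if not frames:
        return []
    out = [list(frames[0])]
    for frame in frames[1:]:
        prev = out[-1]
        out.append([p + alpha * (w - p) for p, w in zip(prev, frame)])
    return out
```

In Unity, the smoothed value for each remapped blendshape would then be applied each frame (scaled to 0-100, the range `SetBlendShapeWeight` expects). Smoothing alone won't reproduce everything IC adds, but it removes much of the frame-to-frame jitter.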

Any help would be greatly appreciated!




