How to Drive a Model in Unity Using Audio2Face's BlendShape Data


https://forum.reallusion.com/Topic553064.aspx

By coco.li_317067 - Last Year

Help Needed! How to Drive a Model in Unity Using Audio2Face's BlendShape Data

Following an online document (https://docs.google.com/document/d/14741Y6htnBK70FhMobQmaMdhhhycSTeeOqLGHzSVfak/edit#heading=h.7ga5z83rsrb8), I've successfully imported a model from Character Creator (CC) into Audio2Face (A2F), exported audio-driven BlendShape (BS) files, imported them into iClone (IC), and finally generated a model and animation usable in Unity. The results in Unity look great. However, this workflow is offline and doesn't allow real-time audio-driven animation of the model.

Next, I tried using the three BS files exported directly from A2F to drive the CC model in Unity. Although these BS files can drive the lip movement, the results are significantly worse than the animations generated through IC. A closer look at the animation content reveals that, besides the original BS curves, the IC export adds a lot of extra animation data, and this data plays a crucial role in the final quality. However, the IC plugin that performs this step does not appear to be open-source. Does anyone have any good suggestions for using the BS files exported from A2F to naturally drive the CC model directly in Unity?
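For reference, one way to approach the real-time side is to drive the CC head's blendshapes directly from the A2F weight curves at runtime. Below is a minimal sketch of that idea, assuming the weights have already been parsed into per-frame float arrays (see the loader sketch later in this thread). The pose names, the one-to-one name lookup, and the 0..1-to-0..100 scaling are assumptions to verify against your own export; in practice CC's blendshape names (e.g. "Jaw_Open") differ from A2F's pose names (e.g. "jawOpen"), so a hand-tuned remapping table is usually needed.

using System.Collections.Generic;
using UnityEngine;

// Minimal sketch: replay per-frame A2F blendshape weights on a CC head mesh.
// Assumes the weights were already loaded into poseNames/frames from the
// A2F export; all names and the mapping strategy here are illustrative.
public class A2FBlendshapePlayer : MonoBehaviour
{
    public SkinnedMeshRenderer face;   // the CC head mesh
    public float fps = 30f;            // playback rate of the A2F export

    // poseNames[i] is the A2F pose driving weight column i of each frame.
    public string[] poseNames;
    // frames[f][i] is the weight (0..1) of pose i at frame f.
    public List<float[]> frames = new List<float[]>();

    private int[] shapeIndex;          // CC blendshape index per A2F pose
    private float time;

    void Start()
    {
        Mesh mesh = face.sharedMesh;
        shapeIndex = new int[poseNames.Length];
        for (int i = 0; i < poseNames.Length; i++)
        {
            // Hypothetical one-to-one name match; replace with your own
            // A2F-to-CC remapping table, since the naming conventions differ.
            shapeIndex[i] = mesh.GetBlendShapeIndex(poseNames[i]);
        }
    }

    void Update()
    {
        if (frames.Count == 0) return;
        time += Time.deltaTime;
        int f = Mathf.Min((int)(time * fps), frames.Count - 1);
        float[] w = frames[f];
        for (int i = 0; i < w.Length; i++)
        {
            if (shapeIndex[i] >= 0)
                face.SetBlendShapeWeight(shapeIndex[i], w[i] * 100f); // Unity expects 0..100
        }
    }
}

Note that this only replays the raw A2F curves; it will not reproduce the extra animation data IC adds on export, which the post above identifies as crucial to the final quality.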

Any help would be greatly appreciated!

By wilg - Last Year
I would also very much like to use the files directly. I started working on an importer for these files, but I can't really figure out what is wrong with the animation data. I don't know whether I'm misunderstanding something about the format (which is undocumented) or something else.

Here's what I have, if anybody has ideas I would appreciate it!

https://gist.github.com/wilg/ede3d319b9815f2978624978d4e14e89
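In case it helps with the reverse-engineering: here is a speculative loader for the JSON weight file that A2F writes. The field names below (exportFps, facsNames, weightMat) match what recent A2F versions appear to emit, but since the format is undocumented, verify them against your own files. It uses Json.NET (the com.unity.nuget.newtonsoft-json package), because Unity's built-in JsonUtility cannot deserialize the nested weight array.

using System.IO;
using Newtonsoft.Json;

// Speculative schema for A2F's blendshape-weight JSON export.
// Field names are assumptions based on inspected files; adjust to match
// what your A2F version actually writes.
public class A2FWeightFile
{
    public float exportFps;
    public string[] facsNames;   // one pose name per weight column
    public float[][] weightMat;  // [frame][pose] weights in 0..1
}

public static class A2FWeightLoader
{
    public static A2FWeightFile Load(string path)
    {
        return JsonConvert.DeserializeObject<A2FWeightFile>(File.ReadAllText(path));
    }
}

// Example wiring into the player sketched earlier in the thread:
//   var data = A2FWeightLoader.Load(path);
//   player.fps = data.exportFps;
//   player.poseNames = data.facsNames;
//   player.frames = new System.Collections.Generic.List<float[]>(data.weightMat);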