info_660836 (3/12/2024)
Hi,
Has anyone tested both?
Which would be the more efficient?
Regards,
daniel
Just my opinion, but the ideal facial animation solution is one that gives you a base layer of just phoneme-accurate mouthing and eye blinks, while your animation program has really good manual face controls so you can add final, nondestructive layers of polish, with expressions added based on the context of the dialogue.
NVIDIA Audio2Face is more like a super-advanced version of other audio-based lip-syncing options like Daz Mimic, but it still can't really predict emotional context.
I used the basic Daz Studio (audio-based) Mimic to create this mouth lip sync and imported the animated shape key data into Blender with a free Daz plugin called Diffeomorphic, even though his body has been re-rigged with Auto-Rig Pro.
But Diffeo has a complex facial shape key animation interface where you can "dial in" emotional expressions on a separate layer, at any point, on top of the basic (audio-based) lip sync.
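The layering idea here boils down to simple additive blending of shape key values: the lip-sync pass stays untouched, and each expression layer only contributes offsets. A minimal sketch in plain Python (the function and dictionary names are illustrative, not Diffeomorphic's actual API):

```python
# Minimal sketch of nondestructive layering for facial shape keys.
# All names are illustrative; this is not Diffeomorphic's API.

def blend_layers(base_lipsync, expression_layers):
    """Additively combine a base lip-sync pass with expression layers.

    base_lipsync: dict mapping shape-key name -> weight for one frame
    expression_layers: list of dicts of per-shape-key offsets
    Results are clamped to the usual 0.0-1.0 shape-key range.
    """
    result = dict(base_lipsync)  # the base pass is never modified
    for layer in expression_layers:
        for key, offset in layer.items():
            result[key] = result.get(key, 0.0) + offset
    # Clamp so polish layers can never push a shape key out of range
    return {k: max(0.0, min(1.0, v)) for k, v in result.items()}

# Example: one lip-sync frame plus a "concern" expression layer
lipsync = {"jawOpen": 0.5, "mouthClose": 0.25}
concern = {"browInnerUp": 0.5, "mouthFrownLeft": 0.25, "jawOpen": 0.25}
print(blend_layers(lipsync, [concern]))
```

Because the base dictionary is copied rather than edited, you can re-run the blend with different expression layers at any time without destroying the underlying lip sync, which is the whole point of the layered workflow.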
Diffeo also supports UE5 "Live Link" CSV data for Daz Genesis 8.1-9 figures in Blender.
I grabbed a random video online of some Australian guy talking, ran it through the free "Face Landmarker" app, and exported the CSV into Blender directly onto a Genesis 8.1 male, since Genesis 8.1-9 has a FACS rig with the 52 ARKit shapes built in.
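For anyone curious what that CSV-to-shape-key step looks like under the hood, here is a hedged sketch in Python. It assumes a Live Link Face-style CSV where the first column is a timecode and the remaining columns are per-frame ARKit blendshape weights; real exports may carry extra metadata columns (e.g. a blendshape count) you would also need to skip, and the Blender object name shown in the comments is an assumption about your rig:

```python
# Hedged sketch: read a Live Link Face-style CSV so the ARKit
# blendshape curves can be keyframed onto matching shape keys.
# Column layout and object names are assumptions about your export.
import csv

def load_livelink_csv(path):
    """Return (blendshape_names, frames) from a Live Link-style CSV.

    Assumes column 0 is a timecode and every remaining column is an
    ARKit blendshape weight in the 0.0-1.0 range, one row per frame.
    """
    with open(path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        names = header[1:]  # skip the timecode column
        frames = [[float(v) for v in row[1:]] for row in reader if row]
    return names, frames

# Inside Blender you would then keyframe each matching shape key,
# along these lines (object name "Genesis8_1Male" is hypothetical):
# import bpy
# keys = bpy.data.objects["Genesis8_1Male"].data.shape_keys.key_blocks
# for frame_i, values in enumerate(frames, start=1):
#     for name, value in zip(names, values):
#         if name in keys:
#             keys[name].value = value
#             keys[name].keyframe_insert("value", frame=frame_i)
```

Because Genesis 8.1-9 already exposes the 52 ARKit FACS shapes, the column names line up with the shape key names and no retargeting table is needed, which is what makes the direct CSV import workflow so painless.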
Obviously it's a lot different from the exaggerated "Pixar" style, but the dialogue has a more serious context, so no one solution will work for every scenario, IMHO.
AccuFace is too expensive.
At $500 it is literally only $100 short of a second seat of iClone.
But it seems to be developed completely in-house, so Reallusion has to recoup its development costs somehow, and those who prefer to stay within the iClone ecosystem may find value in it, particularly since it does not require iPhone depth-camera hardware for face capture.