kavise100
Posted Last Year
Group: Forum Members
Last Active: Last Year
Posts: 44, Visits: 257
Hello,
I have iClone 8 and I'm thinking of getting AccuFACE. I'm assuming you can add an audio track to a project, then listen to it play while you record a facial performance using AccuLips and AccuFACE. Is this correct?
I'm asking because the tutorials I've watched skip over how you would import and listen to an audio track while you do a facial performance. They tend to focus on the video importing method or on recording audio, not on how to work with pre-recorded audio.
Thanks much for any info.
Peter (RL)
Posted Last Year
Group: Administrators
Last Active: Last Year
Posts: 23.1K, Visits: 36.6K
Hi...
While you can manually add audio, one of the big benefits of AccuFACE is that it allows single-pass audio sync. Unlike most facial mocap solutions, which require a separate audio alignment step to fully synchronize the voice and the performance, AccuFACE captures both the audio data and the facial animation in a single pass. This applies to both live webcam and pre-recorded video.
Regardless of the source media frame rate, the AccuFACE engine will generate consistent audio-synced animations according to the source frame rate.
Peter
Forum Administrator
www.reallusion.com
Kelleytoons
Posted Last Year
Group: Forum Members
Last Active: Last Year
Posts: 9.2K, Visits: 22.1K
kavise100 (5/14/2024)
Hello,
I have iClone 8 and I'm thinking of getting AccuFACE. I'm assuming you can add an audio track to a project, then listen to it play while you record a facial performance using AccuLips and AccuFACE. Is this correct?
I'm asking because the tutorials I've watched skip over how you would import and listen to an audio track while you do a facial performance. They tend to focus on the video importing method or on recording audio, not on how to work with pre-recorded audio.
Thanks much for any info.

I'm WAY too late for this party, but just in case you subscribed: yes, you can do EXACTLY what you want here, and in fact one of my workflows is to use the recorded audio and add facial expressions to it.

The visemes (the lip movements for "talking") and the expression track are two separate things that can overlap. In other words, you can "smile" with your mouth and still have the visemes generated correctly, so it looks accurate, as if someone is smiling while they talk. You can do this in either order. I prefer to lip sync first with AccuLips and then add the expression pass on top, but you can do it in the reverse order as well.

When you record the expressions, you can either "lip sync" (that is, speak the same words) or just make expressions and not worry about the words, because the visemes will be generated by AccuLips (I prefer this method, although sometimes I also lip sync).

Any of the facial animation plugins will work this way. I prefer Live Face, but AccuFACE will work as well. The advantage of AccuFACE is that it's asymmetrical, whereas Live Face can't do expressions like that. The disadvantage is that AccuFACE is a bit "twitchier", and I've yet to find a good way to smooth it out.
Alienware Aurora R16, Win 11, i9-14900KF 3.20GHz CPU, 64GB RAM, RTX 4090 (24GB), Samsung 870 Pro 8TB, Gen3 NVMe M.2 SSD 4TB x2, 39" Alienware Widescreen Monitor
Mike "ex-genius" Kelley