
Synchronizing Pre-recorded Body Mocap, Facial Mocap + Audio

Posted By Rates 5 Years Ago
Rates
Posted 5 Years Ago
Junior Member (249 reputation)

Group: Forum Members
Last Active: 3 Years Ago
Posts: 6, Visits: 47
Hello,

Could someone give a high-level overview of the workflow for bringing pre-recorded (i.e. not live) body mocap, facial mocap, and audio into iClone, and then applying that data to a Character Creator character?

I have a body mocap performance recorded with OptiTrack (.fbx with the data applied to the OptiTrack skeleton), a facial performance recorded with a Faceware head-mounted camera system (image sequences, though the Faceware tracking data is also available), and audio (.wav files) recorded to a separate device. All of this data has matching timecode. I want to retarget this data to a Character Creator character and have all of it synchronized correctly. I will then export this character + animation to Unreal Engine. Is this workflow possible with Reallusion products?

Thanks!
Kelleytoons
Posted 5 Years Ago
Distinguished Member (35.6K reputation)

Group: Forum Members
Last Active: Yesterday
Posts: 9.1K, Visits: 21.8K
Well, the high-level view is that you just do it -- it's more or less how I do all my animation with iClone. I record the body and facial mocap separately (body mocap with the Perception Neuron, facial mocap inside iClone with either Faceware or LIVE FACE), bring them into iClone, then export frame by frame to my video editor, where I bring in the audio for the scene.

Other than the audio, it's pretty easy to eyeball the body and facial mocap so it looks right. With the body mocap I capture a T-pose at the start of the click track, which I can then use to line things up (and I end up cutting this pre-roll out in my video editor). In the editor it's a bit trickier to get the sync right with the facial mocap, but because I captured that track inside iClone I can sync to the tongue movements. Without that you can also eyeball it (just as we used to do for all lip-sync -- the eye is VERY good at getting sync right).
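As a side note on the arithmetic: if the click track runs at a known tempo, the length of the pre-roll to trim can be computed rather than eyeballed. A minimal sketch -- the function name and parameters are illustrative, not part of iClone or any editor:

```python
def preroll_frames(bpm: float, clicks_before_action: int, fps: int) -> int:
    """Frames of pre-roll to trim: the click track runs at `bpm`,
    and the action (the T-pose sync marker) starts after
    `clicks_before_action` clicks."""
    seconds = clicks_before_action * 60.0 / bpm
    return round(seconds * fps)

# 8 clicks at 120 BPM is 4 seconds of pre-roll; at 60 fps that is 240 frames.
print(preroll_frames(120, 8, 60))
```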

So that's a high-level view, but I expect what you really want is a LOW-level view of exactly how to bring that data through. For that you'd need 3DXchange to bring in the FBX body capture (and convert it to rlMotion files). I'm not sure how you'd use the Faceware data if you don't have Faceware for iClone, but perhaps others can comment on that.



Alienware Aurora R12, Win 10, i9-11900KF, 3.5GHz CPU, 128GB RAM, RTX 3090 (24GB), Samsung 960 Pro 4TB M-2 SSD, TB+ Disk space
Mike "ex-genius" Kelley
Rampa
Posted 5 Years Ago
Distinguished Member (35.8K reputation)

Group: Forum Members
Last Active: 21 minutes ago
Posts: 8.1K, Visits: 60.5K
Do you have the full Faceware, or Faceware Realtime for iClone?

If you have Realtime for iClone, you could just recapture from the image sequences.
If you have the full Faceware package and want to stream the tracking data itself, I have an inquiry in with the devs about whether that can be done. It would also require Realtime for iClone, if it is possible at all.
Rates
Posted 5 Years Ago
Junior Member (249 reputation)

Group: Forum Members
Last Active: 3 Years Ago
Posts: 6, Visits: 47
Thanks -- we have the full Faceware package and are looking into investing in the Reallusion toolset. I should have mentioned that the facial and body performances + audio are recorded simultaneously with matching timecode. What makes me hesitant about the Reallusion workflow is that the pre-recorded facial data apparently has to be streamed into iClone as an image sequence, with no obvious way of syncing it with the body and audio other than matching by eye. There doesn't even seem to be a way to mark the point in the facial stream where the first frame of the .png image sequence started in Faceware Live; that at least would allow a quick lineup.

What we want is a workflow that lets us record facial, body, and sound on the stage simultaneously, then later apply that data, synchronized, to Character Creator characters. What seems to be missing from the Reallusion workflow is either, ideally, A) timecode support across all three of body, facial, and audio, or at least B) a way to mark the first frame of an image sequence when it is streamed into iClone with Faceware Live.
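For what it's worth, since all three recordings share timecode, the lineup itself is simple arithmetic: convert each track's starting timecode to an absolute frame count and trim every track to the latest start. A rough sketch of that calculation -- the function names are mine, not part of any Reallusion tool, and it assumes non-drop-frame timecode at a single frame rate:

```python
def tc_to_frames(tc: str, fps: int) -> int:
    """Convert a non-drop-frame SMPTE timecode 'HH:MM:SS:FF'
    to an absolute frame count."""
    h, m, s, f = (int(part) for part in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

def align_offsets(track_starts: dict, fps: int) -> dict:
    """For each track, the number of frames to skip from its own start
    so that all tracks begin at the latest-starting track's first frame."""
    frames = {name: tc_to_frames(tc, fps) for name, tc in track_starts.items()}
    sync_point = max(frames.values())
    return {name: sync_point - start for name, start in frames.items()}

# Example: facial capture started 12 frames after the body rig,
# audio started two seconds earlier than both.
offsets = align_offsets(
    {"body": "01:00:00:00", "face": "01:00:00:12", "audio": "00:59:58:00"},
    fps=60,
)
print(offsets)  # {'body': 12, 'face': 0, 'audio': 132}
```

Trimming each source by its offset (in its own tool) would give a common first frame without any eyeballing.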


