We were playing around last night and realized I really have no clue hahaha
Traditionally we use a slate or clapper to sync the sound and video when shooting a film. In this mocap process with iClone, where the audio, body, and facial capture are done as separate passes, what techniques are people using to sync everything back up?
Should one follow a traditional animation approach, where the audio is recorded first so the acting can be animated to it?
Should the audio be captured at the same time as the mocap? If so, how do you guys keep it in sync? Does iClone have a real-time audio interface that can be used during the mocap?
I'd be interested in hearing about your experiences and how you've solved some of these issues.
So far what I've done is revert to a basic clap sound and gesture combo, where the sound of the clap can be synced to the motion of the character clapping. Pretty old school, but it works fine. Is there a more efficient approach?
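One way to make that clap trick a bit less manual: since the clap shows up as an obvious spike in the waveform, you can find its exact time programmatically instead of eyeballing it in an editor. Here's a minimal Python sketch using only the standard library; the function names, the 16-bit PCM assumption, and the 30 fps default are my own illustration, not anything built into iClone.

```python
# Sketch: locate the clap transient in a WAV file so the audio track
# can be offset to the frame where the character's hands meet.
# Assumes 16-bit PCM audio; fps default is an assumption (iClone
# projects are commonly 30 or 60 fps).
import wave
import struct

def find_clap_time(wav_path, threshold_ratio=0.8):
    """Return the time in seconds of the first sample whose amplitude
    reaches threshold_ratio * peak amplitude (i.e. the clap spike)."""
    with wave.open(wav_path, "rb") as wf:
        rate = wf.getframerate()
        channels = wf.getnchannels()
        width = wf.getsampwidth()
        raw = wf.readframes(wf.getnframes())
    if width != 2:
        raise ValueError("this sketch assumes 16-bit PCM audio")
    samples = struct.unpack("<%dh" % (len(raw) // 2), raw)
    samples = samples[::channels]  # keep channel 0 if stereo
    peak = max(abs(s) for s in samples)
    threshold = peak * threshold_ratio
    for i, s in enumerate(samples):
        if abs(s) >= threshold:
            return i / rate
    return 0.0

def clap_frame(seconds, fps=30):
    """Convert the clap time to a timeline frame number."""
    return round(seconds * fps)
```

You'd then scrub to the frame where the character's hands visually meet, and nudge the audio so `clap_frame(find_clap_time("take01.wav"))` lands on that frame. Same slate idea, just with the audio half measured for you.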

Ibis Fernandez | (available for hire)
----------------------------------------------------------------------------------------
Professional Animator, Filmmaker | Creator of the highest quality (modular) G2 rigs for cartoon animator and developer of Toon Titan and Puppet Producer
Author of Flash Animation and Cartooning: A Creative Guide
>>> be sure to check out http://toontitan.com for professional grade assets, templates, and custom tools for Cartoon Animator and more.