I would like to suggest that a facial animation workflow be included in CTA 3. With most webcam apps now able to recognize facial features and gestures, and many modern video programs making use of such features, I believe that if CTA 3 is to be at the very least on par with the rest of the industry, it really needs a facial recognition feature that lets people control their characters' faces and gestures using a webcam. It's only a matter of time before the competitors begin to work with multidimensional figures; After Effects already has a decent facial capture system, and if we're going to keep moving up, I think an upgrade to the facial features should probably be among the highest priorities.
What I envision is a system similar to the auto script function but combined with the facial puppeteer panel, whereby the actor can speak into a camera (or a prerecorded video of the actor's face can be used, such as footage from a GoPro). The panel would then analyze the camera or video footage and match the facial gestures and phonemes.
This could probably also be done as an upgrade to CrazyTalk, with the resulting scripts made compatible with CTA3. But since CrazyTalk has just recently been released, I don't think this is something that could feasibly be done within a minor CT revision in time for CTA3. So I think the best approach is probably to build it directly into CTA3 itself rather than leaving it for CTA4. I mean, it's unavoidable; RL is going to have to do this eventually, so it might as well do it now while it is still ahead of the curve in terms of character puppeteering.
I'm sure you all have seen what Adobe is doing, but here's a link. Imagine what could be done when something similar is combined with the power of CTA's body movement features: http://blogs.adobe.com/creativecloud/character-animator-preview-4-makes-animation-easier/