While exporting morphs could greatly help many devs fast-forward the development of their game project(s), I have to point out that there are also a TON of possible ways of handling morphs, which, in most cases, also means a morph can't just be "made and done" universally. Because of that, the current "best" way of handling morphs is to use CC3 (or iClone) as a base, then modify the pre-result through 3D software like Blender, Maya or 3ds Max (yeah yeah, I know).
For example, you don't morph the whole body in any game engine, as that's simply a huge waste of CPU on the per-vertex Vector3 calculations done prior to the rendering pipeline (which runs on the GPU). You usually use a mix of in-engine mesh manipulation (cloth, for example) with an armature-based rig (bones), with "some" parts done with a morph (the face). Obviously, there are exceptions, but that's the general way of handling it. You usually try to minimize the morphed area to as few vertices as possible, because the cost of a morphed vertex is at least 3x bigger than that of a skinned vertex (make it 4x if it's a skinned + morphed vertex). Bones are still heavily used for anything "repetitive" and general.
To make it work, you have to think of things as tools and steps and not just whole solutions. Having some knowledge of working with mesh data in the game engine is also a HUGE plus.
In my case, I use CC3 initially to build 4 versions of a character (per gender) from 1 single base (same vertex count and vertex IDs), rig only one of those (no morph yet) in Blender, then export the 3 non-rigged + 1 rigged into Unity.
In Unity, I generate each character as a brand new skinned mesh by generating the same vertex data (vertex positions, triangle order, UVs, etc.) as the 4 variants, but interpolating the values based on a Vector2(x, y) where x is the mix between variants A and B and y is the mix between C and D. Then I apply the same bone weights as registered on the rigged variant to each vertex by its vertex ID. (Remember, each of the 4 variants used the same base, meaning they have the same vertices in the same order as well as the same UVs, just placed differently in their vertex positions.)
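The blending step above is just math on vertex arrays, so here's a minimal language-agnostic sketch in Python (the function name `blend_vertices` is mine, and I'm assuming the Vector2 acts as a bilinear mix: x inside each pair, then y between the two pair results):

```python
def blend_vertices(a, b, c, d, x, y):
    """Bilinearly mix four variant meshes that share vertex count/order.

    a, b, c, d: lists of (px, py, pz) vertex positions with identical
    length and vertex IDs (all four variants come from one CC3 base).
    x: 0..1 mix between A and B (and, symmetrically, between C and D).
    y: 0..1 mix between the A/B result and the C/D result.
    """
    out = []
    for va, vb, vc, vd in zip(a, b, c, d):
        # interpolate along x inside each pair...
        ab = tuple(pa + (pb - pa) * x for pa, pb in zip(va, vb))
        cd = tuple(pc + (pd - pc) * x for pc, pd in zip(vc, vd))
        # ...then along y between the two pair results
        out.append(tuple(p1 + (p2 - p1) * y for p1, p2 in zip(ab, cd)))
    return out

# Bone weights are NOT blended: they are copied verbatim from the one
# rigged variant, matched by vertex ID (same base mesh = same IDs).
```

In Unity itself you'd do the equivalent over `Mesh.vertices` and assign the copied `boneWeights`, but the interpolation itself is exactly this.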
This allows a quite simple flow and saves you a LOT on performance when it comes to creating characters, as you don't really have a skinned + morphed mesh, just a skinned mesh generated with the morph baked in once via script.
What about morph movements then, like some lip-sync? You can use an old-fashioned bone-based mouth animation or... generate your own blend shapes!
With Unity, you can add new blend shapes:
Link to the Unity documentation about adding blend shapes.
To add a blend shape, use the code shown in the link above twice with the same (yet unique) string name (called shapeName in the linked declaration).
The weight is basically where the blend-shape frame sits on a 0-100% scale, so yes, you can add multiple "shapes" in line. (They have to be added in the proper order, though: you can't add a frame at 25% if you already added one at 50%.)
You can literally take each of the "blendshape" sources (which are regular meshes) and feed their data, as deltas relative to the base mesh, to the AddBlendShapeFrame() call.
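One gotcha worth spelling out: AddBlendShapeFrame() expects per-vertex deltas (offsets from the base mesh), not the shape's absolute positions. A quick Python sketch of building those deltas from a shape-source mesh (the function name `frame_deltas` is mine):

```python
def frame_deltas(base_verts, shape_verts):
    """Per-vertex deltas for one blend-shape frame.

    base_verts / shape_verts: lists of (x, y, z) positions with identical
    vertex count and order. Unity's Mesh.AddBlendShapeFrame() takes these
    position deltas (plus optional normal/tangent deltas), not the raw
    shape positions.
    """
    return [tuple(s - b for s, b in zip(sv, bv))
            for bv, sv in zip(base_verts, shape_verts)]

# Two frames under ONE shape name, added in increasing weight order,
# e.g. ("MouthOpen", 50.0, half-open deltas) then ("MouthOpen", 100.0,
# fully-open deltas) to get an in-between shape along the way.
```

A vertex that doesn't move in the shape simply gets a (0, 0, 0) delta, which is why keeping the morphed region small matters.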
I know I make it sound simple, but it's not necessarily that simple. To add proper blend shapes through script on a pre-morphed, now-regular-skinned mesh, you have to take the morph into account: you have to generate the same shape data (like the mouth animation) for each of the variants, then, as you add the blend shape, multiply the values by each variant the same way as in the regular non-blend-shaped flow explained above. On the other hand, this method can be applied only to the specific characters who need it, avoiding the waste on characters that don't need a proper morph.
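In other words, the per-variant blend-shape deltas get the same bilinear treatment the vertex positions did. A hedged sketch (my own function name, assuming the same Vector2(x, y) the character was generated with):

```python
def blend_shape_deltas(da, db, dc, dd, x, y):
    """Mix four per-variant delta sets for one blend-shape frame.

    da..dd: lists of (dx, dy, dz) deltas (e.g. a mouth-open shape
    authored once per variant), all in matching vertex order.
    x, y: the same 0..1 mix used when the character mesh was generated,
    so the resulting frame lines up with the pre-morphed vertices.
    """
    out = []
    for va, vb, vc, vd in zip(da, db, dc, dd):
        ab = tuple(p + (q - p) * x for p, q in zip(va, vb))
        cd = tuple(p + (q - p) * x for p, q in zip(vc, vd))
        out.append(tuple(p + (q - p) * y for p, q in zip(ab, cd)))
    return out
```

The output of this is what you'd then hand to AddBlendShapeFrame() for that character's generated mesh.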
In my explanation above about having 4 variants, I usually use A and B as the age variance of a skinny version, while C and D are the muscular versions of the same age variance.