I understand that LiveFace uses the iPhone (with ARKit), while Faceware uses a webcam/camera with its proprietary plugin.
Does the iClone version of Faceware also use blend shapes, similar to LiveFace? If so, what additional benefits does Faceware provide compared to the new LiveFace? I know that the standalone version of Faceware uses its proprietary software to track points on the face and retarget the mocap to a 3D animation rig, but the iClone version appears to use blend shapes in the end. Can you achieve more accurate facial expressions with Faceware, or will the new LiveFace achieve similar or even better results?