For example, if you look at the API docs at https://wiki.reallusion.com/IC_Python_API:RLPy_RIBodyDevice, how is the t_pose_data mapped?

Example:

bone_list = [hips, rightupleg, rightleg]
t_pose_data = [0.0, 105.85, 0.0, 0, 0, 0, -11.5]

Do the first 3 numbers correspond to the hips' x, y, z position? And how should frame1_data = [-0.05, 106.89, -3.65, -6.91, 173.25, -1.78] be read?

If I have a CSV file with data in the format below:

Body.translateX | Body.translateY | Body.translateZ | Body.rotateX | Body.rotateY | Body.rotateZ | Hips.translateX | Hips.translateY | Hips.translateZ | Hips.rotateX | Hips.rotateY | Hips.rotateZ |
-15.337641 | 103.876595 | 7.788717 | 0 | 0 | 0 | -15.938843 | 100.31266 | 8.859345 | -0.310804 | -2.850941 | 104.551296 |
-15.336495 | 103.875458 | 7.831433 | 0 | 0 | 0 | -15.934213 | 100.309464 | 8.910808 | -0.317408 | -2.851971 | 104.591611 |
-15.337191 | 103.874374 | 7.869067 | 0 | 0 | 0 | -15.931573 | 100.306335 | 8.957129 | -0.32407 | -2.850415 | 104.629859 |
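On the parsing side, here is a sketch of what I have so far. The column names come from my CSV file, not from the RLPy API, and the grouping into six floats (tx, ty, tz, rx, ry, rz) per bone is my assumption about the layout:

```python
# Parse one pipe-delimited motion row into per-bone transform lists.
# Column names (Body.translateX, Hips.rotateZ, ...) come from my CSV;
# the [tx, ty, tz, rx, ry, rz] grouping per bone is my assumption.

header = ("Body.translateX | Body.translateY | Body.translateZ | "
          "Body.rotateX | Body.rotateY | Body.rotateZ | "
          "Hips.translateX | Hips.translateY | Hips.translateZ | "
          "Hips.rotateX | Hips.rotateY | Hips.rotateZ |")

row = ("-15.337641 | 103.876595 | 7.788717 | 0 | 0 | 0 | "
       "-15.938843 | 100.31266 | 8.859345 | -0.310804 | -2.850941 | 104.551296 |")

columns = [c.strip() for c in header.split("|") if c.strip()]
values = [float(v) for v in row.split("|") if v.strip()]

# Group the flat row into 6 floats per bone, keyed by the bone name
# before the "." in the column header.
frame = {}
for i in range(0, len(columns), 6):
    bone = columns[i].split(".")[0]      # "Body", "Hips", ...
    frame[bone] = values[i:i + 6]

print(frame["Hips"])
# [-15.938843, 100.31266, 8.859345, -0.310804, -2.850941, 104.551296]
```

I just don't know whether these six values per bone line up with what the device expects for t_pose_data and the per-frame data.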
How do I map this?

Can you also explain the facial format? https://wiki.reallusion.com/IC_Python_API:RLPy_RIFacialDevice

How do I interpret this?

head_data1 = [0.3, 0.4, 0.5]
left_eye_data1 = [0.4, 0.5]
right_eye_data1 = [0.4, 0.5]
bone_data1 = [0.3, 0.4, 0.5, 0.3, 0.4, 0.4, 0.3, 0.4, 0.5, 0.3, 0.4, 0.4]
morph_data1 = [0]*60
custom_data1 = [0]*24

What is the bone_data? What is the morph_data? From what I can find, the face is strictly driven by morphs instead of facial bones; is this correct?

I have a file with 486 3D facial keypoints predicted per frame, like the sample below. keypoints (landmarks) is an array of x, y, z coordinates for points 1-486. Do I just map groups of these to a morph? For example, if I define keypoints 1-10 to be the left cheek group, can I then map that to a left cheek morph?
[
  {
    "faceInViewConfidence": 1,
    "frame": 1,
    "timestamp": 1594600292956,
    "keypoints": [
      [327.48779296875, 226.72177124023438, -10.546834945678711],
      [327.5000915527344, 218.25994873046875, -28.060033798217773]
    ]
  }
]
Thanks!