|
By Kelleytoons - 8 Years Ago
|
So I bought the iPhone X plugin (I could never get the demo to install, but even if I had, it wouldn't have been much use for evaluation, as you have to record *something* to test -- really weird of RL to make a timed, 30-day demo that doesn't do anything; either you should be able to record *something* or it shouldn't be timed). I kind of wish I hadn't, now.
Edit: I'm now on the fence whether this is better than Faceware or not. Perhaps I just need some more adjustments here, but I kind of doubt it. The one wild card to all this is I'm using Zane to test, and the sample tutorial RL posted uses a completely new character which APPEARS to be better detailed. I'm hoping this is CC3, and that's what we'll be getting and, if so, I'm hoping it means that the Faceshift thingee works better with it (but then again, shouldn't Faceware work better, too?) The sample recording in that tutorial sure looks better than what I could get with Zane, though.
What I find is that Faceware seems smoother, while Faceshift seems more squirrely. I actually had to make several adjustments to the sliders just to get it to look even *this* good (whereas, again, Faceware looks better out of the box with no adjustments). But I have to make a lot more tests to get some conclusive results.
First test here is with the defaults:
Next test is adjusting the jaw (which moves WAY too much without adjustments) and toning down the overall strength:
Last shows Faceware comparison using the same setup:
There's at least one advantage here -- I can use Faceshift with my "thick" glasses on (the ones I normally use on the computer) whereas Faceware won't work with those. The tradeoff is you can't use video with Faceshift, though.
|
|
By Kelleytoons - 8 Years Ago
|
Of course now, after looking at the three of these in a row, I do find I kind of like the "adjusted" Faceshift version a bit better.
It's a tough call because when I try and smile the Faceware definitely handles it better -- Faceshift moves the lips in an awkward position. However, when I'm not smiling I like the Faceshift better. Sigh -- perhaps I need to diddle with the settings for the smile itself, but what a PITA that would be for dozens of different characters.
Again, the wildcard in all of this is -- does the CC3 character have a different/better face? That's what we don't know, and that's a pretty significant piece of the puzzle. I don't think (and I'm SO ignorant on this point) that bringing in, say, a Daz 8 character will give us a better facial profile to work with, but perhaps. That's my next set of tests -- I'll convert a Daz character AND I'll use USB (all the tests I've done so far are with wi-fi -- although it DOES say it should operate smoother with USB I find that hard to believe given that the video tutorial RL posted was using wi-fi and it looked fine).
|
|
By Peter (RL) - 8 Years Ago
|
|
Thanks for sharing your findings KT. It's good to see a hands on comparison like this from someone who has both systems. :)
|
|
By Kelleytoons - 8 Years Ago
|
Here are a few more tests, with a Daz 8 figure brought in automagically through XChange into iClone.
In both tests the texture of the teeth (the too-whiteness of them) kind of throws off the ability to evaluate the mocap data, but one thing is clear to me -- Faceware works better "out of the box". Faceshift needs tweaking for the character. I do think the proper tweaking can result in VERY good data, but that's a bit of a PITA, and I'm hoping the answer is more of a general "This is a Daz 8 Male Preset" and "This is a CC3 Male Preset" versus "This is Oscar's Preset" and "This is Ted's Preset", because if it's the latter you could spend a LOT of time working on presets for the various avatars you create.
(And if anyone wants to jump in and help me out here with some ideas I'd love to share -- right now my basic procedure is to turn down the jaw left/right rotations from 30 to around 15, and reduce overall strength to 85 or so, but I need to pin down lip/jaw-up/down movements as well, and perhaps even smile, which tends to look creepy. You have to trust me, I do NOT smile and scare small children with it).
First Faceware:
Now Faceshift:
|
|
By Kelleytoons - 8 Years Ago
|
I'll also note -- all the Faceshift stuff is done with wi-fi. I can't get the USB connection to work (I thought I had it working at one point, but now I'm not so sure). RL "says" it will be smoother with USB but I kind of doubt this -- it appears the data is flowing VERY smoothly (and I'm practically sitting on top of my 5GHz wi-fi router as I do this stuff). Plus RL themselves demo it with wi-fi. If I could figure out how to get USB working it would be interesting to compare, but as anyone who has ever tried interfacing Apple with Windows knows, that's hardly an exact science (I was actually one of the first folks to ever get an iPod working with Windows -- Apple was so impressed I became a beta tester for them, and still am like a level 8 on their forums despite having done *nothing* more than iPod help for those years).
With wi-fi, however, it's almost so easy there is nothing to it (and another note: the RL docs are wrong in this regard. They imply you need to set up a hotspot for this, but you do not, and in point of fact it won't give you any advantage over just connecting normally. I first noticed this in the Faceshift demo where the guy blew by this). As long as your phone sees your network, iClone and Live will see the phone (assuming the PC is on the same network). That's about all there is to it.
The implication is that you might also be able to record data remotely (over the internet), and while that's intriguing (particularly for larger studios who might want to remotely capture talent) it really won't do me any good at all, so I don't intend to even try. I suspect that's why you would need to set up the hotspot, but even then I may be wrong (at my age this stuff makes my head hurt -- my wife was the network expert in our family, and she has since retired. I'm retired too, from programming, but at least I keep programming and she can't even hook up a network printer anymore :>).
|
|
By justaviking - 8 Years Ago
|
Re "First Faceware" at 0:20
You: "He he he he..." Me: LOL!!!
Loving the comparisons. Thanks. Keep them coming.
|
|
By Kelleytoons - 8 Years Ago
|
Okay, another test. I played around a little more with settings, although with Peter's confirmation that the iPhone tutorial they posted used the CC3 character (which is definitely superior) I'm not sure how much more playing and tailoring I'm going to try, as eventually (sigh -- if I live that long) I'll be using those characters exclusively.
In any case, this is mostly important because I connected via USB. It was a PITA and I'm not even sure I can do it again (Apple and Windows just don't like each other :>) but for the purposes of these tests I'm not seeing any differences. If you guys do, let me know, but as far as I'm concerned I think wi-fi works just as well (and it's easier and more reliable). I'm guessing that IF you have spotty wi-fi or are trying to connect remotely there might be an advantage, but given that the amount of data being transmitted is *very* small (it's only sending differences) I can't see how a direct connection could be any better. However, I will be convinced if someone sees a difference.
If I can get my wife in the right mood (she's been at work all day and while she's pretty accommodating she also gets tired) I'll have her do a couple of tests as well, just so you can see what it looks like from a feminine standpoint. Plus it might be easier for me to tailor the model. I do know that my results trying with Faceware and her were very disappointing (for some reason she didn't capture at all well -- I don't even have any theories as to why that's so):
|
|
By Kelleytoons - 8 Years Ago
|
Okay, a couple of tests with my cooperating wife (who still complained "I don't have a very expressive FACE" says the former actress. Sigh. You wonder why we directors have gray hair).
First a rather generic model, that I adjusted just *slightly* from the male adjustments I was using (mostly to close the lips):
Then a much better model (better in that it actually looks like my wife -- I think that really does help with facial capture). So my Irish wife doing her silly elf impersonation.
While these are okay, I do think the CC3 heads will be better -- in particular the lips for both aren't as detailed as I think they could be (and I'm hoping the CC3 ones are better). Lips really mean a lot more when it comes to female characters.
Edit: Shoot -- forgot to record audio for the elf. Damn. We'll do ONE more (because she's so damn cute as an elf).
|
|
By Jfrog - 8 Years Ago
|
|
Thanks for taking the time to make those tests and sharing them, Kelleytoons. It's great to see real life comparisons between the two systems.
|
|
By sonic7 - 8 Years Ago
|
Great to see these tests Mike! (just discovered this thread). I like the 'overall' look of *faceshift* - it seems more 'natural' to me, though it 'misses' (I feel) in phoneme *accuracy*. (And I see what you mean re: the 'smiling'). FaceWare's lips respond *more accurately* to me, but the overall 'volume' (cavity) of the mouth seems excessive - but maybe adjustable? (And the lips don't *make contact* as much?). I don't have a *reference* point (ie: actor and puppet side by side), to *really* know.
But Mike - I'm somewhat *confused* regarding the 'setup' requirements (especially RL's). As an owner of *both* systems, can you give us a breakdown of what software / hardware is required to get into each of these setups (with a price comparison)? I think that would be most helpful and *interesting* to see. (If you have the time for such, it would be appreciated). thnx in advance ..... :)
|
|
By Rampa - 8 Years Ago
|
|
I'm loving seeing these! So glad you are sharing them. Both systems seem pretty good.
|
|
By Kelleytoons - 8 Years Ago
|
|
sonic7 (8/15/2018)
Great to see these tests Mike! (Sorry, I only just discovered this thread). I like the 'overall' look of *faceshift* - it seems more 'natural' to me, though it 'misses' (I feel) in phoneme *accuracy*. (And I see what you mean re: the 'smiling'). FaceWare's lips respond *more accurately* to me, but the overall 'volume' (cavity) of the mouth seems excessive - but maybe adjustable? (And the lips don't *make contact* as much?). I don't have a *reference* point (ie: actor and puppet side by side), to *really* know.
But Mike - I'm somewhat *confused* regarding the 'setup' requirements (especially RL's). As an owner of *both* systems, can you give us a breakdown of what software / hardware is required to get into each of these setups (with a price comparison)? I think that would be most helpful and *interesting* to see. (If you have the time for such, it would be appreciated). thnx in advance ..... :)
I think there are enough parameters to adjust for the facial capture that you can get close to exactly what you want from either system, but I do think that Faceshift has an edge on the more "subtle" movements. I'm not going to play around more with adjustments because, as I said, I am almost certainly going to use CC3 exclusively for my content when it's released, and I think adjustments made now for the CC1 model won't necessarily be correct then.
This is pricey stuff, though, so for anyone thinking about facial mocap you need to be fairly committed (or perhaps very lucky in what you already have -- more in a moment). I absolutely cannot do without facial mocap, as I can't live long enough to do what I can do with it.
If you had *nothing* in the way of hardware, the two systems are about a wash pricewise. Faceware can work with any webcam (it can even work with video -- I have a tutorial showing how to do this). But the software itself is expensive -- I *think* I got it at an introductory price of $900 but don't hold me to that. You can look it up here -- all you do is buy the plugin direct from RL (which I see right now is $990) and Faceware LIVE (which right now is $100 -- you don't HAVE to have Faceware Live, but you might as well buy it since it becomes the basis for doing a lot of real time stuff and is dirt cheap at the moment). Then you need either a webcam or, as I said, you can use video (I'm using a very good webcam, the Logitech 922 I do believe, and it cost around $160 or so, IIRC).
Faceshift software is a lot cheaper (again, you'll want LIVE but in this case you also need it as there isn't another way of invoking it). $249 right now (normally $400 but who knows -- that reduced price might always be there for us users). However, you also need the iPhone X. No other phone will do and while they *talk* about supporting, say, Android phones they also say the phone itself has to have a 3D and infrared camera, and it's unlikely we'll see that soon (ever?) in any other phone.
I happened to already have the iPhone X so this was a no-brainer, but they are $$$ phones (I bought mine outright because I hate cell plans -- I suspect you could subsidize it, though. Don't really know or care). I *think* I spent close to 1K but that was a year ago so I can't be sure. Looks like it does cost 1K nowadays, from a quick glance at their site. It is BY FAR the best phone I have ever owned (and I've owned every single iPhone they ever made. Well, almost -- I skipped some of the "C" versions). I am SO happy with it I can't tell you, but that's completely aside from its use as a facial mocap device. If anyone was even slightly on the fence about a new phone (and they weren't married to Android) I couldn't recommend this phone more highly.
But I think you can see that whatever you do you're talking about around $1400 for facial mocap from either of these systems. That's cheaper than good body mocap, but far more expensive than most plugins. Again, for me it was indispensable (though not necessarily having both systems -- if the price of the software had been much higher I think I would have skipped Faceshift, but I'm really glad I didn't, as I suspect it will be my go-to facial mocap for all but those times I need to process video from my remote talent).
|
|
By sonic7 - 8 Years Ago
|
Thanks for the clarification and price summary Mike - yeah it's an expensive exercise for the hobbyist - I'd love to have access to such facial animation software, but I guess I'll have to wait for a cheaper alternative and do things manually in the meantime ..... It's great to see what's possible though ..... (I really quite like the 1st one your wife did - again faceshift - I thought she had some nice *moments* in that take) .... :)
|
|
By argus1000 - 8 Years Ago
|
Nice to see all the details about Live Face and Faceware. I own Faceware. I do facial animation at a distance. My actors all live in other cities. All they need is a cheap webcam, then send me the videos by Internet. With Live Face (Faceshift), I'd personally have to have an iPhone X (I don't even use cell phones at all) and my actors would have to have an iPhone X too ($1000), I believe. That would limit my potentialities severely.
So it's Faceware for me, no question.
|
|
By Rockoloco666 - 8 Years Ago
|
The iPhone has USB?! Nah, you probably need 30 adapters to connect it :)
Thanks for sharing, KT
|
|
By Kelleytoons - 8 Years Ago
|
|
argus1000 (8/15/2018) Nice to see all the details about Live Face and Faceware. I own Faceware. I do facial animation at a distance. My actors all live in other cities. All they need is a cheap webcam, then send me the videos by Internet. With Live Face (Faceshift), I'd personally have to have an iPhone X (I don't even use cell phones at all) and my actors would have to have an iPhone X too ($1000), I believe. That would limit my potentialities severely. So it's Faceware for me, no question..
Actually, if you could just get your actors to someone who has an iPhone X you could capture remotely LIVE at a distance. I think. I haven't quite wrapped my head around the long-distance internet thing, although just for fun I might try an experiment to see sometime. So that might open up your potentialities greatly (I'm pretty sure you could find an iPhone X somewhere in every city you looked :>).
And (this again makes my head hurt) I *think* you could even do this for three or four folks at the same time. So imagine you have a group of actors all responding to each other (I'm not sure how they would all hear each other, except perhaps on another phone? Again, this makes my old brain hurt) in Real Time.
It's just interesting that we live in times like these.
|
|
By Kelleytoons - 8 Years Ago
|
|
raxel_67 (8/15/2018) The iPhone has USB?! Nah, you probably need 30 adapters to connect it :) Thanks for sharing, KT
LOL -- you're welcome. Yeah, Apple has switched everything over to the new "C" connector thingee, but it does connect via USB, even if you have to kind of jump through hoops to do it (on my machine at least I found I have to turn off wi-fi, and on the phone I have to set up the hotspot and disable wi-fi as well).
|
|
By Jfrog - 8 Years Ago
|
|
So imagine you have a group of actors all responding to each other (I'm not sure how they would all hear each other, except perhaps on another phone? Again, this makes my old brain hurt) in Real Time.
This is a good concept, but I wouldn't count on it unless all players (actors and directors) have a reliable and fast internet connection such as fiber optic, and they don't live far apart. I do many remote ADR (voice) recording sessions with studios around the world and it works well, but getting great quality audio and picture at the same time requires a lot of internet bandwidth. One other thing to consider is the delay involved between each location. When recording a session where the director is in London and the actor is here in Montreal, Canada, for example, the delay can be as much as a whole second because of the physical distance between the connections. This is physics, you can't change it. So I don't believe the actors could really interact without any serious delays unless they live close enough and they have a top quality fiber optic connection.
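For what it's worth, the raw propagation delay between those two cities can be sketched with a quick back-of-envelope calculation. The distance and fiber speed below are my own ballpark figures, not anything measured in the thread:

```python
# Back-of-envelope propagation delay between London and Montreal.
# Assumptions (mine): great-circle distance roughly 5,200 km, and
# signals in optical fiber travel at roughly 2/3 the speed of light.

SPEED_OF_LIGHT_KM_S = 299_792   # km/s in vacuum
FIBER_FACTOR = 2 / 3            # typical velocity factor for fiber
DISTANCE_KM = 5_200             # London <-> Montreal, approximate

one_way_ms = DISTANCE_KM / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR) * 1000
round_trip_ms = 2 * one_way_ms

print(f"one-way:    {one_way_ms:.0f} ms")    # ~26 ms
print(f"round trip: {round_trip_ms:.0f} ms") # ~52 ms
```

That is only the physical floor; routing hops, buffering, and audio/video encoding stack on top of it, which is how observed delays can climb toward the full second described above.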
|
|
By Kelleytoons - 8 Years Ago
|
I hear what you are saying, but I'm not so sure it would be that big a problem -- for one thing, Faceshift sends *very* little data over (just the difference data and it's trivial). Audio would be the big issue, but I've done remote audio setups with no problems at all.
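For a sense of just how little data that is, here is a rough estimate. The coefficient count, float size, and frame rate are my own assumptions for the sketch, not measured from the plugin:

```python
# Rough bandwidth estimate for streaming per-frame facial capture data.
# Assumptions (mine, not measured): ~50 blendshape coefficients per
# frame, stored as 4-byte floats, streamed at 60 frames per second.

COEFFICIENTS = 50       # approximate number of tracked face blendshapes
BYTES_PER_FLOAT = 4
FPS = 60

bytes_per_sec = COEFFICIENTS * BYTES_PER_FLOAT * FPS
print(f"{bytes_per_sec / 1024:.1f} KiB/s")  # ~11.7 KiB/s
```

Even with protocol overhead, that is a tiny fraction of what a single compressed audio stream needs, which is why the capture data itself is unlikely to be the bottleneck.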
But, as I said, it's not anything I'm going to do or even explore (it's more a thought experiment).
|
|
By jason.delatorre - 8 Years Ago
|
|
Not to interject in your conversation here (BTW, thank you for the great demonstration, Mike), but I wouldn't be overly concerned with audio, especially since I would expect you'd want to polish in something like Premiere, where the audio track could be adjusted to fit the visemes. Now, of course, that won't really help you for live events, but I wonder if some delay could be set up to allow for it to be closely matched.
|
|
By Kelleytoons - 8 Years Ago
|
I've actually taken the audio recorded direct in iClone and brought it into my audio programs (much higher end than Premiere, and even using such tools as Melodyne) and gotten good results. The audio records pretty well in iClone, as long as you use a decent microphone.
Now, you could easily also record at the same time and then audio match (Premiere has that tool built-in, and there are also plugins that can do that) but I like the ease of being able to get the .WAV file "in-house" so to speak. None of the examples here had any audio processing, of course, and they sound pretty bad out of the box.
|
|
By cheyennerobinson_45 - 8 Years Ago
|
Kelleytoons,
I apologize if this information is somewhere in the thread, but can you show a recording in which you have facial motion capture and also connected the audio for lip syncing? I am on the fence about purchasing this, and I have been reading your posts, which helps since, as you stated, Reallusion did not provide the ability to truly test the plugin. Currently I am trying to see if I should purchase this or continue to use Face Cap. Face Cap is an iPhone X app that is $10; you can capture your facial animation and then send it to yourself as an attachment. You can then retarget the blendshapes to your character in Maya. All my characters are created from Daz and imported into iClone strictly for animation, then exported out to Maya or Blender, as iClone's renderer is really bad.
|
|
By Kelleytoons - 8 Years Ago
|
|
cheyennerobinson_45 (9/11/2018) KelleyToons,
I apologize if this information is somewhere in the thread, but can you show a recording in which you have facial motion capture and also connected the audio for lip syncing? I am on the fence about purchasing this, and I have been reading your posts, which helps since, as you stated, Reallusion did not provide the ability to truly test the plugin. Currently I am trying to see if I should purchase this or continue to use Face Cap. Face Cap is an iPhone X app that is $10; you can capture your facial animation and then send it to yourself as an attachment. You can then retarget the blendshapes to your character in Maya. All my characters are created from Daz and imported into iClone strictly for animation, then exported out to Maya or Blender, as iClone's renderer is really bad.
I'm a little confused by what you are asking. The way both Faceshift and Faceware for iClone (they don't call the iPhone X plugin Faceshift, but I find it's easier to say, to differentiate it from Faceware) work for lip sync is that they capture the muscles of the face, and those movements of the lips sync up with the audio. At the same time you *can* capture the audio for the auto phoneme generation. These phonemes aren't used for the lips because they aren't nearly as accurate or as good as the facial mocap. What IS used are the phonemes for tongue movement (which isn't captured by either camera system -- my wife was really funny because she kept sticking her tongue out and looking at the avatar and saying "why isn't my tongue showing?" You had to be there).
So I'm not really sure what I'd show you in a video -- all my videos in this thread (starting at the beginning) have the audio captured for the tongue purpose. I'm not really so sure it's that helpful, but perhaps it is at times. As I said, iClone doesn't really do a good job at transferring audio into phonemes, not nearly as good as Papagayo (but PG also uses text to match up to it -- I was hoping the Python implementation in iClone would allow us to do things with this, but apparently it won't be very useful at all).
|
|
By cheyennerobinson_45 - 8 Years Ago
|
Sorry for any confusion, and I appreciate you taking the time to reply. I have read that the Faceshift plugin records the audio as you stated, but I have seen where you can go in and edit the visemes (I may have misspelled that) in order to create a more accurate representation. It briefly shows this in one of the Reallusion demos or tutorials. Here is the video where I saw the audio track. You can briefly see it under Viseme - Voice - Lips. You can fast-forward to 2:10 and pause the video. https://www.youtube.com/watch?v=89dW2LFR07I I wondered if you are able to go in and fix any visemes being used that are not correct. As you may know, face capture and lip sync are two different things. I would like to know if I have the ability to modify lip sync if needed with the Faceshift plugin.
I really hope this clarifies things but if not I do appreciate the info. :)
|
|
By Kelleytoons - 8 Years Ago
|
I may not be explaining this properly -- let me try it one more time.
The audio has NOTHING to do with lip sync. Lip sync is wholly generated by the recording of the movements of the lips and areas around the mouth. These muscle movements are what is captured and what produces the sync. You actually don't need to edit this at all -- Faceshift is VERY accurate and there is no need to try and improve it (you could not).
The audio that is recorded and the visemes generated are ONLY used for the tongue. Nothing more. I hope this explains it (you *could* edit those for the tongue movements, but I haven't found the tongue movements to be all that bad, nor really noticeable much).
|