
For those using voice actors or re-dubbing voices. Hope this helps you save time.

Posted By rgreenidge Last Year
Rated 5 stars based on 1 vote.


rgreenidge
Posted Last Year
Distinguished Member (2.1K reputation)

Group: Forum Members
Last Active: 2 days ago
Posts: 253, Visits: 1.6K
I've already recorded 5 other voices and have 13 more coming. What I learned this weekend changed my thinking, and I will use this method from now on. I save my recorded voices by the name of the vocalist, then by the scene chapters of the movie they are used in and their order.

The problem I had is that I speak more slowly than most voices, so when I added a new voice to the audio track of the completed movie, the voice would stop before the character's lips stopped moving. I first tried cutting each word and spreading the pieces out on the movie editor timeline. That doesn't always work, because some words are too close together to split. Next I tried stretching the sentence using audio time-stretching, without changing the pitch. It would fit great, but the voice sounded echoey and not as natural as the new vocal recording.

Now what? Instead of stretching the audio, I now split out the section of video where the person is talking and adjust that section to fit the length of the new audio clip, then mute the old audio track that I had used to drive the mouth movement. The shortened video section leaves a gap, so I select all the rest of the movie and slide it over until the gap in the timeline is closed. It works great: no more mouth movement after the new audio overdub stops. In most of these scenes my character is standing still and talking, so very little else is affected by the frames running faster or slower. Yes, the mouth moves faster or slower with the newer voice, but it starts and stops at the same time. On about 3 occasions the new voice was longer than mine, so I had to lengthen the video clip: I selected all the clips after the split I needed, dragged them back on the timeline to leave room to stretch the video section to the length of the longer new audio file, then slid the rest of the movie back together.
Bottom line: stretch or shrink the length of the video where the person is talking to match the new audio dub; do not try to stretch or shrink the audio file to match the video selection, because it can sound horrible. All of this is done in a movie editor, outside of iClone, after you finish your movie or all your clips separately. I was surprised I was only able to knock 2 minutes off the movie with the faster voices. So with 13 more voices, 7 minutes total? Good luck out there iCloning.
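The retiming in that bottom line reduces to one ratio: new audio length over old clip length. A minimal sketch of the arithmetic; the 6.0 s and 4.5 s durations below are made-up example numbers, not from this post:

```python
# Retime a talking-head video section so it ends exactly when the
# new voice-over ends, instead of time-stretching the audio.

def retime_factor(old_video_sec: float, new_audio_sec: float) -> float:
    """Length multiplier to apply to the video section.

    < 1.0 means the clip gets shorter/faster (shorter new voice),
    > 1.0 means it gets longer/slower (longer new voice).
    """
    if old_video_sec <= 0:
        raise ValueError("clip length must be positive")
    return new_audio_sec / old_video_sec

def timeline_slide_sec(old_video_sec: float, new_audio_sec: float) -> float:
    """Seconds the rest of the movie must slide to close the gap
    (negative means it must first slide back to make room)."""
    return old_video_sec - new_audio_sec

# Example: my read took 6.0 s, the actor's read takes 4.5 s.
factor = retime_factor(6.0, 4.5)       # 0.75 -> clip plays at 75% of its length
slide = timeline_slide_sec(6.0, 4.5)   # 1.5 s gap to slide the rest of the movie over
```

The same two numbers also cover the "voice was longer than mine" case: the factor comes out above 1.0 and the slide comes out negative.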

Home built; ASRock X570 Pro 4, AMD Ryzen 9-390X CPU, ASUS GTX-1080 TI, 11GB OC video card, 131GB of RAM.

animagic
Posted Last Year
Distinguished Member (21.9K reputation)

Group: Forum Members
Last Active: 4 hours ago
Posts: 12.3K, Visits: 21.5K
So if I understand you correctly, you do the initial lip-syncing animation with your own voice and then match up the voices of the actors by editing?




justaviking
Posted Last Year
Distinguished Member (16.8K reputation)

Group: Forum Members
Last Active: 1 hour ago
Posts: 7.9K, Visits: 25.2K
I often start animating using my own voice.  Even if I'm going to do one of the voices, I record the dialogue directly in iClone as a TEMPORARY vocal track, since I will do a better recording later in a quieter location.

Because the final vocal recording may be faster or slower, I try to leave some room between lines of dialogue.  Or if I need to, I can always insert frames into the project, but I need to be sure my lines don't bump into each other.  This process allows me to start animating while still waiting for the final voice recordings.
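The insert-frames bookkeeping is simple arithmetic. A hedged sketch, assuming a 60 fps project rate and made-up line durations (neither is stated above):

```python
import math

def frames_to_insert(temp_line_sec: float, final_line_sec: float, fps: int = 60) -> int:
    """Extra frames needed when the final recording runs longer than
    the temporary scratch read; 0 if it already fits in the gap."""
    extra_sec = final_line_sec - temp_line_sec
    return max(0, math.ceil(extra_sec * fps))

# Scratch line was 3.0 s, the actor's final read is 3.4 s:
frames_to_insert(3.0, 3.4)  # 24 frames at 60 fps
# Final read is shorter -> nothing to insert, just trim later:
frames_to_insert(3.0, 2.5)  # 0
```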

When I get my final voice recordings, I REPLACE the temporary vocal track with them.

With the final vocals in place, I adjust any animation to go with the timing of the new recordings, if needed.  And then I trim out the excess frames between lines of dialogue.  That is where I have to be more aggressive and tighten up my "editing" more.  I often leave too much dead space in the scene, but I don't really notice it until I watch the video again a month after I finished the project.

I use the final vocal track to create new visemes, and edit them as needed.  It always needs some viseme cleanup.

Lastly, I ignore all the iClone audio (even if I export to an MP4 format) and replace it all in my NLE.  That way I can adjust the volume for each character, and can also vary a character's volume over time, rather than having only one volume control for the entire scene.
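As a toy illustration of that per-character volume idea (this is plain Python on lists of samples, not an NLE or iClone API; the gain values are invented):

```python
def apply_envelope(samples, start_gain, end_gain):
    """Linearly ramp gain across a mono track (a list of floats),
    so one character's volume can change over time independently."""
    n = len(samples)
    if n == 1:
        return [samples[0] * start_gain]
    return [s * (start_gain + (end_gain - start_gain) * i / (n - 1))
            for i, s in enumerate(samples)]

def mix(*tracks):
    """Sum equal-length mono tracks into the final mix."""
    return [sum(vals) for vals in zip(*tracks)]

# Character A fades from full volume to half; character B sits at a fixed 0.5.
a = apply_envelope([1.0, 1.0, 1.0], 1.0, 0.5)   # [1.0, 0.75, 0.5]
b = apply_envelope([1.0, 1.0, 1.0], 0.5, 0.5)   # [0.5, 0.5, 0.5]
out = mix(a, b)                                  # [1.5, 1.25, 1.0]
```

A single master volume on the whole scene could not produce that result, which is the point of doing the mix per character in the NLE.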




iClone 7... Character Creator... Substance Designer/Painter... Blender... Audacity...
Desktop (homebuilt) - Windows 7, i7-3770k CPU, GTX 1080 GPU (8GB), 16GB RAM, Asus P8Z77-V Pro motherboard, 500 GB SSD, terabytes of disk space, dual  monitors.
Laptop - Windows 10, MSI GS63VR STEALTH-252, 16GB RAM, GTX 1060 (6GB), 256GB SSD and 1TB HDD

Edited Last Year by justaviking
Kelleytoons
Posted Last Year
Distinguished Member (23.2K reputation)

Group: Forum Members
Last Active: 2 hours ago
Posts: 7.2K, Visits: 15.1K
Those of us using facial mocap will most likely use the vocal track we record at the same time, but not always.

I actually lip sync to a lot of my pre-recorded tracks, but I do it at half speed (much easier).  I also have the dialog onscreen at the same time so it's easy to do.  Then I just speed it up before render by moving the track backwards.  This seems to work very, very well (no viseme adjustments needed, nor any editing as the OP is doing).  But, again, this is using facial mocap which is a whole different animal.



Alienware Aurora R7, Win 10, i7-8700k, 4.7GHz CPU, 32GB RAM, GTX Titan XP (12GB), Samsung 960 Pro 2TB M-2 SSD, TB+ Disk space
Mike "ex-genius" Kelley
justaviking
Posted Last Year
Distinguished Member (16.8K reputation)

Group: Forum Members
Last Active: 1 hour ago
Posts: 7.9K, Visits: 25.2K
@Mike (or anyone else),

Do you have any experience with Automated Dialog Replacement (ADR)?  If so, any good tips for us?







thedirector1974
Posted Last Year
Distinguished Member (5.0K reputation)

Group: Forum Members
Last Active: 9 hours ago
Posts: 741, Visits: 4.5K
Is there any reason for this ... sorry ... stupid workflow???
Just record your voice actors, sort the audio and animate with the actual voices! Why in the hell would you do this otherwise? There is no reason.
justaviking
Posted Last Year
Distinguished Member (16.8K reputation)

Group: Forum Members
Last Active: 1 hour ago
Posts: 7.9K, Visits: 25.2K
@Director - In my case, I don't always have access to my voice actor(s) when starting a project.  By using temp dialogue, I can get 90% of the animation done ahead of time.  Then when our schedules align and I record the actual dialogue, it's easy to replace my voice with theirs.  That's the value of my approach, which I may have made sound more complex than it is.  (It was really weird the first time I had my voice coming from the female characters, but I got used to it.)  Regarding all the "stretching" and other stuff rgreenidge was talking about in the original post, I'm not sure I quite followed all of that.


P.S.
About the ADR, I've been tempted several times to "purposely" record some live acting with noisy audio, such as on a windy day outside, and try some at-home ADR just to see what it's like.  Stuff like that is part of the fun I have with this hobby.  Experimenting and learning.





Kelleytoons
Posted Last Year
Distinguished Member (23.2K reputation)

Group: Forum Members
Last Active: 2 hours ago
Posts: 7.2K, Visits: 15.1K
Ani,

ADR is used all the time with live action, but it's almost always (unless you're in Italy) just a way of overcoming poor audio capture in the field.  There's no real reason to do it this way for animation (as The Director suggests, somewhat less charitably).  Assuming you capture good audio in the first place (something we animators have the luxury of doing almost every single time), I can't really see any reason to do it.

(And, yes, we did it all the time with live action -- SO glad those days are over, although with things like the ability to auto-match audio, present in most video editors, it would be a piece of cake to just replace the poor audio from the on-body mics.)



justaviking
Posted Last Year
Distinguished Member (16.8K reputation)

Group: Forum Members
Last Active: 1 hour ago
Posts: 7.9K, Visits: 25.2K
ADR... I should have made it more clear I was talking about doing a "video recording" to play with ADR.  Unrelated to iClone work.  Just "movie making."

Back to iClone and animation: if I do my voice recordings where my computer is, I might pick up my computer fans, and I will very likely pick up my air-conditioning vents (or heater vents, depending on the season).  For a while I would turn off the furnace or A/C while recording, but the family didn't always appreciate it.  Plus I tended to get more echo than I'd like when recording there.  It's not horrible, but with headphones on, I've gotten sensitive to that background noise.

So I now do my voice-over recordings in a "recording booth" (a.k.a. my walk-in closet) where there are no air vents and the cloth reduces any echo.  I also feel more free to record a lot of takes, pick the best one (or splice them together into a best one), and do some light noise reduction in Audacity (even though the audio is pretty good already).  That gives me very clean audio to work with.
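Audacity's Noise Reduction effect works spectrally from a captured noise profile. As a much cruder stand-in for the same idea, here is a simple amplitude gate in plain Python (the 0.02 threshold and sample values are made-up examples, and this is not Audacity's algorithm):

```python
def noise_gate(samples, threshold=0.02):
    """Zero out samples whose absolute level sits below the threshold.

    A crude stand-in for real noise reduction: quiet hiss between
    words is silenced, while the spoken parts pass through untouched.
    """
    return [s if abs(s) >= threshold else 0.0 for s in samples]

# Loud speech survives, low-level hiss between words is muted:
noise_gate([0.5, 0.01, -0.3, -0.005, 0.0])  # [0.5, 0.0, -0.3, 0.0, 0.0]
```

Real noise reduction is gentler because it subtracts the noise's frequency fingerprint instead of hard-muting quiet moments, which is why a clean recording booth plus a light Audacity pass beats any gate.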

If I always did my audio first, that would be cool.  But usually I want to get started on the animation before I get together with my voice actor(s), only because I'm impatient, which is what has led to my use of temporary dialogue in iClone.




mark
Posted Last Year
Distinguished Member (9.0K reputation)

Group: Forum Members
Last Active: 10 hours ago
Posts: 4.2K, Visits: 12.3K
I would "loop" my kid actors' audio in the little films I did at church.
We would usually get lousy audio on location: planes, trains, and automobiles! Back in the classroom/studio, I would play the location video on a monitor while they listened to their audio on headphones and read their lines in sync, as closely as possible, as I recorded the new audio.
In post, of course, I would mix in some "room tone" and try to match the location ambience as best I could... which was never really very good, BUT you could hear the lines, and that's what was important! In some situations it actually worked pretty well!


Click here to go to my YouTube Channel filled with iClone Tutes and Silly Stuff

Visit ANIMATED PROJECTIONS Powered by iCLONE

Intel Core i7 3960X @ 3300MHz Overclocked to 4999.7 MHz Sandy Bridge 24.0GB DDR3 @ 833MHz Graphic Display HP ZR30w 
GeForce GTX 980Ti 6GB  Microsoft Windows 7 Professional 64-bit SP1 ASUSTeK COMPUTER INC. P9X79 WS (LGA2011)





