
Creating AccuLips with Python

Posted By josh_177552 4 Years Ago
animagic
Posted 4 Years Ago
Distinguished Member (33.4K reputation)
Group: Forum Members
Last Active: Last Month
Posts: 15.8K, Visits: 31.4K
What is disappointing to me is that RL just announced that the CC 3.43 update features compatibility between CC and Omniverse Audio2Face, which supports multiple languages!

So why can't that be implemented for AccuLips?

I think we should keep hammering on this until we get more than some non-committal answer. It is as if RL wants to chase the core filmmakers away from their products...:unsure:



mtakerkart
Posted 4 Years Ago
Distinguished Member (16.8K reputation)
Group: Forum Members
Last Active: 3 Months Ago
Posts: 3.2K, Visits: 29.2K
I'm a Director artist, really not a Python coder... I want to make movies with fewer mouse clicks.
Reallusion is one of the few companies that makes this possible. What I deplore is that, for the moment, they don't want to put a little energy into an AccuLips for French, while josh_177552 shows us that it is not complicated for a coder and that all the development is already done, free and open access...
The future of digital is in the creation of very user-friendly applications. This example, and there will be more, shows it clearly:
http://www.cgchannel.com/2021/07/the-freemocap-project-markerless-mocap-for-under-100/
The announcement of Character Creator 4, which will let you rig any mesh with facial blendshapes in a few clicks, goes in this direction.

Did you see that Mark Zuckerberg wants to create a metaverse?
https://www.bloomberg.com/news/articles/2021-07-29/mark-zuckerberg-explains-metaverse-vision-to-facebook-fb-investors-analysts
Imagine the content that will have to be created. You will want to be the first to offer this service.
animagic
Posted 4 Years Ago
Distinguished Member (33.4K reputation)
Group: Forum Members
Last Active: Last Month
Posts: 15.8K, Visits: 31.4K
There is a Python script from Mike Kelley to take the viseme output from Papagayo and then use that in iClone. That could perhaps be the basis for what you want to do.

See: https://forum.reallusion.com/427265/Improved-Papagayo-Script.



animagic
Posted 4 Years Ago
Distinguished Member (33.4K reputation)
Group: Forum Members
Last Active: Last Month
Posts: 15.8K, Visits: 31.4K
Mysterious duplicate post...:unsure:



josh_177552
Posted 4 Years Ago
Senior Member (253 reputation)
Group: Forum Members
Last Active: 4 Years Ago
Posts: 6, Visits: 56
I guess it will be a bit more work if your goal is to get the results back into iClone, but it should be possible with the commands available in iClone's Python API.
I'm currently just working with English dialogue, but I will probably have to deal with translations later.

To generate phonemes I have just followed the instructions on their website: 
Examples — Montreal Forced Aligner 2.0.0a22 documentation (montreal-forced-aligner.readthedocs.io)
- I have my dialogue organized in folders for each character.
- For each line I have, for example, LINE1.wav and a corresponding LINE1.lab (this is just a text file with the dialogue written out; it's the same as the .txt file you can make for iClone).
- I run the alignment command as shown in the example, but pointed at the directory with my files instead of the example dataset.
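The corpus preparation described above can be scripted. Here is a minimal sketch under the assumptions in the post (one folder per character, each LINE&lt;N&gt;.wav paired with a LINE&lt;N&gt;.lab transcript); `write_lab_files` and the `transcripts` dict are hypothetical names, and the exact `mfa align` arguments should be checked against the MFA documentation linked above:

```python
from pathlib import Path

def write_lab_files(corpus_dir, transcripts):
    """Write a LINE<N>.lab transcript next to each LINE<N>.wav,
    which is the wav/transcript pairing the Montreal Forced Aligner expects."""
    written = []
    for wav in sorted(Path(corpus_dir).glob("*.wav")):
        text = transcripts.get(wav.stem)
        if text is None:
            continue  # no transcript for this clip
        lab = wav.with_suffix(".lab")
        lab.write_text(text, encoding="utf-8")
        written.append(lab.name)
    return written

# With the .lab files in place, alignment itself is a command-line step,
# roughly of this shape (dictionary and model names are assumptions; see
# the MFA docs for the exact invocation):
#   mfa align <corpus_dir> <dictionary> <acoustic_model> <output_dir>
```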

The output directory then contains a .TextGrid file for each line, listing the phonemes and the times at which they occur.
You would have to parse this file to extract the phonemes, and use them to place the visemes on the timeline.
In the iClone install folder there is a file, "iClone 7\Resource\ICTextToSpeech\Dictionary\en.PhonemeVisemeMapping", where you can see which phonemes get turned into which visemes; for example, both the "F" and "V" phonemes get turned into the "F_V" viseme.
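Putting those last two steps together, here is a minimal sketch of pulling timed phonemes out of the "phones" tier of a long-form TextGrid and mapping them to visemes. The regex-based parser and the tiny `PHONEME_TO_VISEME` table are illustrative assumptions (the real table lives in en.PhonemeVisemeMapping; only the F/V to F_V pair is confirmed by the post):

```python
import re

# Illustrative mapping; only F/V -> F_V is confirmed by the post,
# the other entries are guesses standing in for the real table.
PHONEME_TO_VISEME = {
    "F": "F_V", "V": "F_V",
    "AA": "Open", "IY": "EE",
}

def parse_phone_intervals(textgrid_text):
    """Return (start, end, phoneme) triples from the 'phones' tier
    of a long-form Praat TextGrid."""
    # Keep only the part of the file after the phones tier header.
    _, _, phones_part = textgrid_text.partition('name = "phones"')
    pattern = re.compile(
        r'xmin = ([\d.]+)\s*xmax = ([\d.]+)\s*text = "([^"]*)"')
    return [(float(a), float(b), t.strip())
            for a, b, t in pattern.findall(phones_part)
            if t.strip()]  # skip silence/empty intervals

def to_visemes(triples):
    """Map timed phonemes to timed visemes, dropping unknown phonemes."""
    return [(s, e, PHONEME_TO_VISEME[p])
            for s, e, p in triples if p in PHONEME_TO_VISEME]

sample = '''
    item [2]:
        class = "IntervalTier"
        name = "phones"
        intervals: size = 2
        intervals [1]:
            xmin = 0.0
            xmax = 0.2
            text = "F"
        intervals [2]:
            xmin = 0.2
            xmax = 0.5
            text = "AA"
'''
print(to_visemes(parse_phone_intervals(sample)))
# -> [(0.0, 0.2, 'F_V'), (0.2, 0.5, 'Open')]
```

From there, placing each viseme on the timeline would go through whatever viseme-keying commands iClone's Python API exposes, which is the part josh says still needs AccuLips support.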
mtakerkart
Posted 4 Years Ago
Distinguished Member (16.8K reputation)
Group: Forum Members
Last Active: 3 Months Ago
Posts: 3.2K, Visits: 29.2K
josh_177552 wrote:
I found that iClone is using a free, open source software called Montreal Forced Aligner to generate the phonemes, which is the main data I wanted to generate.
So, I have instead built my pipeline around this same software and skipped iClone entirely. Maybe I will find a use for iClone in the future.


I started this thread last month about the Montreal Forced Aligner:
https://forum.reallusion.com/Topic486876.aspx

Could you share what pipeline you use? I would like to use AccuLips in French.

Thank you
josh_177552
This post has been flagged as an answer
Posted 4 Years Ago
Senior Member (253 reputation)
Group: Forum Members
Last Active: 4 Years Ago
Posts: 6, Visits: 56
I found that iClone is using free, open-source software called the Montreal Forced Aligner to generate the phonemes, which is the main data I wanted to generate.
So I have instead built my pipeline around this same software and skipped iClone entirely. Maybe I will find a use for iClone in the future.
josh_177552
Posted 4 Years Ago
Senior Member (253 reputation)
Group: Forum Members
Last Active: 4 Years Ago
Posts: 6, Visits: 56
animagic (7/21/2021)
@josh: So I assume your desired output is a MotionPlus file for each line?

Currently, that would be the only way to record the facial animation, AFAIK. We used to have .iTalk files, but I think that's old hat.

That said, there may be other options that I'm not aware of...:unsure:


I am exporting a custom file format built from the blend weight, bone transform, and phoneme values I am able to access on the viseme component.
I have the whole automation process fully working with the older lip-sync function, which only takes the audio file as input.
So the problem is not the export; it's only getting the input used the way I want (AccuLips rather than the older automatic lip-sync function).
The only thing I am missing is the AccuLips feature, which produces noticeably better lip-sync.
animagic
Posted 4 Years Ago
Distinguished Member (33.4K reputation)
Group: Forum Members
Last Active: Last Month
Posts: 15.8K, Visits: 31.4K
@josh: So I assume your desired output is a MotionPlus file for each line?

Currently, that would be the only way to record the facial animation, AFAIK. We used to have .iTalk files, but I think that's old hat.

That said, there may be other options that I'm not aware of...:unsure:



josh_177552
Posted 4 Years Ago
Senior Member (253 reputation)
Group: Forum Members
Last Active: 4 Years Ago
Posts: 6, Visits: 56
animagic (7/20/2021)
I don't think that the Python API has been updated to include AccuLips.

4000 lines of dialog is an awful lot, so you would need to split that up anyway. That in itself would be more work than running AccuLips after that.

The lines are already split up, and I have a .wav file and a .txt file for each of them.
Using the user interface, even if I spent only one minute on each line, it would take two weeks to get through them all.
