
The cheapest way to speed up iRay renders.

Posted By jarretttowe 6 Years Ago
As new people come to the forum, I think this bears repeating: there is a way to speed up your renders considerably, and to do it cheaply.
1. Render at half size as lossless PNG files.
2. Buy Gigapixel AI for $99.
3. Double the size of the renders with Gigapixel's batch function.
I have used this program extensively, and for animations it is great. The resolution is of course not identical to a native full-size render, but it comes in at about a quarter of the render time (see the sketch below for a scripted version of the doubling step).
I have also found that simply doubling my model textures works extremely well.
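For anyone who wants to script step 3, here is a minimal Python sketch of the batch-doubling idea. It is not the Gigapixel AI tool itself, just a plain bicubic resize with Pillow as a baseline to compare against; the folder names are placeholders.

```python
from pathlib import Path
from PIL import Image

SRC = Path("renders_half")   # half-size PNGs exported from iClone (placeholder name)
DST = Path("renders_full")   # doubled output goes here (placeholder name)
DST.mkdir(exist_ok=True)

for png in sorted(SRC.glob("*.png")):
    with Image.open(png) as im:
        # Double both dimensions; plain bicubic is only a baseline for
        # comparison against Gigapixel AI's learned upscale.
        doubled = im.resize((im.width * 2, im.height * 2), Image.BICUBIC)
        doubled.save(DST / png.name)
```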
3d guy1
Posted 6 Years Ago
Hi,

I'm confused. How does this work for video?
justaviking
Posted 6 Years Ago
A lot of people render to a sequence of PNG files and then use a video editing application (NLE = non-linear editor) to make a video from the PNG files. That gives you a lot of control over many aspects, including more video conversion/compression options.

So you'd render a "half-size" set of PNG files, use that magical tool to up-size them to full resolution, and then put the full-sized images into your NLE.
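For a quick preview without opening an NLE, a hedged sketch: assemble the up-sized PNG sequence into an MP4 by calling ffmpeg from Python. This assumes ffmpeg is installed and on the PATH; the frame-name pattern, frame rate, and folder name are placeholders.

```python
import subprocess

# Assumes the up-sized frames are named frame_0001.png, frame_0002.png, ...
# in a "renders_full" folder (placeholder names).
subprocess.run([
    "ffmpeg",
    "-framerate", "30",                     # project frame rate
    "-i", "renders_full/frame_%04d.png",    # numbered PNG sequence
    "-c:v", "libx264",                      # H.264 encode
    "-pix_fmt", "yuv420p",                  # broad player compatibility
    "preview.mp4",
], check=True)
```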




iClone 7... Character Creator... Substance Designer/Painter... Blender... Audacity...
Desktop (homebuilt) - Windows 10, Ryzen 9 3900x CPU, GTX 1080 GPU (8GB), 32GB RAM, Asus X570 Pro motherboard, 2TB SSD, terabytes of disk space, dual  monitors.
Laptop - Windows 10, MSI GS63VR STEALTH-252, 16GB RAM, GTX 1060 (6GB), 256GB SSD and 1TB HDD

animagic
Posted 6 Years Ago
I'm reading that processing an image may take quite some time, so it will be interesting to compare rendering at a higher resolution against rendering at a lower resolution and then using Gigapixel AI.

The approach is interesting, though. I have tried other applications that claim the same thing, and I have never been impressed.



jarretttowe
Posted 6 Years Ago
Free demo. Easy to test!
animagic
Posted 6 Years Ago
jarretttowe (12/7/2018)
Free demo. Easy to test!

I did download the demo, and I am impressed by the quality. It is also quite fast, so it would beat rendering at a large size. Thanks for the tip!



illusionLAB
Posted 6 Years Ago
If the ultimate goal is to do 'low sample' Iray renders and then upscale with this tool to get HD video, it's worth pointing out that any sort of "smart" process, like AI denoising, approaches every frame as a new frame - that is, it's unaware of what it did to achieve the 'denoise' or 'upscale' for the previous frame. When you process a frame sequence with a series of smart filters, you usually create an entirely new fault that only becomes apparent when the sequence is played back as video. Smart filters like NeatVideo (the best denoiser!) have a comprehensive "temporal" section that makes the plugin aware of the previous and next frames, to minimize the frame-to-frame variance you get from processing single frames.

The first 'video' render I did with Blender's Cycles and its denoiser looked great, but it had a 'fixed pattern' noise that made the render look like you were watching it through a screen door. Fortunately, there is a setting to 'randomize' the noise pattern (a side effect of the denoiser... go figure!), making it look more like film grain or video noise, which is less annoying than a fixed pattern and is also something NeatVideo can deal with quite well.
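To make the frame-to-frame point concrete, here is a minimal Python sketch of the temporal idea, not NeatVideo's actual algorithm: blend each processed frame with its neighbours so single-frame differences are damped. The folder names and blend weights are assumptions, and a naive blend like this will ghost fast motion, which is why real temporal filters are motion-compensated.

```python
from pathlib import Path
import numpy as np
from PIL import Image

SRC = Path("renders_full")    # upscaled/denoised frames (placeholder name)
DST = Path("temporal")        # temporally smoothed output (placeholder name)
DST.mkdir(exist_ok=True)

frames = sorted(SRC.glob("*.png"))
for i in range(1, len(frames) - 1):
    prev_f, cur_f, next_f = (
        np.asarray(Image.open(frames[j]), dtype=np.float32) for j in (i - 1, i, i + 1)
    )
    # Weighted average with the neighbouring frames: a crude stand-in for the
    # previous/next-frame awareness that dedicated temporal denoisers implement.
    blended = 0.25 * prev_f + 0.5 * cur_f + 0.25 * next_f
    Image.fromarray(blended.astype(np.uint8)).save(DST / frames[i].name)
```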
animagic
animagic
Posted 6 Years Ago
View Quick Profile
Distinguished Member

Distinguished Member (32.5K reputation)Distinguished Member (32.5K reputation)Distinguished Member (32.5K reputation)Distinguished Member (32.5K reputation)Distinguished Member (32.5K reputation)Distinguished Member (32.5K reputation)Distinguished Member (32.5K reputation)Distinguished Member (32.5K reputation)Distinguished Member (32.5K reputation)

Group: Forum Members
Last Active: 2 hours ago
Posts: 15.7K, Visits: 30.5K
I have NeatVideo also, so I will see how that works in comparison. Thanks for pointing that out.



animagic
Posted 6 Years Ago
I found that PNG images in Gigapixel are saved as 48-bit PNGs, which makes them huge. I did not find a way to save as 24-bit (8 bits per channel). I can convert afterwards, but it is an extra step.
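If the conversion has to happen anyway, it can at least be batched. A minimal sketch using OpenCV, which reads 16-bit-per-channel PNGs as unsigned 16-bit arrays; the folder names are placeholders.

```python
from pathlib import Path
import cv2
import numpy as np

SRC = Path("gigapixel_out")   # 48-bit PNGs written by Gigapixel (placeholder name)
DST = Path("png_24bit")       # 24-bit copies go here (placeholder name)
DST.mkdir(exist_ok=True)

for png in SRC.glob("*.png"):
    img = cv2.imread(str(png), cv2.IMREAD_UNCHANGED)  # uint16 channels for 48-bit PNGs
    if img.dtype == np.uint16:
        img = (img / 257).astype(np.uint8)            # map 0..65535 down to 0..255
    cv2.imwrite(str(DST / png.name), img)             # saved as 8 bits per channel
```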



justaviking
Posted 6 Years Ago
illusionLAB (12/8/2018)
...approaches every frame as a new frame - that is, it's unaware of what it did to achieve the 'denoise' or 'upscale' for the previous frame. When you process a frame sequence with a series of smart filters, you usually create an entirely new fault that only becomes apparent when the sequence is played back as video.


I have shared the same observation here a few times over the past few months.

Several people have assumed they'd be able to render with fewer Iray "iterations" when they make a video. I strongly disagree, for the reason described in the quote above. Noise and grain that might not be distracting in a still image will catch your eye when it changes (flickers) from frame to frame. That became clear to me way back when the Indigo plug-in became available.

So if you are doing 60-second renders in CC3, you might want to prepare for 90-second renders (per frame) in iClone.





