Group: Forum Members | Last Active: Last Year | Posts: 3.4K | Visits: 12.4K
Please pass this on to the parties involved in the development of iClone Iray - is THIS possible:
BACKGROUND Having used Daz Iray and 3ds Max Iray (with denoiser), iClone users are concerned about the viability of rendering animations (multiple frames) with Iray, because Iray can't render frames as fast as iClone's own native real-time PBR renderer. When we render a frame in Iray, even optimally, the denoiser starts from scratch on every frame, collecting the information it needs for the denoise all over again. As background, imagine I can render a batch of denoised images at 1080p at, say, 6 seconds per frame. In that case a 900-frame render winds up taking 5400 seconds, or 90 minutes - an hour and a half (1.5 hrs).
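To keep my own numbers honest, here's that arithmetic as a tiny Python sketch (every value in it is an assumption from my example, not a measurement):

```python
# Back-of-the-envelope check of the render budget above (all assumed numbers).
frames = 900               # e.g. roughly 30 seconds of animation at 30 fps
seconds_per_frame = 6      # assumed denoised Iray frame time at 1080p
total_seconds = frames * seconds_per_frame
print(total_seconds, "s =", total_seconds / 60, "min")   # 5400 s = 90.0 min
```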
IDEA What if iClone could run an initial 'Scan Pass' of some sort, scanning all frames in a project's timeline to gauge, AI-wise, where the renders wind up being less dynamic and more static (i.e. where frames are in fact duplicates, or near-duplicates, and 'stay the same')? It could then still render frame by frame, but it wouldn't be forced to start from scratch for every frame, because it would have a queue of info to cross-check against (either in a buffer of active info or in a text file, say). It could then spit out the almost identical renders much quicker, much as if it were a single matte HDRI or a single JPG background being rendered frame after frame.
Now, it would need to collect the info first, and the initial Scan Pass would take time to collect said info, and this would happen before the render (somewhat akin to what iClone's Indigo plugin did, but in this case only as a scan to collect info). So if iClone could run such a Scan Pass to collect the info first, it would then have an optimized 'render plan' for the whole project.
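To make the idea a bit more concrete, here is a minimal sketch of what such a Scan Pass could look like - the function name, the preview frames and the 0.002 threshold are all hypothetical placeholders, and a real version would obviously live inside iClone/Iray rather than in a script:

```python
import numpy as np

# Minimal sketch of the proposed "Scan Pass" (all names here are hypothetical).
# It assumes a cheap low-res preview of each frame is available, e.g. grabbed
# from the real-time PBR viewport; the threshold is an arbitrary placeholder.

def scan_pass(preview_frames, threshold=0.002):
    """Classify each frame as needing a 'full' render or a 'reuse' of the previous one."""
    plan = ["full"]                                   # frame 0 always renders in full
    prev = np.asarray(preview_frames[0], dtype=np.float32)
    for frame in preview_frames[1:]:
        cur = np.asarray(frame, dtype=np.float32)
        change = np.mean(np.abs(cur - prev)) / 255.0  # mean per-pixel change, 0..1
        plan.append("reuse" if change < threshold else "full")
        prev = cur
    return plan                                       # the 'render plan' / queue of info
```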
HOPE Instead of having to render and denoise on an image-by-image basis, iClone could do a scan at the beginning (say, a 15-minute scan of all frames), queue the info, and then pass what it collected to Iray, so Iray could run the renders with that knowledge already captured. Imagine, for example, that with this faster rendering the job only takes 45 minutes, because several frames are already known to Iray as duplicates or near-duplicates, and those frames render in 1-2 seconds instead of 6. I am imagining such a Scan Pass could cut the cumulative render time for a sequence of frames, even counting the time it takes to run the scan at the beginning. Thus, iClone users could render animations more optimally with the Iray renderer and denoiser. It wouldn't make the denoiser or the renderer itself faster; it would just give 'the iClone project' a curriculum to work off of, AI-wise. And then, as my example imagines, the project could take an hour (a 15-minute pre-pass of collected info + 45 minutes of optimized AI render time) versus an hour and a half (no info collected beforehand, just independent frame-by-frame rendering).
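And here is that example as a rough estimate in code - the 6-second full frames, the reused frames at 1.5 s, the 15-minute scan and the 600-of-900 near-duplicate split are all assumptions pulled from my example above, not benchmarks:

```python
def estimate_total_time(plan, full_s=6.0, reuse_s=1.5, scan_s=15 * 60):
    """Scan-Pass time plus per-frame cost from the render plan (every number assumed)."""
    render_s = sum(full_s if step == "full" else reuse_s for step in plan)
    return scan_s + render_s

# If the Scan Pass marked 600 of the 900 frames as near-duplicates:
plan = ["full"] * 300 + ["reuse"] * 600
print(estimate_total_time(plan) / 60, "min")   # 15 (scan) + 30 + 15 = 60 min vs 90 min
```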
Any thoughts?
Group: Forum Members | Last Active: 3 Years Ago | Posts: 393 | Visits: 4.8K
Seems logical... but RL's contribution to the rendering will most likely be strictly an 'export & convert' function - all cleverness, including Ai denoising, will be an iRay task. In fact, temporal noise reduction is 100% necessary for animated sequences anyway but usually maxes out at 1 or 2 frames ahead - it noticeably slows down the render process as it needs to 'pre-calculate' those frames - so imagine trying to pre-calculate 900 frames.
Group: Forum Members | Last Active: Last Year | Posts: 3.4K | Visits: 12.4K
illusionLAB (6/19/2018): "Seems logical... but RL's contribution to the rendering will most likely be strictly an 'export & convert' function - all cleverness, including Ai denoising, will be an iRay task. In fact, temporal noise reduction is 100% necessary for animated sequences anyway but usually maxes out at 1 or 2 frames ahead - it noticeably slows down the render process as it needs to 'pre-calculate' those frames - so imagine trying to pre-calculate 900 frames."

IL, where are you getting that any renderer currently looks ahead 1-2 frames in advance? If it did, it would have to prove its worth by the forecast scenes actually rendering much faster than the previous ones. Have you ever seen this, where did you read it, or which renderer actually uses this?
Group: Forum Members | Last Active: 3 Years Ago | Posts: 393 | Visits: 4.8K
Not "look ahead" rendering... but noise reduction. Don't fall for the "Ai" badge... the noise reduction is a 'post process' - I'm sure you've seen how Octane 4's noise reduction happens once the frame is complete. In order for the noise reduction to be "intelligent" in a rendering scenario it will use the previously rendered frames for the 'temporal' calculation - which helps avoid the inevitable flickering when the algorithm is unaware of what it did in the previous frame. In compositing software, the temporal calculations can use either previous or next frame (as they already exist) - clever algorithms use both the frame before and after for highest quality (but at higher stress on your system). Grey Scale Gorilla did an informative "shootout" using both 3D and 2D denoisers (well, they're all 2D... it's just because they are part of the 3D render process that they're referred to as 3D denoisers). Worth the watch, especially as the iRay denoiser is in the competition. https://www.youtube.com/watch?v=LRsdYIeXhlQ
Group: Forum Members | Last Active: Last Year | Posts: 3.4K | Visits: 12.4K
Thanks for this IL, I see what you are saying. The video is great too! Poor Optix...!
I think what you say about falling for the Ai badge is exactly what I'm getting at with this idea. Since the renderer has no way of seeing into the future, it can't truly 'AI' the next render in full; it only learns from what it did in the last frame, as you explain. But what if it could do what a human would do: if you looked at the whole layout before working through an assembly-line project, you would see where the opportunities for optimization were before you even started. So I guess what I am proposing is a process where the Ai works in combination with a render plan - not only a denoising enhancer, but a render-speed enhancer. But you think the 'scan' would be too intensive in processing time, to the point where it doesn't pay off time-wise?
Group: Forum Members | Last Active: 3 Years Ago | Posts: 393 | Visits: 4.8K
It's totally possible that once the geometry is loaded the renderer could "look ahead", but it wouldn't necessarily speed up the renders. If your scene is a white box in a black void at the beginning of the sequence and the camera, over time, pans to a complex spaceship engine room with reflections, radiosity, DOF etc., the look-ahead wouldn't give the renderer any advantage (it would probably just make it anxious to know how much work is ahead! ;-) The speed increases we are enjoying (honestly, Octane has changed my life!) come from using every trick in the book to accelerate rendering - let's face it, whoever creates the "fastest and most realistic" renderer will become the "industry standard". Since Octane 4 will be free for two GPUs, it's going to take a near miracle to surpass that. As Octane 3 owners, we're going to get ALL the plugins for Octane 4 for free (Blender, C4D etc.), so let's wish for RL to join the Octane revolution!