|
By benhairston - Last Year
|
When rendering with EXR in iClone, does this result in an 8-bit EXR file, or is the output buffer for iClone's native renderer higher in precision now?
I sincerely hope it’s the latter, but I’m happy with the direction this is going…
|
|
By Kelleytoons - Last Year
|
|
Don't know the answer but could you please explain to this old man what EXR output even is?
|
|
By Warped Reality VFX - Last Year
|
|
Originally developed by visual effects powerhouse ILM, the EXR format is designed for photorealistic rendering, compositing, and digital intermediate use cases. All the things an Academy Award-winning visual effects company would need from a file format. Some of the format's technical aspects make this heritage obvious:
- Up to 40 f-stops of high dynamic range
- 32-bit floating point depth
- Lossless compression
- Alpha channel
- Multi-pass and multi-channel images
- Extensive additional metadata support
EXR files are useful when you need to store and work with the highest-dynamic-range, uncompressed images you can, especially when you're going to manipulate them a lot, such as when color grading or compositing, without introducing compression artifacts or color banding. This makes them ideal for use in animation, 3D rendering, and even professional photographic finishing, where you might be layering multiple passes together to create the final image. Common 3D render passes include:
- Reflection pass
- Specular pass
- Shadows pass
- Diffuse color pass
- Ambient occlusion pass
- Z depth pass
- Beauty pass
- Mattes of specific elements
All of these individual passes, per frame, can be stored and accessed within each individual .EXR file. These are called multi-channel EXR files. Without this ability, you would have to export, manage, and re-combine all of those passes manually inside your creative host application. This makes them a preferred delivery format for colorists, compositors, and finishing artists.
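The banding point can be made concrete with a tiny sketch (plain Python, no EXR library involved; the numbers are purely illustrative). It quantizes a smooth dark gradient the way 8-bit storage does and counts how many distinct shades survive, versus float storage:

```python
# Illustrative sketch: why 8-bit storage bands where 32-bit float does not.
# We take 1000 evenly spaced values in a dark 2% sliver of the 0..1 range,
# round-trip them through 8-bit integer storage, and count how many
# distinct levels survive.

def to_8bit(x):
    """Quantize a 0..1 value to 8 bits and back (what 8-bit storage does)."""
    return round(x * 255) / 255

values = [i / 999 * 0.02 for i in range(1000)]    # 1000 shades between 0.00 and 0.02

levels_8bit = len({to_8bit(v) for v in values})   # distinct levels after 8-bit round trip
levels_float = len(set(values))                   # float storage keeps them all

print(levels_8bit)   # only a handful of levels survive -> visible banding after grading
print(levels_float)  # 1000
```

Push those few surviving levels apart with a grade and the steps become visible bands; the float version grades cleanly.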
https://massive.io/file-transfer/what-is-an-exr-file/
Hope this helps. Best regards. Kevin L.
|
|
By Kelleytoons - Last Year
|
Okay, the EXR file (in Photoshop, at least) is 32 bits, which also includes the alpha channel. Does that answer your original question?
I put a PNG image on top of the EXR one (in PS) and there is definitely a "difference". Here is what the PNG image looks like:

And here is the "difference" (this layer set on top of the EXR image with "difference" selected):

Now - what the heck this means I have zero idea. Looking at the EXR image I can't visually see any difference, but clearly there IS some. So what I want to know is: how would this make things better in an animation? I'm guessing I can import the EXR images into, say, Premiere, and then render out that way, but what does that buy me? (Other than the EXR images being twice the size of the PNG ones.)
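One likely source of that "difference" layer can be sketched in a few lines. This toy model only covers quantization and highlight clipping (a real PNG export also applies gamma encoding, which this deliberately ignores): an EXR can hold scene-linear values above 1.0, while 8-bit storage clips them to the display range, so a per-pixel difference is non-zero even where the two look identical on screen.

```python
# Hypothetical illustration of the Photoshop "difference" experiment:
# EXR pixels are unbounded floats, PNG pixels are clipped 8-bit values,
# so subtracting one from the other is rarely exactly zero.

exr_pixels = [0.25, 0.5, 1.0, 2.0, 8.0]           # scene-linear, can exceed 1.0
png_pixels = [min(round(v * 255), 255) / 255      # 8-bit storage clips at 1.0
              for v in exr_pixels]

difference = [abs(e - p) for e, p in zip(exr_pixels, png_pixels)]
print(difference)  # tiny quantization error in midtones, large where highlights clipped
```

The midtone differences are below what the eye can pick out, which matches "I can't visually see any difference but clearly there IS some".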
|
|
By Kelleytoons - Last Year
|
Oh, I should add that switching layers back and forth in PS (to compare), there definitely IS a difference - the EXR image looks "brighter", but I'm guessing that's because it has more dynamic range, right?
Hmmm - I have to ponder this all out. Should we ALL be using EXR from now on?
|
|
By benhairston - Last Year
|
Hey Mike,
The TLDR version is that if you do any kind of post work, an EXR file will provide greater dynamic range, prevent color banding, and give you more flexibility when doing render passes, color grading, and the like…
|
|
By Kelleytoons - Last Year
|
Well, I'm really at a stage in life where I'm not going to do much post work anymore, even to color grade. I just use whatever comes out.
But I have to admit that the idea that there is greater dynamic range means (in my mind, at least) that it will look better on screen. Unlike a still image, though, I can't very well set up a situation to compare final renders when it comes to outputting an MP4, for example. I dunno - can I get something approaching HDR using that approach? Again, with my limited life (left) I want to keep it as simple as possible. If all I have to do is just render to EXR (versus PNG), then I'm on board.
I guess I'll have to make some tests to see if these (old) eyes can see a difference.
|
|
By benhairston - Last Year
|
|
Should all renders from everybody be done as EXR? In short, no. If you are happy with your renders (which, until now, couldn't be EXR from the native renderer anyway), then PNG sequence or AVI all day long. I only speak for myself, and view the renders I get from iClone as the start of the post process, so the flexibility of a 32-bit render is what I want. My original question was whether the Reallusion implementation of EXR uses an 8-bit frame buffer or a higher-precision, floating-point buffer. If it's the former, then we don't really have any more advantage in post than rendering PNG. If it's the latter, for me at least, that's a huge gain in post. Is it a deal breaker? Nope, I can still get the results I want in Blender. It would just save me a few steps and streamline the process.
|
|
By Kelleytoons - Last Year
|
Well, not to belabor this (because I already feel like I'm wasting too much time pondering :>) clearly I DO see a difference between PNG and EXR - there is a definite perceived increase in dynamic range. Whether this holds after I render out as an MP4 remains to be seen (I guess I can figure out some way to compare realistically - perhaps side by side comparisons set up in Premiere or some such). And that's all I'm after.
So I guess I'll just play around and see. But thank you for at least asking about this because it's something I never would have even thought of.
|
|
By Nirwana - Last Year
|
@Kelleytoons I've been using EXR (usually 32, sometimes 16 bit) for final renders for several years now, but I need source material that has more than 8 bits for HDR content, so I have some hands-on experience with this.
As you probably know, MP4 is just a container format; the important thing is the codec in that container. If you intend to produce only 8-bit SDR (standard dynamic range) content (for example for most YouTube uses), using EXR is probably not going to do much for you except provide a little more latitude for color grading. However, if you need material for HDR (which commonly has at least 10 bits), EXR may be a good choice. (I'm talking about "real" HDR as in the stuff on UHD Blu-ray discs, not what iClone calls "HDR".) But for HDR content, you also need to do post-production work as well as have video editing software and playback capabilities compatible with HDR (e.g. a video (not graphics) card for driving an HDR-capable monitor/TV). And that is a whole new can of worms. (I produce my HDR content for personal consumption and for YouTube, meaning I can upload HDR content to YT and viewers can watch it as HDR as long as certain conditions are met, i.e. the YT app on their device is HDR-capable and so is the viewing device itself; modern TVs and smartphones usually are. Everybody else will be watching an SDR version automatically created by YouTube, the same way YT automatically creates lower-resolution versions of any high-definition content you may have uploaded.)
So, unless you have a way to view and work with content at more than 8-bit color depth, EXR is probably not worth it for you. When editing your videos using EXR, you will also need a beefy machine. Your new one may (!) be capable of doing that in RT without pre-rendering or proxies, but you would have to test that with the NLE of your choice. (The problem is not so much the GPU but the fact that EXR files are large: at 3840x1622 pixels and 32 bit, mine are usually 65-70 MB per frame, which means that depending on your frame rate you may need between 2,000 and 4,000 MB/s data throughput from your storage media for playback, plus, of course, any processing by the computer itself. My video editing system's storage cannot do that in RT, but there are ways to work with that.)
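The throughput figures above check out with simple arithmetic: bytes per frame times frames per second gives the sustained read rate the storage has to deliver for real-time EXR playback.

```python
# Back-of-the-envelope check of the playback-throughput numbers:
# per-frame size (MB) x frame rate (fps) = required sustained read rate (MB/s).

frame_mb = 68  # ~65-70 MB per 3840x1622 32-bit frame, as stated in the post
for fps in (24, 30, 60):
    print(fps, "fps ->", frame_mb * fps, "MB/s")
# 24 fps -> 1632 MB/s, 30 fps -> 2040 MB/s, 60 fps -> 4080 MB/s
```

That is exactly the quoted 2,000-4,000 MB/s band for 30-60 fps, which is more than a single SATA SSD can sustain.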
"there is a definite perceived increase in dynamic range" - Well, if you are watching/comparing this on an 8-bit monitor/screen, I kind of doubt you are seeing more dynamic range (because 8 bit is all you can see); it may look that way to you because of the gamma used. If you are watching this on an HDR-capable monitor and have configured Windows and your machine properly to output HDR, then possibly.
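The gamma point can be made concrete. EXR pixels are normally stored scene-linear, so a viewer has to apply a display transfer function (sRGB here, as an assumption about the monitor) before the image matches a gamma-encoded PNG; skip or mismatch that step and the EXR simply looks "brighter" or "darker" without any extra dynamic range actually being shown.

```python
# Sketch: encoding a scene-linear EXR value with the standard piecewise
# sRGB transfer function before it is sent to an 8-bit display.

def linear_to_srgb(x):
    """sRGB opto-electronic transfer function for a 0..1 linear value."""
    if x <= 0.0031308:
        return 12.92 * x
    return 1.055 * x ** (1 / 2.4) - 0.055

linear = 0.18                            # classic 18% grey card, scene-linear
print(round(linear_to_srgb(linear), 3))  # ~0.461 -> displayed as mid-grey
```

A viewer that shows the raw 0.18 instead of 0.461 makes the same pixel look much darker, and vice versa, which is enough to explain a perceived brightness shift between formats.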
|
|
By AutoDidact - Last Year
|
"Well, if you are watching/comparing this on an 8-bit monitor/screen, I kind of doubt you are seeing more dynamic range (because 8 bit is all you can see); it may look that way to you because of the gamma used. If you are watching this on an HDR-capable monitor and have configured Windows and your machine properly to output HDR, then possibly." Thank you for explaining all of this! Particularly the importance of end-to-end consistency, such as the output/display capability of THE VIEWING DEVICE.
I have had so many frustrating debates with Daz Studio users over why uber-bloated 4-8K texture maps on Daz products are essentially wasted unless you are viewing them on a monitor with 8K resolution in a non-compressed format (i.e. NOT a web-optimized JPG uploaded to your Daz gallery).
And let's not even get into what "8K" details are lost during the render itself, such as denoising in Daz Studio Iray or motion blur during animation renders. Nor the fact that the human eye technically does not "see" in 4K or 8K anyway, which is another discussion entirely. :Wow:
|
|
By Nirwana - Last Year
|
Slightly off-topic, but I think I need to comment on this:
"I have had so many frustrating debates with Daz Studio users over why uber-bloated 4-8K texture maps on Daz products are essentially wasted unless you are viewing them on a monitor with 8K resolution in a non-compressed format." I'm not sure that high-resolution textures are wasted even on monitors with less than 8K resolution, because: a) If you have one texture covering an entire character and you do a close-up at a render resolution of 4K (my standard for non-Shorts content), you do see the shortcomings of textures that are only 4K (if you have separate 4K textures for, say, the head, body, arms, legs, etc., that is a different story, because then the entire character has more than 4K texture resolution; but even then 4K may be too little for a Leone-style close-up).
b) Very large props, such as landscapes, buildings, ships, etc. suffer from low-res textures because, often, only a part of the entire thing is visible at any time but shows the limitation of the texture resolution that is covering that part. That is why I prefer procedural materials/textures for landscapes, rocks, etc. because they have "infinite" resolution (AFAIK iClone does not support procedural textures/materials as of now).
c) it is usually easier to down-scale textures (if you don't need the resolution) than to up-scale them (although the latter has gotten better with AI scalers); so I'd always prefer higher resolution over lower.
d) If a creator uses textures for clothing/characters with less than 4K I will remark on that negatively in my Marketplace Reviews for the reasons mentioned above.
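Point c) above can be shown with a 1-D toy sketch (illustrative only; real texture scalers use better filters than this): downscaling by averaging gives a faithful low-res version, but upscaling the result back cannot recover the original detail.

```python
# Sketch of "easier to downscale than to upscale": box-filter a texture
# row down 2x, then upscale it back and compare with the original.

def downscale_2x(row):
    """Average neighbouring pairs of texels (box filter, 1-D for brevity)."""
    return [(row[i] + row[i + 1]) / 2 for i in range(0, len(row), 2)]

def upscale_2x(row):
    """Nearest-neighbour upscale: just repeat each texel."""
    return [v for v in row for _ in range(2)]

original = [0, 255, 0, 255, 0, 255, 0, 255]   # fine checker detail
small = downscale_2x(original)                # the pattern averages to flat grey
restored = upscale_2x(small)

print(small)                   # [127.5, 127.5, 127.5, 127.5]
print(restored != original)    # True: the detail is gone for good
```

AI upscalers can hallucinate plausible detail back in, but the original information is still lost, which is the argument for shipping the higher resolution.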
"Nor the fact that the human eye technically does not 'see' in 4K or 8K anyway, which is another discussion entirely." Yes and no. You are correct when you talk about taking in an entire screen at once. However, especially for large screens, we usually focus our attention (and thus the best resolution our eyes can muster) on only an area of the screen (the point of interest, guided by DOF or other means), and then you do see a difference between 2K and 4K (I have not personally seen 8K, so I can't comment on that). As I write this text, my eyes are focused only on these words on my 42" 4K monitor; I'm not even "seeing" the task bar at the bottom of the screen. Also, my preferred viewing distance is 1.0 to 1.2 times the screen diagonal for 4K content on a 4K screen; if your viewing distance is greater than that, your mileage may vary.
|
|
By AutoDidact - Last Year
|
"As I write this text, my eyes are focused only on these words on my 42" 4K monitor; I'm not even 'seeing' the task bar at the bottom of the screen."
You have a 4K monitor that can actually display 4K resolution.
"(I have not personally seen 8K, so I can't comment on that)"
You have likely already seen something online that was created/rendered at 8K, but its 8K resolution was moot, as your 4K monitor can only display up to 4K resolution; even less so for the majority of people who buy Daz content and do not even have a 4K monitor like yours.
"(if you have separate 4K textures for, say, the head, body, arms, legs, etc., that is a different story, because then the entire character has more than 4K texture resolution.)"
That is the problem I see with just blindly baking out every texture from Substance Painter at 4-8K for every surface on a Daz product.
One guy complained about an SUV model with dozens of parts (all with 4K textures) that would not fit into the VRAM of his GPU to render unless he bought another Daz product to scale down the maps in the scene, or did it manually himself in Photoshop, etc.
Yet Daz wonders why they have utterly failed to gain a foothold in the game content market.
But yes, back on topic: layered EXRs are a most vital output format for the film/VFX industry for layered compositing of render passes, and most of the major 3DCC render engines (except Blender) can output them.
|
|
By Kelleytoons - Last Year
|
Wow - kind of gotten far afield and I apologize if I'm the one who drifted this so OT.
In any case, yes, my 4K monitor is configured for HDR and these (old) eyes do see a difference. My machine is more than capable of handling what I am doing, however, because I'm not doing 4K work (I render in 1080p). While there is a slowdown in rendering (my gut tells me it takes about twice as long per frame in iClone) and a *somewhat* slower workflow in Premiere (but there it's not even noticeable - because the 4080 is so fast Premiere actually both works AND renders in near real time) I don't think I'd have an issue even if I worked in 4K.
In any case, at least we have another tool in our toolset (always helpful).
|
|
By Nirwana - Last Year
|
|
"You have likely already seen something online that was created/rendered at 8K, but its 8K resolution was moot, as your 4K monitor can only display up to 4K resolution." OK, to be more precise, what I meant was: I have not seen native 8K content on a native 8K display in real life. (I'm afraid if I did, I might feel the need to upgrade everything to 8K...)
Native 4K content on a native 4K display is just normal to me. All my TVs (some of which I use as computer monitor or as a preview monitor for video editing) are 4K (or UHD to be more precise), and almost all the computer monitors (with the exception of a few notebook displays) are also UHD (although, the very first UHD display that I got back in 2016 was the one on an HP Omen notebook, which I still have). Unless I render shorts (at 1080x1920p30), my own content is usually 4K (3840x1622 with 24 or 30 FPS), with the exception being "draft resolution" test renders.
"One guy complained about an SUV model with dozens of parts (all with 4K textures) that would not fit into the VRAM of his GPU to render unless he bought another Daz product to scale down the maps in the scene, or did it manually himself in Photoshop, etc." OK, in that case 4K textures on dozens of (small) parts seems a bit much. I'm not familiar with Daz content; I don't use Daz and I don't use their marketplace either. Characters and clothing (and a few props) I buy via the RL platform; for other models I usually go to cgtrader, kitbash3d, and occasionally turbosquid; most of the non-character models available on the RL platform are too low-poly for my taste (since they were primarily designed for RT or game use and I have no interest in either).
"But yes, back on topic: layered EXRs are a most vital output format for the film/VFX industry for layered compositing of render passes, and most of the major 3DCC render engines (except Blender) can output them." I guess so. However, I don't use different render passes, and I don't do compositing either. Instead, I only render a beauty pass as an EXR image sequence and turn that into an HDR video (usually) in DaVinci Resolve Studio. I may do a little color grading and add my "logo" and audio (usually music and/or sound FX) in DVRS, but no real "compositing". I stopped using all paid Adobe products years ago, so no After Effects for me, and I'm not particularly looking forward to trying to learn Fusion, either.
|
|
By Nirwana - Last Year
|
|
"because the 4080 is so fast" Don't you mean 4090? The 4080 is not all that fast. BTW: I have an RTX 4090 in one of my systems and it can still easily take me several minutes to render a single frame (not in iClone; I don't have iClone on that machine); the same is true for GPU-based simulations. The problem is that it is quite difficult to find (somewhat affordable) systems with more than one 4090: because of the size of that card, most mobos will only accommodate one (and the pro version of the 4090 for multi-GPU use, the RTX 6000 Ada, is way too expensive for hobby use); I certainly hope the 5090 will help in that regard.
Also, as I said before, with EXR the problem (in my experience) is not so much the GPU but the throughput of the storage system; although, if you only do 1080p, that should not be much of an issue then either due to the smaller file sizes.
|
|
By AutoDidact - Last Year
|
"I guess so. However, I don't use different render passes and I don't do compositing either. Instead, I only render a beauty pass as an EXR image sequence and turn that into an HDR video (usually) in DaVinci Resolve Studio. I may do a little color grading and add my 'logo' and audio (usually music and/or sound FX) in DVRS, but no real 'compositing'." Those layered EXRs are mostly used in VFX-laden films where CG elements have to be seamlessly composited with live actors and set footage (example: the "Transformers" franchise). All of that layered data (alpha channels, multi-pass and multi-channel images, etc.) becomes very important for matching the lighting on the "Autobots", or whatever, to the live-shot footage at the most granular level.
If you never combine live footage with major CG elements such as entire characters or sets, you do not need such a granular level of control, IMHO (I personally never do this). Now, there is the rare case where you might be rendering CG elements (fluid sims, faking a crowd scene, etc.) in one program and compositing them with CG elements rendered in another, but then you must endure the additional vicissitudes of matching camera moves (again, I personally never do this).
In the case of iClone, I honestly do not see any true advantage to outputting layered EXRs, as any compositing with non-iClone-rendered footage or live plates will be very, VERY obvious.
|
|
By Nirwana - Last Year
|
I don't mix live footage with CG content; my stuff is 100% CGI (that's the point; for a variety of reasons, I no longer wanted to use video footage shot with a camera).
Since I do all my simulation work inside of C4D as well, there is really no need for me to do that with outside tools (thus, no compositing). Again, I see the point for multi-layer EXRs and compositing, as you say, in VFX heavy film work but not really for my use cases (or for those of most hobbyist iClone users). There may also be cases for compositing, when being able to change DOF (by way of a z-depth render pass) or to "re-light" scenes in post may be useful, but I'd rather get it right the first time and not fix it in post. ;-)
|
|