Ok, I think I need to set the record straight here regarding Unreal. I think everybody is referring to the Star Wars demo they just showed (if not, I am talking about this one: https://www.youtube.com/watch?v=J3ue35ago3Y).
I attended a session at this year's GTC that gave a full rundown of what this demo really was.
So: the whole thing was rendered on a DGX-Station with four GV100 GPUs (roughly 65k USD before any discounts) at Full HD (1920x1080) at (mostly) 24 fps - so roughly 50 MPixels/s. For VR you are usually rendering 3024x1680 at 90 fps, so about 450 MPixels/s. You would need 9 of these stations (about 585k USD - and you will probably get a discount, so let's just say half a million USD should do it).
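A quick sanity check of those throughput numbers (resolutions and frame rates as stated above; nothing else assumed):

```python
# Back-of-the-envelope pixel throughput for the numbers quoted above.

def mpixels_per_s(width: int, height: int, fps: int) -> float:
    """Megapixels per second at a given resolution and frame rate."""
    return width * height * fps / 1e6

demo = mpixels_per_s(1920, 1080, 24)  # the demo: Full HD at 24 fps
vr = mpixels_per_s(3024, 1680, 90)    # typical VR target across both eyes

# Scaling linearly in pixel rate, how many DGX-Stations would VR need?
stations = vr / demo
print(f"demo: {demo:.0f} MPix/s, VR: {vr:.0f} MPix/s, stations: {stations:.1f}")
```

The linear scaling is of course naive, but it is the same assumption the 9-station figure rests on.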
But this is just what you need to render that one scene. So let's take a closer look at it, shall we?
The environment is completely baked; only the floor gets a layer of reflection. The characters sum up to roughly 2 million triangles, so let's be gentle and round up to 3 million. There are exactly two textured area lights in the scene, animated and evaluated with a combination of analytic integration and 1 shadow ray per light. The analytic integration is nice and makes the illumination noise-free, but it is actually wrong, since the shadow ray only calculates how much of the whole light is in shadow. But it is good enough, so I won't complain about this one. Then there is ambient occlusion done with 2 rays per pixel, plus 1 reflection ray (I don't remember if it was 1 or 2 bounces). The shaders also exist in two variants: one high-quality version for directly visible surfaces and one massively stripped-down version for when a reflection ray hits the surface.
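Tallying up the per-pixel ray counts just listed (my own bookkeeping, not NVIDIA's; the single-bounce reflection assumption is mine):

```python
# Per-pixel ray budget at the primary hit, as described above.
lights = 2                # two textured area lights in the scene
shadow_rays = lights * 1  # 1 shadow ray per light
ao_rays = 2               # ambient occlusion: 2 rays per pixel
reflection_rays = 1       # 1 reflection ray (assuming a single bounce)

rays_per_pixel = shadow_rays + ao_rays + reflection_rays
print(rays_per_pixel)  # 5
```

Five rays per pixel is tiny by offline-rendering standards, which is why the raw result is noisy and the denoiser has to carry so much of the load.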
What you get from all of this is a pretty noisy image, and this is where the magic happens.
Nvidia has designed some very clever filtering for the area-light shadows, the ambient occlusion and the reflections. Combined with temporal antialiasing, you get a pretty good-looking image - but I would not call this GI, since there are essentially no diffuse indirect illumination rays and the glossy reflection rays are also quite limited.
Now back to the original question about GI with a 30-million-triangle model: So... 10 times more geometry (yeah, I know, raytracing is quite good at this, but it will still probably cost you a factor of 2 or so), probably much more complex shaders (I mean, the stormtroopers were chrome and untextured plastic with maybe a slight bump and a roughness map; it does not get much simpler than that) and real(!) GI (for the sake of simplicity let's do a photon map with final gathering for this and skip the ambient occlusion, so this might come cheap, at least for static scenes) and, to not make it completely unrealistic, still use the denoising filters...
...well, I would say: quadruple the number of DGX-Stations and see if it works and fits within your 2 million USD budget...
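Putting rough numbers on that guess (station count and price are the ones quoted earlier; the factor of 4 is my quadrupling estimate; all of it is back-of-the-envelope):

```python
# Hedged cost sketch for the scaled-up scenario; every factor is a guess.
STATION_PRICE_USD = 65_000  # DGX-Station list price used above
vr_stations = 9             # stations needed just to hit VR pixel rates

scale_factor = 4            # more geometry, heavier shaders, real GI

stations = vr_stations * scale_factor
list_cost = stations * STATION_PRICE_USD
print(stations, list_cost)  # 36 stations, 2,340,000 USD at list price
```

At list price that already overshoots the 2 million USD budget, which is exactly the point of the "see if it fits" remark - though discounts would pull it back toward the line.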
Don't get me wrong, I think they did an awesome job with this one - but there are quite a few differences between an engineering dataset and a finely hand-tuned demo scene.
And about GPU raytracing in VRED: We'll see ... 😉
Michael Nikelsky
Sr. Principal Engineer