Can VR benefit from dual RTX6000 ada?

luqiang.han
Advocate
Message 1 of 5
Autodesk suggests activating SLI with NVLink when using a VR HMD. However, the RTX 6000 Ada does not support NVLink. If I use VR mode, will two GPUs only get the same FPS as one GPU?

Replies (4)
Message 2 of 5

seiferp
Community Manager
Accepted solution

Yes, there is a benefit to using dual GPUs in VR, since VRED uses one GPU per eye in stereo.

Message 3 of 5

michael_nikelsky
Autodesk

To clarify a bit: for OpenGL, NVLink is only used for a faster transfer of the rendered image from the second GPU to the first. Without NVLink this still works fine (SLI must still be enabled, since the GPUs have to be set to a specific configuration in the driver); the transfer is just a bit slower, but still fast enough for VR.

The same applies to GPU raytracing in general: there is a minor performance impact from not having NVLink (at least in theory; in practice it doesn't matter). A different story, however, is memory sharing for GPU raytracing. This does not work without NVLink, since the PCIe bus is just far too slow, especially when tiny chunks of data need to be transferred between the GPUs. That's why memory sharing does not work on Ada GPUs.
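A rough back-of-envelope calculation illustrates the distinction. The bandwidth and latency figures below are illustrative assumptions, not measured VRED or hardware numbers: one big per-frame image copy is bandwidth-bound and fits easily into a VR frame budget even over PCIe, while memory sharing means huge numbers of tiny fetches, each of which pays the full link latency.

```python
# Illustrative sketch: why copying a finished frame over PCIe is fine,
# but sharing scene memory between GPUs is not.
# All numbers are rough assumptions for illustration only.

PCIE4_X16_GBPS = 32.0     # assumed usable PCIe 4.0 x16 bandwidth, GB/s
PCIE_LATENCY_US = 10.0    # assumed round-trip latency for one small transfer
NVLINK_GBPS = 100.0       # assumed NVLink bridge bandwidth, GB/s
NVLINK_LATENCY_US = 1.0   # assumed NVLink access latency

def bulk_transfer_ms(size_bytes, bandwidth_gbps):
    """One large transfer: bandwidth-bound, latency negligible."""
    return size_bytes / (bandwidth_gbps * 1e9) * 1e3

def chunked_transfer_ms(n_chunks, chunk_bytes, bandwidth_gbps, latency_us):
    """Many small transfers: each one pays the full link latency."""
    per_chunk_us = latency_us + chunk_bytes / (bandwidth_gbps * 1e3)
    return n_chunks * per_chunk_us / 1e3

# Case 1: copy one rendered eye image (2160x2160, RGBA16F = 8 bytes/pixel).
frame_bytes = 2160 * 2160 * 8
print(f"frame copy over PCIe: {bulk_transfer_ms(frame_bytes, PCIE4_X16_GBPS):.2f} ms")
# ~1.2 ms, well inside a 90 Hz VR frame budget of ~11.1 ms.

# Case 2: a ray tracer fetching 100k tiny (128 B) chunks from the other GPU.
pcie_ms = chunked_transfer_ms(100_000, 128, PCIE4_X16_GBPS, PCIE_LATENCY_US)
nvlink_ms = chunked_transfer_ms(100_000, 128, NVLINK_GBPS, NVLINK_LATENCY_US)
print(f"100k chunk fetches over PCIe:   {pcie_ms:.0f} ms")
print(f"100k chunk fetches over NVLink: {nvlink_ms:.0f} ms")
# Both cases are latency-dominated, so the assumed ~10x latency gap makes
# PCIe roughly 10x slower, and far outside any realtime frame budget.
```

The takeaway matches the reply above: the per-frame image copy that dual-GPU OpenGL stereo needs survives the move to PCIe, but the fine-grained access pattern of memory sharing does not.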



Michael Nikelsky
Sr. Principal Engineer
Message 4 of 5

luqiang.han
Advocate
Advocate
Another question: if I have one PC with dual RTX 6000 Ada and two PCs with dual A6000, and I use all three PCs as a cluster for realtime raytracing, would I get better performance than with three PCs with dual A6000?
Or does the performance depend on the worst PC?
Message 5 of 5

michael_nikelsky
Autodesk

There should be some basic load balancing happening between the systems, but I haven't tested it. Note, however, that denoising and DLSS in particular (and to a lesser extent glow/glare) will severely limit performance in a cluster, since the number of buffers that need to be sent to the head node will quickly hit PCIe and network limits. Without denoising it should scale OK, though.



Michael Nikelsky
Sr. Principal Engineer