VRED Forum

Video Memory for GPU RT

SOLVED
Message 1 of 6
Christian_Garimberti
913 Views, 5 Replies


Hi guys,

Working on a big project, I have almost reached the GPU memory limit of my RTX 5000.

I have two questions about this situation:

1. How can I optimize the model to reduce memory consumption: materials and textures, geometry, etc.? In other words, where should I focus my effort to get the best result, or is it impossible to know in advance?

 

2. I have two workstations:

  1. One is a new HP Z6: single Xeon, 128 GB RAM, 2 x RTX 5000.
  2. The other is an older Z620: dual Xeon, 64 GB RAM, 2 x RTX 5000.

Working on the same scene, I can start a GPU RT rendering on the Z6, while on the Z620 I get a CUDA error for memory overflow, even though both machines have:

  1. The same VRED version, 2021.1.
  2. The same NVIDIA driver (451.77), reinstalled with a clean install.
  3. SLI active.

The only difference is the Windows release (Z6: W10 2004, Z620: W10 1904).

Looking at the GL Info, I see that the memory consumption is slightly different, but in both cases it is lower than the available memory (16384 MB).

Z620: [GL Info screenshot - GCZ620.png]

Z6: [GL Info screenshot - Z6.png]
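
For reference, the same per-GPU numbers can also be cross-checked outside of VRED with a small script using the NVIDIA Management Library Python bindings (a sketch, assuming the pynvml / nvidia-ml-py package is installed):

from pynvml import (nvmlInit, nvmlShutdown, nvmlDeviceGetCount,
                    nvmlDeviceGetHandleByIndex, nvmlDeviceGetMemoryInfo)

nvmlInit()
try:
    # Print used/total memory for every GPU, e.g. both RTX 5000 cards
    for i in range(nvmlDeviceGetCount()):
        mem = nvmlDeviceGetMemoryInfo(nvmlDeviceGetHandleByIndex(i))
        print("GPU %d: %.0f MB used / %.0f MB total"
              % (i, mem.used / 2**20, mem.total / 2**20))
finally:
    nvmlShutdown()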

On the Z620 I can start GPU RT when working with a smaller scene.

How is this possible? Maybe GPU memory is not the only thing involved in this case?

 

Best

Chris

 

Christian Garimberti
Technical Manager and Visualization Enthusiast
Qs Informatica S.r.l. | Qs Infor S.r.l. | My Website
Facebook | Instagram | Youtube | LinkedIn


5 REPLIES
Message 2 of 6

Hi,

 

Windows itself uses GPU memory for drawing its user interface, so a good chunk will already be used up by that. The different Windows versions might also have an effect; I am not sure if 2004 finally fixes the GPU memory leak that has been in there since the initial Windows 10 release, but it might be possible.

 

As for optimizations:

First, make sure you turn on GPU or CPU raytracing before you load the scene, so that no memory is used up by OpenGL. There is no sharing between the OpenGL and raytracing data, so in the worst case, if you load the scene with OpenGL enabled and then turn on raytracing, you might end up with twice the memory consumption. Usually OpenGL should swap out data when memory is required, but I am not sure how this works with the CUDA-based allocations the GPU raytracer does.
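
As a rough sketch of that order of operations from the Script Editor (enable_raytracing() below is a placeholder stub, since the exact toggle call depends on your VRED version; load() is VRED's scene-load call):

def enable_raytracing():
    # Placeholder stub: substitute the raytracing toggle of your
    # VRED version here -- not an actual VRED API name.
    pass

enable_raytracing()                 # 1. switch renderer while the scene is still empty
load(r"C:\projects\big_scene.vpb")  # 2. only then load the heavy scene

This way the big scene data is only ever uploaded for the renderer that will actually use it.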

Then remove everything that is hidden or not used. At the moment we build everything in the scenegraph to allow for fast variant switching, which of course consumes more memory. So deleting everything that is hidden and doesn't need to be shown should be your first step. Environments especially can consume a lot of memory, so only keep the environments in your scene that you actually need.

 

The last thing to consider is that the denoiser can require quite a lot of memory. So if you are already at the memory limit and can't free up enough memory, you might need to do without the denoiser.

 

Kind regards

Michael



Michael Nikelsky
Sr. Principal Engineer
Message 3 of 6

Hi Michael,

thank you for your advice.

 

I will wait for the W10 2004 update on the Z620 to confirm whether it improves VRAM management.

 

I tried your second suggestion of turning on CPU or GPU RT before loading the scene, but:

If I start CPU RT, load my scene, and then switch to GPU RT, the memory used is more or less the same as when I load in OpenGL and then switch to GPU RT: I get 15285 MB going from OpenGL to GPU RT, and 15303 MB loading with CPU RT and then switching to GPU RT.
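
(For anyone repeating this comparison, the before/after numbers can also be captured outside VRED with the same pynvml bindings as in my first post; a sketch, with the load/switch step done manually in VRED in between:)

from pynvml import nvmlInit, nvmlDeviceGetHandleByIndex, nvmlDeviceGetMemoryInfo

nvmlInit()
gpu0 = nvmlDeviceGetHandleByIndex(0)

def used_mb():
    # Currently used memory of GPU 0, in MB
    return nvmlDeviceGetMemoryInfo(gpu0).used / 2**20

before = used_mb()
input("Load the scene / switch render mode in VRED, then press Enter...")
after = used_mb()
print("Delta: about %.0f MB" % (after - before))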

Trying to load the scene directly with GPU RT active causes a VRED crash. I have attached the log files.

Doing some tests, I think it could be the denoiser. Saving the scene with the denoiser off and then opening it with GPU RT active worked. After the scene was opened, I activated the denoiser and everything kept working.

This does not happen with smaller scenes; maybe it only occurs when I am near the memory limit.

 

Best

Chris

 


Message 4 of 6

The crash is interesting since it doesn't happen in the raytracer but in the OpenGL code trying to allocate memory; I am not sure for what though, maybe a manipulator or something like that.

But it does indeed look like there is no memory left on the GPU, which is causing the crash. Reducing the scene size somehow is probably the only way to solve this at the moment.

 

Kind regards

Michael



Michael Nikelsky
Sr. Principal Engineer
Message 5 of 6

... or upgrading to a bigger GPU... 😁

Thank you

Chris


Message 6 of 6

Hi Michael,

I would like to add a note about the crash I posted before.

Switching from GPU RT to OpenGL with the scene at the memory limit, I found these errors in the terminal:

 

Failed to unregister GL Buffer with Cuda

CUDA_ERROR_INVALID_GRAPHICS_CONTEXT: invalid OpenGL or DirectX context

OptiX Shutdown completed

 

Maybe this makes sense to you.

 

Best

Chris

 

