Hi everyone,
Working on a big project, I have almost reached the GPU memory limit of my RTX5000.
I have two questions about this situation:
1. How can I better optimize the model to reduce memory consumption? Materials and textures, geometry, etc. In other words: which of these should I work on to get the best result, or is it impossible to know in advance?
2. I have two workstations.
Working on the same scene, I can start a GPU raytracing render on the Z6, while on the Z620 I get a CUDA out-of-memory error.
Looking at the GL Info, I see that the memory consumption is slightly different on the two machines, but in both cases it is lower than the available memory (16384 MB).
(screenshots: GL Info on the Z620 and on the Z6)
On the Z620, working with a smaller scene, I can start the GPU raytracer.
How is this possible? Maybe more than just GPU memory is involved here?
Best
Chris
Christian Garimberti
Technical Manager and Visualization Enthusiast
Qs Informatica S.r.l. | Qs Infor S.r.l. | My Website
Facebook | Instagram | Youtube | LinkedIn
Solved by michael_nikelsky.
Hi,
Windows itself uses GPU memory for drawing its user interface, so a good chunk will already be used up by that. The different Windows versions might also have an effect; I am not sure whether 2004 finally fixes the GPU memory loss bug that has been in there since the initial Windows release, but it might be possible.
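As a quick sanity check (not VRED-specific), you can read how much VRAM is actually in use on each workstation from `nvidia-smi` before loading the scene. A minimal Python sketch, assuming the NVIDIA driver tools are installed and using the documented `--query-gpu` CSV output:

```python
import subprocess

def parse_memory_csv(csv_text):
    """Parse `nvidia-smi --query-gpu=memory.used,memory.total
    --format=csv,noheader,nounits` output into (used_mb, total_mb)
    pairs, one tuple per GPU."""
    gpus = []
    for line in csv_text.strip().splitlines():
        used, total = (int(x.strip()) for x in line.split(","))
        gpus.append((used, total))
    return gpus

def query_gpus():
    """Run nvidia-smi and return (used_mb, total_mb) per GPU."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True).stdout
    return parse_memory_csv(out)

if __name__ == "__main__":
    # Sample line in the shape nvidia-smi emits with the flags above:
    sample = "15285, 16384\n"
    for used, total in parse_memory_csv(sample):
        print(f"{used} / {total} MB used, {total - used} MB free")
```

Comparing the "free" numbers on the Z6 and the Z620 right before rendering should show how much the OS and other processes are already eating on each machine.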
As for optimizations:
First, make sure you turn on GPU or CPU raytracing before you load the scene, so that no memory is used up by OpenGL. There is no sharing between the OpenGL and raytracing data, so in the worst case, if you load the scene with OpenGL enabled and then turn on raytracing, you might end up with twice the memory consumption. Usually OpenGL should swap out data when memory is required, but I am not sure how this works with the CUDA-based allocations the GPU raytracer does.
Then remove everything that is hidden or not used. At the moment we build everything in the scenegraph to allow for fast variant switching, which of course consumes more memory. So deleting everything that is hidden and doesn't need to be shown should be the first thing to do. Environments in particular can consume a lot of memory, so keep only the environments you actually need in your scene.
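The "delete everything hidden" step can be scripted. VRED exposes its scene graph to Python, but since the exact API calls vary by version, the following is only a generic sketch of the idea with a stand-in `Node` class (all names here are hypothetical, not the VRED API):

```python
class Node:
    """Stand-in for a scene-graph node (hypothetical, not the VRED API)."""
    def __init__(self, name, visible=True, children=None):
        self.name = name
        self.visible = visible
        self.children = children or []

def prune_hidden(node):
    """Recursively drop hidden children so they no longer consume
    memory when the scene data is rebuilt for the raytracer."""
    node.children = [c for c in node.children if c.visible]
    for child in node.children:
        prune_hidden(child)
    return node

root = Node("Root", children=[
    Node("Car"),
    Node("OldVariant", visible=False),   # hidden: gets removed
    Node("Studio", children=[Node("HDR_unused", visible=False)]),
])
prune_hidden(root)
print([c.name for c in root.children])   # → ['Car', 'Studio']
```

In VRED itself you would do the equivalent through the scenegraph menu or the scripting console; the point is simply that hidden branches are deleted, not just switched invisible.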
The last thing to consider is that the denoiser can require quite a lot of memory. If you are already at the memory limit, you might have to go without the denoiser unless you can free up enough memory elsewhere.
Kind regards
Michael
Hi Michael,
thank you for your advice.
I will wait for the Windows 10 2004 update on the Z620 to confirm whether it improves VRAM management.
I tried your second suggestion, turning on CPU or GPU raytracing before loading the scene, but:
If I start CPU raytracing, load my scene, and then switch to GPU raytracing, the memory used is more or less the same as when I load in OpenGL and then switch to GPU raytracing: 15285 MB going from OpenGL to GPU raytracing, versus 15303 MB loading with CPU raytracing and then switching to GPU raytracing.
Trying to load the scene directly with GPU raytracing active causes a VRED crash. I have attached the log files.
Doing some tests, I think it could be the denoiser. Saving the scene with the denoiser off and then opening it with GPU raytracing active worked. After the scene was opened, I activated the denoiser and everything kept working.
This does not happen with smaller scenes, so maybe it only occurs when I am near the memory limit.
Best
Chris
The crash is interesting, since it doesn't happen in the raytracer but in the OpenGL code trying to allocate memory. I'm not sure for what, though; maybe a manipulator or something like that.
But it does indeed look like there is no memory left on the GPU, which causes the crash. Reducing the scene size somehow is probably the only way to solve this at the moment.
Kind regards
Michael
... or upgrading to a bigger GPU... 😁
Thank you
Chris
Hi Michael,
I would like to add a note to the crash I posted before.
When switching from GPU raytracing to OpenGL, with the scene at the memory limit, I found these errors in the terminal:
Failed to unregister GL Buffer with Cuda
CUDA_ERROR_INVALID_GRAPHICS_CONTEXT: invalid OpenGL or DirectX context
OptiX Shutdown completed
Maybe this makes sense to you.
Best
Chris