Arnold General Rendering Forum

Can reducing the bucket size aid in reducing memory consumption ?

Message 1 of 4
328 Views, 3 Replies


Hi all,


I'm working with a complex scene that contains a dense polymesh, and it crashes during rendering. The error message points to memory allocation, so I believe the renderer is demanding more memory than the machine can supply. I tried reducing the bucket size (from 64 to 16) as a possible fix, but the memory consumption reported in the render log remained almost unchanged. Can reducing the bucket size actually help, and are there other effective strategies or settings to prevent crashes caused by excessive memory consumption?
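For context, here is a sketch of the memory-related controls I have been adjusting on the options node. The values are illustrative, not what I consider correct for this scene, and I am assuming the parameter names (bucket_size, bucket_scanning, texture_max_memory_MB, threads) from the Arnold options node:

```
options
{
 bucket_size 16
 bucket_scanning "spiral"
 texture_max_memory_MB 2048
 threads 8
}
```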

The error message is as follows:

00:01:53 25017MB | starting 8 bucket workers of size 64x64 ...
00:02:24 18786MB WARNING | can't allocate 138991248 bytes (virtual memory : 31700 Mb)
00:02:24 18786MB | * FAILED ASSERTION at 00:00:00, pixel (896, 440)

00:02:24 18790MB ERROR | signal caught: error C0000005 -- access violation
* CRASHED in Ordinal0 at 00:00:00, pixel (896, 440)
* signal caught: error C0000005 -- access violation
* backtrace:
>> 0 0x00007fff520991d1 [ai ] Ordinal0
* 1 0x00007fff52098fc8 [ai ] Ordinal0
* 2 0x00007fff5209a626 [ai ] Ordinal0
* 3 0x00007fff520b5ea6 [ai ] Ordinal0
* 4 0x00007fff523977c7 [ai ] Ordinal0
* 5 0x00007fff526f6b55 [ai ] AiOutputIteratorDestroy
* 6 0x00007fff526f86d0 [ai ] AiOutputIteratorDestroy
* 7 0x00007fff523d4d92 [ai ] Ordinal0
* 8 0x00007fff526f9212 [ai ] AiOutputIteratorDestroy
* 9 0x00007fff5231346d [ai ] Ordinal0
* 10 0x00007fff5249c965 [ai ] AiTraceProbe
* 11 0x00007fff526f6f34 [ai ] AiOutputIteratorDestroy
* 12 0x00007fff525e274a [ai ] AiOutputIteratorDestroy
* 13 0x00007fff524aa312 [ai ] AiLightsPrepare
* 14 0x00007fff52593db1 [ai ] AiOutputIteratorDestroy
* 15 0x00007fff5259cee9 [ai ] AiOutputIteratorDestroy
* 16 0x00007fff524a713c [ai ]
* 17 0x00007fff524a0dcb [ai ]
* 18 0x00007fff5271678f [ai ] AiOutputIteratorDestroy
* 19 0x00007fff52717c91 [ai ] AiOutputIteratorDestroy
* 20 0x00007fff5270f7cc [ai ] AiOutputIteratorDestroy
* 21 0x00007fff523c955a [ai ] Ordinal0
* 22 0x00007fff52d10766 [ai ] AiADPSendPayload
* 23 0x00007fff52d10383 [ai ] AiADPSendPayload
* 24 0x00007fff52a592b4 [ai ] AiADPSendPayload
* 25 0x00007fff52d119f4 [ai ] AiADPSendPayload
* 26 0x00007fff531819cc [ai ] AiADPSendPayload
* 27 0x00007fff53181b6d [ai ] AiADPSendPayload
* 28 0x00007ffffefb1bb2 [ucrtbase] configthreadlocale
* 29 0x00007ff8006d7614 [KERNEL32] BaseThreadInitThunk
* 30 0x00007ff8015e26f1 [ntdll ] RtlUserThreadStart
* loaded modules:
* 0x00007fff51e30000 ai
* 0x00007ffffef90000 ucrtbase
* 0x00007ff8006c0000 KERNEL32
* 0x00007ff801590000 ntdll

Render error. Time elapsed: 21 min 35.57 s
Node 'L020_bg_data': Render unsuccessful. Process crashed. Exit code: -1073741819
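Doing the conversion on the logged figures, the failed request itself looks small next to the reported virtual memory, which makes me suspect overall memory pressure rather than a single oversized allocation:

```python
# Figures copied from the render log above.
failed_alloc_bytes = 138_991_248
virtual_memory_mb = 31_700

# Convert the failed allocation from bytes to mebibytes.
failed_alloc_mb = failed_alloc_bytes / 2**20
print(f"failed allocation: {failed_alloc_mb:.1f} MB "
      f"(process virtual memory: {virtual_memory_mb} MB)")
```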


Any suggestions are welcome, and I appreciate any assistance.

Message 2 of 4
in reply to: huseila

Does the crash happen if you hide the dense poly mesh?

Are you able to separate the dense mesh into smaller mesh objects?

What happens when you reduce the bucket size?

From the docs:

The size of the image buckets. The default size is 64x64 pixels, which is a good compromise; bigger buckets use more memory, while smaller buckets may perform redundant computations and filtering and thus render slower, but they give faster initial feedback.
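As a rough sketch of why smaller buckets rarely move the needle on total memory: the per-bucket framebuffer cost scales with the square of the bucket size, but it is tiny compared with geometry and textures. The AOV count and channel width below are assumptions for illustration, not values from your scene:

```python
# Rough per-bucket framebuffer cost: pixels * channels * bytes per channel.
# 8 AOVs of 4 float32 channels each is an assumption for illustration.
def bucket_mb(bucket_size, num_aovs=8, channels=4, bytes_per=4):
    return bucket_size**2 * num_aovs * channels * bytes_per / 2**20

for size in (64, 32, 16):
    print(f"{size}x{size}: {bucket_mb(size):.3f} MB per bucket")
```

With these assumptions, even eight 64x64 bucket workers account for only about 4 MB of framebuffers, so shrinking the buckets cannot make a dent in an 18 GB footprint; the memory is going to the scene data, not the buckets.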


Maybe increasing the log verbosity will give more information?




Lee Griggs
Arnold rendering specialist
Message 3 of 4
in reply to: lee_griggs

Hi Lee,

I can get the render to complete by decreasing the number of subdivision iterations from 2 to 1, but this compromises the smoothness of the shape. Hiding certain geometry is an option, but I need it visible in frame. Reducing the bucket size did not noticeably reduce the crashes. Below are my render settings and the mesh smooth settings for the object, along with some information from the render log. I believe these settings are reasonable, as none of them have exceptionally high values.

If my machine with 64 GB of memory can peak at 5X GB of CPU memory usage during rendering, are there settings that prevent resource overload and crashes, even at the cost of slower rendering? I'd prefer a slower but stable render over constant crashes on a render farm.

I've also noticed an unexpected discrepancy between the "ginstance" and "instance_source" counts in the render log. It shows 8,653,085 "ginstance" and 107 "instance_source", but I haven't seen a significant number of instances in my Katana scene. Does this imply something different, such as the total polygon count of the instances, rather than the number of instanced objects?
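Doing the arithmetic on those counts (assuming "ginstance" counts instance nodes rather than polygons, which is my reading of the stats, not something I have confirmed), the figures would imply very heavy instancing from a handful of sources, perhaps from a scatterer or procedural rather than instances authored by hand:

```python
# Counts copied from the render log.
ginstances = 8_653_085    # "ginstance" nodes reported
sources = 107             # "instance_source" objects reported

# Average number of instances generated per source object.
per_source = ginstances / sources
print(f"~{per_source:,.0f} instances per source on average")
```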



Message 4 of 4
in reply to: huseila

Increase your system swap file (page file); then it won't crash, though it may run slower once you exceed physical RAM.
