Hi,
I've been experimenting with Arnold for C4D ver. 4.0.3.2 (Arnold 7.0.0.3), and I've found an odd phenomenon. When I render a displacement shader, the results differ between CPU and GPU. The CPU render shows more displacement detail than the GPU render, even though the shader and project settings are unchanged.
Is there a way to make the displacement render identically on both devices, or is this a bug?
Here are the 2 render examples:
CPU Render:
GPU Render:
Thanks
Do you have autobump enabled?
There was a similar issue reported recently regarding displacement and autobump: https://forums.autodesk.com/t5/arnold-for-cinema-4d-forum/cpu-gpu-difference-when-material-applied-t...
Disabling Auto-Bump does make the displacement results identical. However, disabling it makes the details soft and less pronounced.
So I tried adding detail to the displacement with a bump map, and once again the discrepancy in detail between CPU and GPU becomes apparent. I think it's even more noticeable in the examples below.
I like the amount of detail the CPU renders, but I want that level of quality on the GPU. Is there no way to make this happen?
CPU (Manual Bump):
GPU (Manual Bump):
A workaround until this is fixed is to increase the number of subdivision steps on the mesh. A higher-res mesh can show more detail without using autobump. The downside, of course, is that this uses more memory, so try to add as few extra subdivision iterations as possible.
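To get a feel for why "as few extra iterations as possible" matters: each Catmull-Clark subdivision step quadruples the quad count, so memory grows roughly 4x per iteration. A minimal back-of-the-envelope sketch (the base quad count and bytes-per-quad figure are illustrative assumptions, not Arnold's actual memory accounting):

```python
# Each Catmull-Clark iteration splits every quad into four,
# so the tessellated mesh grows by ~4x per subdivision step.

def subdivided_quads(base_quads: int, iterations: int) -> int:
    """Quad count after a number of Catmull-Clark subdivision steps."""
    return base_quads * 4 ** iterations

base = 10_000  # illustrative: a moderately dense object
for it in range(6):
    quads = subdivided_quads(base, it)
    # Assume ~100 bytes per quad (positions, normals, UVs, indices);
    # a ballpark figure, not a measured Arnold value.
    mb = quads * 100 / 1e6
    print(f"iterations={it}: {quads:>12,} quads, roughly {mb:,.0f} MB")
```

So going from 3 to 5 iterations is a 16x jump in geometry, which is why bumping the iteration count to compensate for autobump gets expensive quickly.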
Unfortunately, this isn't an option for me. Both examples already have maximum subdivisions in the Arnold tag. And adding more geometry through the SubD generator will greatly slow down my scene once it has more than a few objects at that added level of detail. So I don't see how adding any more tessellation can be practical.
Arnold doesn't have a limit on the number of subdivisions. It can easily generate more triangles than you have memory for. Maybe a setting is missing? There's the global max for subdivision, which you probably want to keep at 255 (https://docs.arnoldrenderer.com/display/A5AFCUG/Subdivision), and then there's the per-object subdivision, which you set in https://docs.arnoldrenderer.com/display/A5AFCUG/Subdivision+Settings and actually controls how many .... For the per-object setting, if you're using the adaptive error, you might need to make that value smaller so it subdivides further.
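For reference, the per-object controls described above correspond to parameters on the polymesh node if you inspect an exported .ass file. A sketch of the relevant block (parameter names are from Arnold's polymesh documentation; the object name and values are illustrative, not from the scene in this thread):

```
polymesh
{
 name myObject              # illustrative object name
 subdiv_type "catclark"     # Catmull-Clark subdivision
 subdiv_iterations 5        # per-object maximum iterations
 subdiv_adaptive_error 0.5  # smaller value -> finer adaptive tessellation
 disp_height 1
 disp_autobump off          # off makes CPU and GPU match, at the cost of fine detail
}
```

Lowering `subdiv_adaptive_error` (or raising `subdiv_iterations`) refines the mesh further, trading memory for the detail that autobump would otherwise fake.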
Always use autobump when using displacement, as it does bring more detail into the renders 🙂
So, from what I understand, GPU rendering in Arnold is not yet complete. I tested GPU rendering for an hour and already found a game-breaking bug. Not being able to use autobump in SSS is a dealbreaker, and it makes me wonder what other bugs I'll find if I play around a bit more.
So, genuinely, what do you think: is GPU rendering in Arnold ready? Or should I try other GPU renderers?