Arnold for Cinema 4D Forum
Rendering with Arnold in CINEMA 4D using the C4DtoA plug-in.

Different Displacement between CPU and GPU Rendering

Message 1 of 8
adploshko
702 Views, 7 Replies

Hi,

I've been experimenting with Arnold for C4D ver. 4.0.3.2 (Arnold 7.0.0.3) and found an odd phenomenon: when I render a displacement shader, the results differ between CPU and GPU. The CPU render shows noticeably more displacement detail than the GPU render, even though the shader and project settings are identical.

Is there a way to make displacement render identically on both devices, or is this a bug?

Here are the two renders:

CPU Render:

arnold CPU (render SubD) 5 min 3 secs.png

GPU Render:

arnold GPU (render SubD) 30 secs.png

Thanks

7 REPLIES
Message 2 of 8
peter_horvath
in reply to: adploshko

Do you have autobump enabled?

There was a similar issue reported recently regarding displacement and autobump: https://forums.autodesk.com/t5/arnold-for-cinema-4d-forum/cpu-gpu-difference-when-material-applied-t...
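If you want to toggle it outside the UI, autobump maps to the `disp_autobump` flag on the Arnold polymesh node. A minimal sketch with the Arnold Python bindings (the mesh name is hypothetical, and the `None` universe argument follows the Arnold 7 convention; earlier versions omit it):

```python
from arnold import AiNodeLookUpByName, AiNodeSetBool

# Look up the exported polymesh by name (hypothetical name; None means
# the default universe in the Arnold 7 bindings).
mesh = AiNodeLookUpByName(None, "mySubDMesh")

# 'disp_autobump' is the flag the Arnold tag exposes as Autobump;
# disabling it should make CPU and GPU displacement match.
AiNodeSetBool(mesh, "disp_autobump", False)
```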


// Peter Horvath
// C4DtoA developer
Message 3 of 8
adploshko
in reply to: peter_horvath

Disabling Auto-Bump does make the CPU and GPU displacement results identical. However, without it the details look soft and less pronounced.

So I tried adding the detail back with a bump map, and once again the discrepancy between CPU and GPU becomes apparent; I think it's even more noticeable in the examples below.

I like the amount of detail the CPU renders, but I want that level of quality on the GPU. Is there no way to make this happen?

CPU (Manual Bump):

Arnold CPU Manual Bump.png

GPU (Manual Bump):

Arnold GPU manual bump.png
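For reference, a manual bump like this corresponds on the Arnold side to a `bump2d` node between the texture and the surface shader's `normal` input. A rough sketch with the Arnold Python bindings (node names and the texture path are made up for illustration; Arnold 7 signatures, where node creation takes a universe argument):

```python
from arnold import AiNode, AiNodeLink, AiNodeSetFlt, AiNodeSetStr

# Texture that drives the bump (hypothetical path).
tex = AiNode(None, "image", "detailTex")
AiNodeSetStr(tex, "filename", "detail_map.png")

# bump2d perturbs the shading normal using a scalar map.
bump = AiNode(None, "bump2d", "detailBump")
AiNodeSetFlt(bump, "bump_height", 0.2)   # bump strength, tune to taste
AiNodeLink(tex, "bump_map", bump)        # image output -> bump input

# Feed the perturbed normal into the surface shader.
surf = AiNode(None, "standard_surface", "mySurface")
AiNodeLink(bump, "normal", surf)
```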


Message 4 of 8
thiago.ize
in reply to: adploshko

A workaround until this is fixed is to increase the number of subdivision iterations on the mesh. A higher-resolution mesh can show more detail without autobump. The downside, of course, is that this uses more memory, so try to add as few extra subdivision iterations as possible.
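On the Arnold side that knob is `subdiv_iterations` on the polymesh (a byte, so capped at 255), which C4DtoA drives from the Arnold tag. A minimal sketch with the Arnold Python bindings (hypothetical mesh name, Arnold 7 universe argument):

```python
from arnold import AiNodeLookUpByName, AiNodeSetStr, AiNodeSetByte

mesh = AiNodeLookUpByName(None, "mySubDMesh")  # hypothetical name

AiNodeSetStr(mesh, "subdiv_type", "catclark")  # Catmull-Clark subdivision
# Each iteration roughly quadruples the face count, so memory grows fast;
# raise this only as far as the displacement detail requires.
AiNodeSetByte(mesh, "subdiv_iterations", 4)
```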

Message 5 of 8
adploshko
in reply to: thiago.ize

Unfortunately, this isn't an option for me. Both examples already use the maximum subdivisions in the Arnold tag, and adding more geometry through the SubD generator will greatly slow down my scene once more than a few objects carry that level of detail. So I don't see how adding any more tessellation is practical.

Message 6 of 8
thiago.ize
in reply to: adploshko

Arnold doesn't have a limit on the number of subdivisions; it can easily generate more triangles than you have memory for. Maybe there's a setting missing? There's the global max for subdivision, which you probably want to keep at 255 (https://docs.arnoldrenderer.com/display/A5AFCUG/Subdivision), and then there's the per-object subdivision, which you set in https://docs.arnoldrenderer.com/display/A5AFCUG/Subdivision+Settings and which actually controls how many .... For the per-object setting, if you're using the adaptive error, you might need to make that value smaller so it subdivides further.
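In script terms, that's the `max_subdivisions` cap on the options node plus the adaptive controls on the polymesh. A hedged sketch with the Arnold Python bindings (Arnold 7 conventions; the mesh name is hypothetical):

```python
from arnold import (AiUniverseGetOptions, AiNodeLookUpByName,
                    AiNodeSetInt, AiNodeSetStr, AiNodeSetFlt, AiNodeSetByte)

# Global ceiling on subdivision depth; keep it high enough that it never
# clips the per-object iterations. (None = default universe in Arnold 7;
# older versions take no argument.)
options = AiUniverseGetOptions(None)
AiNodeSetInt(options, "max_subdivisions", 255)

# Per-object adaptive subdivision: with an adaptive metric, a smaller
# error threshold lets Arnold subdivide further where the displacement
# actually needs it.
mesh = AiNodeLookUpByName(None, "mySubDMesh")    # hypothetical name
AiNodeSetStr(mesh, "subdiv_adaptive_metric", "edge_length")
AiNodeSetFlt(mesh, "subdiv_adaptive_error", 0.5) # smaller = finer
AiNodeSetByte(mesh, "subdiv_iterations", 8)      # per-object upper bound
```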

Message 7 of 8
molora8946
in reply to: adploshko

Always use autobump when using displacement, as it does bring more detail into the renders 🙂

Message 8 of 8

So, from what I understand, GPU rendering in Arnold is not yet complete. I was testing GPU rendering for an hour and already found a game-breaking bug. Not being able to use autobump in SSS is a dealbreaker, and it makes me wonder what other bugs I'll find if I play around a bit more.

So what do you think, genuinely: is GPU rendering in Arnold ready? Or should I try other GPU renderers?
