iray: Titan X vs Quadro M6000

SOLVED
Message 1 of 26
Anonymous
22680 Views, 25 Replies


Hi,

 

I recently stumbled upon a review showing a 3ds Max iray benchmark (admittedly Max 2013, so quite an old iray version) comparing the M6000, K5200, Titan X and GTX 980.

 

 

Shockingly, the M6000, despite being the Titan X's twin, shows three times (!!!) the performance of the Titan. So are we back in the days when consumer cards had built-in brakes to make Quadro look more tasty? Yes, it's an older iray, but it's equally old for all the cards listed above. Even the older Kepler smashes the Titan...

Message 2 of 26
Out-Of-Light
in reply to: Anonymous

I don’t think this test is accurate.

 

Nvidia released a patch for the Maxwell desktop GPUs for iray in 3ds Max 2015, and it makes quite a difference.

 

A 500-iteration test on a GTX 970 went from taking 1 minute 24 seconds down to 23 seconds!
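That works out to roughly a 3.7x speedup from the patch alone. A quick sanity check in Python, using the two times above:

# Maxwell-patch speedup on the GTX 970, times from this post:
# 1 min 24 s before the patch, 23 s after.
before_s = 84
after_s = 23
print(f"speedup: {before_s / after_s:.2f}x")  # -> speedup: 3.65x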

 

So, seeing as iray only uses single precision, which is not crippled on the Titan X, and both cards have the same chipset, I would imagine that if the test were done again in 3ds Max 2015 with the Maxwell patch, the Titan X would outperform the M6000, as it has slightly faster clock rates.
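If you want to compare the cards on paper, peak single-precision throughput is usually estimated as cores x 2 FLOPs (one fused multiply-add) per clock x clock rate. A rough sketch in Python - the 3072-core count is the published GM200 spec both cards share, but the clock figures here are approximate reference clocks, so treat the exact numbers as assumptions:

# Back-of-the-envelope peak SP throughput: cores * 2 (FMA) * clock.
# 3072 CUDA cores is the GM200 count shared by the Titan X and M6000;
# clocks are approximate reference values and vary with boost behaviour.
def peak_sp_tflops(cuda_cores: int, clock_ghz: float) -> float:
    return cuda_cores * 2 * clock_ghz / 1000.0

print(peak_sp_tflops(3072, 1.000))  # ~6.1 TFLOPS at base clock
print(peak_sp_tflops(3072, 1.075))  # ~6.6 TFLOPS at typical boost

Plugging each card's real boost clock into the same formula is the only thing that separates them, which is why any honest iray gap should be a few percent, not 3x.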

 

• 3ds Max 4.2 through to 2021 / Mudbox 2020 / Fusion 360
• Gigabyte Aorus Xtreme TRX40 motherboard
• AMD Threadripper 3970X processor (32 cores / 64 threads)
• Thermaltake custom cooling solution
• 64GB G.Skill DDR4 3600MHz C14 quad-channel memory
• 2x Nvidia Titan RTX 24GB GPUs + NVLink
• 2x Asus PG348Q 3440 x 1440 IPS monitors
• Wacom Intuos Pro large graphics tablet
• 2TB Seagate FireCuda 520 NVMe (OS & apps)
• 1TB Seagate FireCuda 520 NVMe (projects)
• 2TB Seagate HDD (media storage, batch render files)
• Corsair 1600W PSU
• Pioneer Blu-ray writer (external)
• Roccat Leadr mouse
• Roccat Vulcan 121 keyboard
• Lian Li O11 Dynamic XL case
• OS: Windows 10 Professional x64
Message 3 of 26
Anonymous
in reply to: Anonymous

Well, I'm not talking about a possible hardware bottleneck of the Titan X compared to the M6000 (there is no such bottleneck). I'm talking about driver optimizations for the Quadro line that are purposely not implemented for the consumer line. So, looking at those benchmarks in 3ds Max 2013, what else could explain the huge speed advantage of the Quadros? Maybe the patch you're talking about will also affect the M6000 and make it even faster... so all Maxwell chips would get faster, but the Quadros would still outperform their consumer counterparts by a factor of three.

Message 4 of 26
Out-Of-Light
in reply to: Anonymous

Hi,

 

I didn't say anything about bottlenecks; bottlenecking has nothing to do with what I wrote. I mentioned the GPU clock frequencies, which are slightly faster on the Titan X. Both the Titan X and the M6000 have exactly the same single-precision capability, at an estimated 7 teraflops; only double precision has been crippled on the Titan X, which iray does not use.

 

As far as I know, Maxwell GPUs are not supported in 3ds Max 2013: they will not be recognised by iray, which reverts to CPU rendering instead. The patch for 3ds Max 2015 covers both Quadro and GeForce Maxwell cards, and updates the iray DLL to recognise Maxwell. So I am wondering how the review you speak of managed to test Maxwell GPUs in 3ds Max 2013 when they are not compatible.

 

When the real benchmarks comparing the M6000 and Titan X start flowing onto the internet, I am 100% certain that the Titan X, and possibly the GTX 980 Ti (if it has the fully unlocked cores), will be faster than the M6000 in iray. For people that use Blender, which takes advantage of double-precision compute, the M6000 will outperform the Titan X, as DP has been crippled.

 

The M6000 and Titan X are exactly the same chipset, with exactly the same number of cores and exactly the same single-precision compute capability.

 

There are advantages to Quadro cards. They come from a higher yield rate out of the factory, so they're less likely to fail over long periods of use. They are clocked lower to reduce heat in rack environments, giving better stability and a longer lifespan, and they usually have a few extra features, like better colour depth.

 

The Titans are good for people that don't need double precision but do need the single-precision compute power of the Quadro, plus the increased VRAM for large scenes/data sets, and they will last as long as the Quadros if their heat is managed correctly when rendering.

 

I suspect that benchmark you found is a work of fiction, and that the Titan X and M6000 will be almost identical when rendering in iray, with the Titan X slightly faster because of the faster clock rates.

 

Let's wait for the true benchmarks in 3ds Max 2015 with the Maxwell patch applied. Maybe I will be eating my words, but I doubt it.

 

I can explain what bottlenecking is, if you want.

 

Cheers. 

 

 

Message 5 of 26
Anonymous
in reply to: Anonymous

By HW bottleneck I meant the double precision 🙂

But, AFAIK, DP on Maxwell is poor on both the Titan and the Quadro. I believe I even read an article where DP on the M6000 was just as poor as on the Titan X. All previous GPU architectures had better DP on the Quadros, so Maxwell takes a different road. Both cards really do seem to be twins.

 

The review, as you can see, is by Tom's Hardware, which is quite well known. I'm also very curious how they managed to test with Max 2013... Maybe it's just a typo and they used 2015. Still, the huge performance difference needs to be explained...

Message 6 of 26
Out-Of-Light
in reply to: Anonymous

Nvidia crippling DP compute to promote their extortionate Quadro/Tesla cards isn't a bottleneck, but thanks for clearing up the misunderstanding.

 

If Nvidia decided to also cripple SP compute through drivers (the M6000 and Titan X have exactly the same SP compute specs), it would be a seriously bad business move. The millions of freelancers, small companies and studios that use Titans for SP GPU rendering or number crunching would stick with their current Titans rather than upgrade, and the only people buying Titan Xs would be gamers who want to game in 4K and can afford it, so Titan X sales would take a complete dive.

 

If Nvidia really have crippled SP compute through drivers on the Titan X, then I feel sorry for those who rushed out and upgraded their old Titans before seeing SP compute benchmarks, but I really can't see there being a reason for the M6000 and Titan X to perform differently hardware-wise.

 

I searched Tom's Hardware for that benchmark - searched M6000, Titan X, iray, 3ds Max and combinations - but can't find it anywhere.

 

I've read the review of the Titan X and M6000 on Tom's, and those benchmarks don't exist as far as I can find.

 

I've looked at the specs of the M6000 and Titan X side by side, and there is no difference as far as ROPs, cores, SP compute, bandwidth etc. goes.

 

Although I said it would be bad business for Nvidia to cripple SP compute via drivers, it also would not surprise me if they did. They purposely overpriced the Titan Z when it launched to stop their existing Quadro/Tesla customers straying from their extortionate professional cards, which are essentially the same as the desktop cards with a few extras and, as I said before, a higher yield rate etc.

 

Nvidia always promote that their Quadro cards deliver up to 30% more viewport performance than the desktop equivalent ("up to" meaning whatever their flagship model is at the time), but you can pay up to 700% over the price of the desktop card for that 30% improvement. Maybe my maths is wrong, but to me that is just bad business sense.
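To put rough numbers on that value argument (the prices below are purely illustrative; the 30% and 700% figures are the ones quoted above):

# Illustrative perf-per-pound check using the figures above.
# Hypothetical prices: desktop card at 1000, Quadro at 700% over (8000).
desktop_price, desktop_perf = 1000.0, 1.00
quadro_price, quadro_perf = 8000.0, 1.30   # "up to 30%" faster viewport
print(desktop_perf / desktop_price)  # 0.0010 performance per pound
print(quadro_perf / quadro_price)    # ~0.00016 - roughly 6x worse value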

 

I wish Nvidia would stop trying to force their extortionate Quadro and Tesla cards on professionals!

 

I've owned Quadro cards and their desktop equivalents in the past and tested them side by side, and the performance increases are always barely noticeable, even when working on extremely complex models. Some Quadro owners will argue that you get missing vertices or edges using desktop cards, but we have never experienced this. Others will argue that they have had Titans fail on them, but again we have never experienced this - touch laminated wood. The reason some Titans fail is bad IT management. Our artists are trained to always crank the fans up to 85% before rendering in iray, using EVGA Precision X. This keeps the temps at or below the 70C mark, so our two-year-old Titans are still going strong.
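The fan curves themselves live in vendor tools like Precision X, but the temperatures are easy to keep an eye on during long renders. A minimal monitoring sketch, assuming the nvidia-smi tool that ships with the Nvidia driver is on the PATH (the 70C threshold mirrors the target above):

# Poll GPU temperatures every 10 seconds via nvidia-smi and warn
# when any card passes the ~70C target mentioned above (Ctrl+C to stop).
import subprocess
import time

while True:
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=index,temperature.gpu",
         "--format=csv,noheader"]).decode()
    for line in out.strip().splitlines():
        idx, temp = [part.strip() for part in line.split(",")]
        if int(temp) > 70:
            print(f"GPU {idx} is at {temp}C - raise the fan speed")
    time.sleep(10)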

 

If Nvidia have crippled SP compute, the Titan X will end up the same failure as the Titan Z. Basically, you could buy two Titan Blacks, have the same power and save yourself £1000; or you could have three Titan Blacks, still save a little money, and get much better performance. So the Titan Z was a complete fail. You can pick them up now for around £1200, which is what they should have been priced at in the first place.

 

If Nvidia really want to charge their professional customers a lot extra for the (apparent?) privilege of using their Quadro drivers, why don't they give people the option of purchasing a licence for their desktop cards, rather than ripping off professionals with overpriced hardware? The cloud - it is possible!

 

Anyways,

 

Do Nvidia think that if they cripple SP compute, current or potential Titan owners - from freelancers to small studios - are really going to move over to Quadro cards and spend, say, £16K on 4x M6000 per workstation? They can't afford it! So Nvidia would rather lose the £3500 per workstation from millions of potential customers and instead get zero.

 

Nvidia do sometimes make seriously bad business moves, so it wouldn't surprise me, but my hardware knowledge and business sense say those benchmarks are seriously wrong. If they are correct, then over the next year the Titan X will slowly make its way to the discontinued burial ground, right next to the Titan Z.

 

I'll keep my eye out over the next few months to see if any more iray benchmarks appear.

 

Shame - I was also considering upgrading to 4x Titan X, but now there's no way until I see some definitive benchmarks.

 

 

 

 

 

 

Message 7 of 26
Anonymous
in reply to: Out-Of-Light

Sorry for not posting the link to the original benchmark - I didn't because it's the German version of Tom's Hardware:

http://www.tomshardware.de/geforce-quadro-workstation-grafikkarte-gpu,testberichte-241759-3.html

 

 

And you're right about the cooling failures - I always use water cooling on the GPU to be on the safe side (I lost two GTX 460s before moving to water).

 

Well, the Titan X is already selling very, very well - even better than the original Titan did back in the day - so there are quite a lot of enthusiast gamers using it too. I've been looking for GTX 980 and Titan X benchmarks in iray for a long time, and there isn't really much out there.

 

It's only:

http://www.migenius.com/products/nvidia-iray/iray-benchmarks-2014-5

http://www.daz3d.com/forums/viewthread/53771/P150

 

and those from Tom's. But still no iray+ benchmarks comparing the Quadro to the Titan. I don't even know which iray version is used in Max 2016.

Message 8 of 26
Out-Of-Light
in reply to: Anonymous

After looking at the link you sent me, and seeing the other benchmarks in Octane 2.7, Blender 2.73, ratGPU and LuxMark 2.0, I am now 100% sure that Nvidia have not crippled single-precision compute on the Titan X, and that the iray benchmark is a rotten egg. They need to rerun the test using 2015, or the newly released 3ds Max 2016 that I installed yesterday - I think 2016 already has the Maxwell patch included.

 

As for why the M6000 worked and the Titan X did not: no idea. But having seen the other benchmark results, I am sure it's a driver/software problem that will be fixed, as results in the benchmarks above always track iray benchmarks closely. The Titan X should be ever so slightly faster than the M6000 in iray because of the higher clock rates.

 

The hardware specs showed that the Titan X and M6000 are identical for single precision, so going only by the one benchmark you posted, I was annoyed that Nvidia might actually have crippled the Titan X through drivers - but looking at the other benchmarks, they definitely have not, which is great. As I said before, it would have meant tens of millions in losses for Nvidia if they shut out potential upgrades from the freelancers and small studios using Titans for GPU rendering who can't afford the extortionate high-end Quadro cards.

 

But I will still wait for new iray benchmarks (hopefully performed correctly), as the other benchmarks do not show how much faster the Titan X is than previous-generation cards like the Titan, Titan Black or 780 Ti, so they leave people like me who are considering an upgrade wondering if it's really worth the expense. If I replace my 4x Titan (2688-core) with 4x Titan X only to find renders are a minute faster, that's a complete waste of £3500, so the results are going to have to be mind-blowing for me to part with my cash.

 

Message 9 of 26
Anonymous
in reply to: Out-Of-Light

Well, let's hope for the best. As iray belongs to Nvidia, they can do what they want with it. So Octane, V-Ray RT and all the others may get to use the full power of the Titan X while iray stays crippled...

On the other hand, if they really cripple it, iray might vanish very quickly from the horizon, because it makes much more sense - and is much cheaper - to buy a different renderer for Max that fully utilizes my cards than to buy a bunch of new cards! If iray is crippled, I'll move to something else.

 

So Nvidia, you might be reading this... don't make the wrong decision - there are A LOT of us freelancers 😉

Message 10 of 26
Out-Of-Light
in reply to: Anonymous

I'm now 100% sure that the iray benchmark on Tom's Germany is a rotten egg.

 

If, as you say, Nvidia purposely crippled the Maxwell desktop cards for iray, you would be right in saying that people would eventually just stop using it.

 

I've no idea what the actual statistics are for people using desktop cards in Max versus Quadros, but I am guessing that across the millions of freelancers and small studios out there, compared with the big studios, the Quadro cards are in the minority. So it would be a very stupid move on Nvidia's part to do this.

 

As we both agree, Nvidia have made some really dumb decisions in the past - in fact the very recent past, with the Titan Z. But I think Nvidia will have better sense than to cripple the Titan X in iray. If the benchmarks justify the upgrade, the Titan X will be just as popular with the DCC crowd as it is with the gaming crowd. Nvidia surely must know this, and they are only really interested in sales, so they won't cripple the Titan X in iray if they have any business sense.

 

So all that's left is to wait for benchmarks comparing the Titan X against the older Titans to see if the performance gains are really worth the upgrade cost. I'm sure that if the test is done correctly, and the M6000 is included, it will trail ever so slightly behind the Titan X.

Message 11 of 26
danko
in reply to: Out-Of-Light

********* FINALLY: iray 2015 benchmarks for the Titan X *************************

 

I've rechecked some of the usual benchmark sites, and finally we've got some numbers!! 😉

http://www.migenius.com/products/nvidia-iray/iray-benchmarks-2015

Message 12 of 26
Anonymous
in reply to: danko

That's great, thanks Danko!!

 

Message 13 of 26
danko
in reply to: Anonymous

Hey,

 

but be aware that in the current 3ds Max 2016 setup the Titan X isn't recognized as a CUDA device - I don't know how long it will take to get an update...

 

Read more here:

 

http://forums.autodesk.com/t5/shading-lighting-and-rendering/new-titan-x-for-iray-rendering/m-p/5616...

Message 14 of 26
danko
in reply to: danko

**********************************************************************************

 

! CONFIRMED !

 

The Titan X is working with 3ds Max 2016 iray.

 

Speed is fine (EVGA SC version) - nearly 6 times faster than my 4GHz Intel 5960X...

 

(My old GTX 670 was only 2 times faster than the 4GHz Intel 5960X.)

 

**********************************************************************************

 

http://forums.autodesk.com/t5/shading-lighting-and-rendering/new-titan-x-for-iray-rendering/m-p/5625...

Message 15 of 26
Out-Of-Light
in reply to: danko

Hi danko,

 

It's nice that your Titan X is working in Max 2016.

 

But is it performing as expected?

 

There is an iray comparison test on maxforums.org. It was originally designed for Max 2013, but I ran the test (in Max 2014, I think), and below are the results I got.

 

1x E5-2687w 46 secs
2x E5-2687w 29 secs

1x Asus GTX Titan 25 secs
2x Asus GTX Titan 13 secs
3x Asus GTX Titan 09 secs
4x Asus GTX Titan 07 secs

4x Asus GTX Titan + 2x E5-2687w 06 secs

 

Looking at the benchmarks I have seen so far for the Titan X, it should be roughly 46% faster than the older 2688-core Titan in single-precision CUDA apps like iray.
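For the arithmetic behind the 12-second estimate below - a rough sketch, noting that "46% faster" can be read two ways:

# Predicted 1x Titan X time from the 25 s single-Titan result above.
titan_s = 25
gain = 0.46
print(titan_s * (1 - gain))  # ~13.5 s if "46% faster" = 46% less time
print(titan_s / (1 + gain))  # ~17.1 s if it means 1.46x the throughput

Either reading puts a single Titan X somewhere around 13 to 17 seconds; much slower than that would suggest the card isn't being used properly.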

 

So if you download the file below, open the Max file, and set iray to use one GTX Titan X only with no CPU cores, it should render in approx. 12 seconds. Keep the file settings as they are - iterations set to 500, same resolution - but select only the one Titan X with no CPU cores before hitting render.

 

http://dl.dropbox.com/u/23316655/iray_test_just_hit_render.zip

 

My system is busy right now, or I would run the tests again in 2016 - I will try that later.

 

Cheers.

 

 

Message 16 of 26
Out-Of-Light
in reply to: Out-Of-Light

That Dropbox link no longer works, but I still have the file, which I have attached.

Message 17 of 26
Out-Of-Light
in reply to: Out-Of-Light

I tried the test again in Max 2016, with some strange results. I ran the tests several times, never letting the CPUs/GPUs clock down. I then ran the tests again in 2015, just to make sure no changes to my system since the last test were affecting the results, but the 2015 SP3 results were the same as the 2014 results.

 

2016

 

1x e5-2687W 54 secs

2x E5-2687W 32 secs

 

1x GTX Titan 20 secs

2x GTX Titan 14 secs

3x GTX Titan 11 secs

4x GTX Titan 09 secs

 

4x Gtx Titan + 2x E5-2687w (HT) 07 secs.

 

2015

 

1x E5-2687W 47 secs

2x E5-2687W 30 secs

 

1x GTX Titan 25 secs

2x GTX Titan 13 secs

3x GTX Titan 09 secs

4x GTX Titan 07 secs

 

4x GTX Titan + 2x E5-2687w 06 secs.

 

So it seems CPU rendering is slightly slower in Max 2016 than in previous versions, and GPU rendering with multiple GPUs does not scale as well as before. So even though GPU rendering on a single card is 5 seconds faster on my system, using multiple GPUs is actually slightly slower than in 2015.

 

I noticed the first time I used iray in 2016 that even with all four GPUs in use, there is no longer any slowdown in the operating system.

 

I thought at the time, that's great! But now I understand why: I think I am right in saying that 2016 keeps the OS functional by only partially utilising the GPU "used by Windows", so you can keep working while you render.

 

If you look at the benchmarks for 3x GPUs in 2015 and 4x GPUs in 2016, they are the same speed! So what Nvidia/Autodesk have done is this: even if you have the GPU "used by Windows" selected, it isn't fully utilised, to keep the OS responsive - or iray reverts to another GPU if the "used by Windows" GPU is the only one selected. So if I need to meet a deadline and only have rendering left to do, I am better off rendering in Max 2015; having all four GPUs selected in 2016 is actually only as powerful as having three GPUs in 2015.
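The scaling drop is easy to quantify from the two result sets above (all times copied from this post):

# Multi-GPU scaling from the results above: speedup = t(1 GPU) / t(N GPUs).
times = {"2015": {1: 25, 2: 13, 3: 9, 4: 7},
         "2016": {1: 20, 2: 14, 3: 11, 4: 9}}
for version, results in times.items():
    for n, t in results.items():
        speedup = results[1] / t
        print(f"{version}: {n}x GPU -> {speedup:.2f}x speedup "
              f"({100 * speedup / n:.0f}% efficiency)")

2015 comes out at roughly 89% efficiency with four cards, 2016 at only about 56%, which matches the "4 GPUs in 2016 = 3 GPUs in 2015" observation.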

 

So looking at the new results in 2016, and given that the Titan X should be 46% faster than the older 2688-core Titan, which takes 20 seconds with one card, you should be getting a time of around 11 seconds with one Titan X.

 

(That should have been 25 seconds and 13 seconds before - oops.)

 

 

 

Message 18 of 26
danko
in reply to: Out-Of-Light

Hi srplus,

 

hey, this took a long time - but finally we can start comparing the upgrade options for ourselves - so here are some new numbers from me:

 

(I removed the GTX 670 yesterday because it was no longer useful in my setup, so now I only have the one primary card...)

 

Thermal management is a real issue with these cards - 4x Titan will draw about 1kW of power... Just one Titan X tends to grill the neighbouring card, even with one slot left free... My case is perfectly ventilated, but this beast still tried to burn my Blackmagic Intensity card to hell... so air cooling is a real challenge, even with aggressive fan settings 😉

 

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Benchmark: iray_test_just_hit_render.max

Date: 2015-05-12

 

Setup:
X99 Workstation Win7 X64 64GB DDR4 Intel 5960X @4GHz
1x EVGA Titan-X SC (1316 MHz Boost during Benchmark)

 

3dsmax 2016
  CPU+GPU: 09s
  1x Titan-X SC: 12s
  1x Intel 5960X 4GHz: 39s

 

3dsmax 2015
  CPU+GPU: 13s
  1x Titan-X SC: 18s
  1x Intel 5960X 4GHz: 37s

 

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
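Working the ratios out of that block (all numbers copied from above):

# GPU-vs-CPU ratio from my benchmark block above.
print(39 / 12)  # 3ds Max 2016: Titan-X SC is ~3.3x the 5960X @ 4GHz
print(37 / 18)  # 3ds Max 2015: only ~2.1x - 2016's iray uses the GPU better

That ~3x ratio in 2016 matches the typical CPU-vs-GPU ratio I see for most of my own scenes (more on that below).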

Message 19 of 26
Out-Of-Light
in reply to: danko

That's great!

 

So from your results, it looks like the Titan X may be worth the upgrade.

 

It seems like two Titan Xs should give me results similar to my four current cards, but if I am going to upgrade, I will get four.

 

Shame about the scaling in 2016 though.

 

I don't care about the 1000 watts for four cards - I have dual Antec HCP PSUs using the OC Link, and cutting down render times is more important.

 

As for the thermals: having four 250W cards sandwiched together means I have to ramp up the fans at render time, but this is the same for SLI gaming. If I ran, say, the Heaven benchmark continuously, the top three cards would reach the 80C mark and start to throttle, and this would be the same for the Titan X, so it's necessary to regulate the fans properly to prolong the life of your GPU(s). I've never had a Titan fry, and they've been running at full load for around 40 to 80 hours a month for two years; the temps never go above 72C when the fans are ramped up using Precision X.

 

Sometimes it's the motherboard that can't cope with the heat.

 

I work with HD images and usually need 11,000 iterations or more for noise-free images - say, a 1280x720 interior architectural scene - so with four cards that's approx. 15 to 25 minutes depending on materials, transparency etc., not just an 8-second render, so the cards get really hot. As the Titan X appears to be approx. 45% faster than the older 2688-core Titan, that should reduce render times to around 8 to 14 minutes.

 

4x Titan X may seem like overkill, but near the end of a project I sometimes have to produce 30 or 40 noise-free images in a day, so a reduction of 6 to 11 minutes per image is worth the expense, as the cards will pay for themselves within a few months.

 

Thanks for running the tests for me.

 

I hope Autodesk fix the scaling problem in 2016.

 

Message 20 of 26
danko
in reply to: Out-Of-Light

Hi,

 

regarding benchmark numbers...

 

for most of my scenes I found a typical performance ratio of:

 

3ds Max 2016

100% - CPU (Intel 5960X @ 4GHz)

300% - Titan-X SC

 

but:

 

if I take a look at the default 3ds Max demo scene "Studio_scene_share.max", I get some different numbers:

 

GPU only: 28s

CPU only: 176s

 

here the Titan-X shines and is 6x faster than the CPU...

 

What exactly makes it so much faster here?

Complex texture maps should render better on the GPU - so maybe it's the materials? For my scenes I mostly use the iray material, but the demo scene uses Arch & Design materials...
