90x faster? Really? I mean, really? So if I do a 1 minute rendering on my quad core computer then run it again on the single core computer it will take 90 minutes? Dude, I don't think so. I could run the same rendering on my old P3 1GHz computer and it wouldn't take 90 minutes. That's just ridiculous. |
Hehe. I know it sounds ridiculous. The thing is, I own both of the aforementioned machines. And yes: what my quad core renders in 1 sec, my Prescott P4 renders in 1 min 30 sec. I didn't wait 90 hours to check, but it scales nicely in 1:90 fashion. It's scene dependent, naturally, but in most of the scenes I work on it's like that. Personally tested, multiple times.
Also, I’ve got a dual 2.0GHz computer and a dual dual-core 3.2GHz computer. They do the same rendering and I only see about a 10-25% increase in speed depending on the render. According to what you are saying I should have seen about a 260% speed increase. |
How do you figure that?
So why is mine so much faster? Drivers. Not hardware speed. |
That’s another thing I wanted to address the first time you mentioned it, but it slipped my mind. I’ll try to be brief, but there is just so much to say... so please bear with me.
I’m going to guess that you don’t know much, if anything, about CUDA and the GeForce 8 series of cards. In a nutshell it’s like this: Intel's fastest Core 2 Quad has a raw speed of about 50 gigaflops. nVidia's 8800 GTX has about 300 gigaflops (for reference, ATi's HD 3870 X2 has ~1 teraflop of raw speed). When you look at the numbers it’s easy to see that graphics cards are way ahead of modern CPUs in raw power. So why do we use the CPU at all? It’s all about drivers and software optimizations.
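Just to make it concrete (a rough sketch only, nothing max-specific, and the kernel below is a made-up example): the whole trick with CUDA is that instead of one loop running on one core, you launch thousands of tiny threads and let the card chew through them all at once.

#include <cuda_runtime.h>

// Each GPU thread handles exactly one element. On an 8800 GTX there are 128
// stream processors churning through these threads in parallel - that's where
// the gigaflops actually come from.
__global__ void scaleArray(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main()
{
    const int n = 1 << 20;                 // ~1 million floats
    float *d_data;
    cudaMalloc((void **)&d_data, n * sizeof(float));

    // A CPU would walk these one (or four) at a time; here we launch
    // 4096 blocks of 256 threads and let the card sort out the scheduling.
    scaleArray<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n);
    cudaDeviceSynchronize();

    cudaFree(d_data);
    return 0;
}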
It’s not that we don’t have the hardware to do it; it’s that current software is still written around the assumption that a single core is going to be doing all the work. That Core 2 Quad 6600 I mentioned earlier being 90 times faster holds true in max rendering, seeing how 3ds max is one of the few programs optimized to work in a multi-core setting. That same CPU would hardly be any faster for 99% of older games and software.
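If it helps, here is the shape of the problem on the CPU side (renderBucket is a made-up stand-in, not actual max internals): the first function is how most software is still written, the second is the only reason a quad core earns its keep in rendering.

#include <thread>
#include <vector>

// Made-up stand-in for "render one chunk of the image".
void renderBucket(int bucket) { (void)bucket; /* ...actual work... */ }

// How most software still thinks: one core grinds through every bucket
// while the other three sit idle.
void renderAllBuckets(int numBuckets)
{
    for (int b = 0; b < numBuckets; ++b)
        renderBucket(b);
}

// How a renderer has to think to use all four cores: hand every core its own share.
void renderAllBucketsThreaded(int numBuckets)
{
    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0) cores = 1;
    std::vector<std::thread> workers;
    for (unsigned c = 0; c < cores; ++c)
        workers.emplace_back([=]() {
            for (int b = (int)c; b < numBuckets; b += (int)cores)
                renderBucket(b);
        });
    for (auto &w : workers)
        w.join();
}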
The same problem exists with current graphics cards. They don’t have general-purpose processors; by nature, their units are all dedicated to something. GPGPU (general-purpose graphics processing unit) has been a buzzword for some time now. It doesn’t take an Einstein to see that if current GPUs could take on general tasks, the CPU would be greatly overrun. Look at it like this: a quad core CPU has 4 cores, the 8800 GTX has 128 stream processors, the ATi HD 3870 X2 has 640… just some food for thought.
nVidia took a good step forward with its GPGPU CUDA programming language and as of recently announced reward for anyone that optimizes LAME (mp3 encoding) for their CUDA. If any programmer makes it utilize 8800gtx fully , we could encode 1hour of music to hifi mp3 in mere seconds. Getting CUDA to run LAME encoder is as easy as clicking go (to programmers), but making it use all available hardware resources is another thing completely. And with huge amount of processing cores, parallelism is where all is at today.
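Roughly speaking (and this is purely a hypothetical sketch - encodeFrame is a made-up stand-in, not the real LAME internals, and real mp3 frames aren't perfectly independent), the difference between "it runs on CUDA" and "it uses the whole card" looks like this:

#include <cuda_runtime.h>

#define FRAME_SAMPLES 1152   // samples per mp3 frame
#define FRAME_BYTES   512    // made-up fixed output budget per frame, just for the sketch

// Stand-in for whatever per-frame work the encoder really does.
__device__ void encodeFrame(const float *pcm, unsigned char *out) { (void)pcm; (void)out; }

// The "as easy as clicking go" port, launched as <<<1,1>>>: it runs on the GPU,
// but one lonely thread walks the whole file, so ~127 of the 128 stream
// processors do nothing.
__global__ void encodeNaive(const float *pcm, unsigned char *out, int numFrames)
{
    for (int f = 0; f < numFrames; ++f)
        encodeFrame(pcm + f * FRAME_SAMPLES, out + f * FRAME_BYTES);
}

// The "use all available hardware" version: every block grabs its own frame,
// so thousands of frames get crunched at the same time.
__global__ void encodeParallel(const float *pcm, unsigned char *out, int numFrames)
{
    int f = blockIdx.x;
    if (f < numFrames)
        encodeFrame(pcm + f * FRAME_SAMPLES, out + f * FRAME_BYTES);
}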
I’ll take a brief look at the current Core 2 architecture compared to NetBurst (Pentium 4). On a clock-per-clock basis, Core 2 is about 2 times faster than NetBurst. If you extrapolate that to a quad core vs the old single core P4, you should get that the new quad is 4x2 = 8 times faster. Well, that’s true for most things; 3ds max rendering isn’t one of them, because the architectural changes are so numerous and vast that any apples-to-apples comparison is impossible.
Also, take a look at the current max viewport’s performance. It’s single core. Look at reactor: single core. Look at particles: single core (not all, but most of them). And it’s not that dual core CPUs are scarce today; it’s that programmers who think in multi-core logic are scarce. Rendering speed has been a sore point for too many people, and that’s the only reason it’s optimized. When you look at the industry in general, dual core support is rare and multi core support is even rarer. Rendering is one of the very, very few bastions where multi core actually means something.
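Particles are the textbook case of what multi-core (or GPU) logic would buy. This is only a rough sketch of the idea, not how max is actually written, but every particle's update is independent, so the serial loop maps straight onto one thread per particle:

#include <cuda_runtime.h>

struct Particle { float3 pos; float3 vel; };

// Today's reality: one core, one particle at a time.
void updateParticlesCPU(Particle *p, int n, float dt)
{
    for (int i = 0; i < n; ++i) {
        p[i].vel.y -= 9.81f * dt;           // gravity
        p[i].pos.x += p[i].vel.x * dt;
        p[i].pos.y += p[i].vel.y * dt;
        p[i].pos.z += p[i].vel.z * dt;
    }
}

// The same logic, one GPU thread per particle - the loop simply disappears.
__global__ void updateParticlesGPU(Particle *p, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    p[i].vel.y -= 9.81f * dt;
    p[i].pos.x += p[i].vel.x * dt;
    p[i].pos.y += p[i].vel.y * dt;
    p[i].pos.z += p[i].vel.z * dt;
}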
Also, if you look at my post here, you will see how nVidia consistently has better drivers and is able to beat ATi with inferior hardware. It’s not that ATi is stupid or anything; it’s just that nVidia focuses on games, while ATi takes the “professional” approach (trying to put it nicely). The upshot is that an ATi card will perform better in max than its nVidia counterpart, while the nVidia card will perform better in games than the ATi card in its price range.
I’ll only be able to test it on the type of scenes we work on, so it won’t always be true for everyone. |
The underlying architecture of these cards has evolved so far beyond the GeForce 6 series that not only you, but everyone will benefit from it, regardless of the scene type. Trust me.