I found out some disturbing things.
nVidia has apparently throttled OpenGL so that a GTX-285 runs 4x faster than a GTX-580.
They have apparently throttled CUDA also.
http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=291789#Post291789
http://forums.newtek.com/showpost?p=1210364&postcount=27
I'm hoping that the CUDA throttling is less severe than the OpenGL throttling, and a post on another forum reporting a 460's CUDA being much faster than a 260's gives me some hope, but ... the 580's CUDA could actually end up slower than the GTX-285's, I just don't know. I've searched for CUDA benchmarks, but don't really see any.
The real-world CUDA use in Premiere shown here seems to indicate that CUDA performance hasn't changed - and certainly hasn't gotten better.
http://www.studio1productions.com/Articles/PremiereCS5.htm
nVidia can do anything they want, but advertising the product as if it is faster without telling customers the real story is in quite poor form. I can't spend days trying to become a video card 'expert'. I just want valid info so that I can make a decision based on the facts.
I no longer trust nVidia at all - which is a shame, as I'd been very happy with the GTX-285 and always praised nVidia products.
*** EDIT *** this benchmark seems to indicate CUDA is getting faster in newer GTX cards!
http://kernelnine.com/?p=218
BTW - if OpenGL calls like "glReadPixels()" on a GTX480 are ~4 times slower than on a GTX285, how does this actually affect 3D users day to day?
If we're using OpenGL to display a preview of the render for tweaking, I would guess 3D apps would be using the draw path (glDrawPixels, or ordinary rendering) - much like a video game would to display the game.
Is this only going to affect users who want to read back what OpenGL has 'rendered' - or is this going to affect normal usage of 3D apps? When would you need to read those buffers, rather than display them?
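For what it's worth, here's a rough sketch of the two paths as I understand them - this assumes a live OpenGL context already exists (the setup code is omitted, and draw_scene()/swap_buffers() are hypothetical stand-ins), so take it as an illustration, not working code:

```c
/* Sketch only - assumes an OpenGL context is already current
   (e.g. created via GLUT/GLFW); context setup is omitted. */

/* Normal display path: geometry is drawn and presented on the GPU.
   This is what games and interactive viewport previews do every frame,
   and it never copies pixels back to the CPU. */
draw_scene();      /* hypothetical: issues the app's glDraw* calls */
swap_buffers();    /* hypothetical: presents the frame to the screen */

/* Readback path: pull the rendered pixels back into system memory.
   glReadPixels() is the call the benchmarks show running ~4x slower
   on the GTX-480 than on the GTX-285. */
unsigned char *pixels = malloc(width * height * 4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
```

If that's right, only the second path is affected - an app pays the glReadPixels() cost when it actually needs the pixels on the CPU (saving a screenshot, capturing the viewport to video, compositing a GPU render into a software pipeline), not when it's simply displaying frames.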
Unless you're using a GPU renderer (is it V-Ray?), I thought all actual rendering was done in software, at least for final output, but ... I could be confused?