Installation & Licensing
Welcome to Autodesk’s Installation and Licensing Forums. Share your knowledge, ask questions, and explore popular Download, Installation, and Licensing topics.
Message 1 of 19
Anonymous
908 Views, 18 Replies

NVIDIA question.

I have an NVIDIA graphics card, but when I try to install "NVIDIA MAXtreme™" I get this error: "NVIDIA %P setup could not detect an NVIDIA workstation graphics card to use with NVIDIA %P(tm). Please install an NVIDIA graphics card."

Why do they say I have no NVIDIA graphics card... WHEN I DO! I even have an NVIDIA media center icon in the bottom right of my desktop, right next to the clock.

Can somebody please explain why this error occurs?
18 REPLIES
Message 2 of 19
Maneswar_Cheemalapati
in reply to: Anonymous

Maxtreme only works on NVIDIA Quadro workstation graphics cards, not on NVIDIA consumer gaming cards.
Message 3 of 19
Anonymous
in reply to: Anonymous

So is there a driver that's better performance-wise and supports something lower than a Quadro? I installed my NVIDIA card myself...
Message 4 of 19
Steve_Curley
in reply to: Anonymous

Just install the latest drivers from nVidia.

Max 2016 (SP1/EXT1)
Win7Pro x64 (SP1). i5-3570K @ 4.4GHz, 8Gb Ram, DX11.
nVidia GTX760 (2GB) (Driver 430.86).

Message 5 of 19
eodeo
in reply to: Anonymous

Maxtreme is just a name now. It used to add performance under OpenGL a while back, when D3D was still the underdog. Since that changed, gaming cards have overtaken workstation cards in performance, and Maxtreme adds little to no benefit to the Quadro line of cards.

Like I said, Maxtreme is just a name now.
Message 6 of 19
andy.engelkemier
in reply to: Anonymous

You couldn't be more wrong.
I keep looking to see when Maxtreme will be out for 2009. Whatever the Maxtreme drivers do, they do it extremely well. It's not just a name.
I use it, and have used it for quite a while now. I open the same scenes my coworkers do. The scenes are very heavy and they have to display everything as boxes just to pan around. I opened the same scene shaded and rotated around with no problem. They are using D3D with the same Quadro FX 3450 I have. This isn't just one case, either.

Yes, for high-poly objects like heavy characters the Maxtreme driver may not help. But for heavy architectural scenes, or in my case product scenes, it does an amazing job. We are also usually the people who have workstation cards, since we use programs like ProE, Rhino, Studio Tools, and others that really take advantage of the card.

Even with Max 2009, I still think the viewport speed blows. Or at least it will until I get my Maxtreme for it.
Message 7 of 19
eodeo
in reply to: Anonymous

You couldn't be more wrong.
Whatever the Maxtreme drivers do, they do it extremely well. It's not just a name.
I use it, and have used it for quite a while now. I open the same scenes my coworkers do. The scenes are very heavy and they have to display everything as boxes just to pan around. I opened the same scene shaded and rotated around with no problem. They are using D3D with the same Quadro FX 3450 I have. This isn't just one case, either.


I'm going to have to stay skeptical here and say that there must be some oddity with your test. Your graphics card, the Quadro FX 3450, is equivalent to a GeForce 6800. I'm not really sure how the QFX 3450 stacks up against the QFX 4000, but since they're both GeForce 6-series, 800-class parts, I'm going to say they are within 10% of each other. I'm also going to say that my 6800 GT is faster than my QFX 4000, or at least as fast. The only place where the Quadro is faster (not noticeably, but faster nonetheless) is OpenGL and the SPECViewperf test. In a D3D-to-Maxtreme comparison, as well as card-to-card, they are about the same, with a slight edge to the 6800 GT due to its faster memory and core.

I’m going to suggest reading my post here.

In case you don't feel like reading much, know just this: any $100 card today will be at least 10x faster than your current card. In the case of the ATI HD 3870 ($180), it's going to be about 50x faster (actual, not an exaggeration).

Even with Max 2009, I still think their viewport speed blows. Or at least it will until I get my maxtreme for it.


May I assume that you're using Windows Vista? Or at least the same old GeForce 3 series 😉?

P.S. For a Quadro-to-GeForce comparison, go here.
Message 8 of 19
andy.engelkemier
in reply to: Anonymous

I am not using Vista, but we only use workstation cards here. We do product design and use ProE; it's kind of a must, as that works primarily on OpenGL.
10x faster? Is that like saying a 3GHz computer will render 3x faster than a 1GHz computer? Because we all know there isn't a bit of truth to that.
There is a huge difference between benchmark testing and actual use. What I'm talking about here is actual use. I have an old GeForce 3 series at home on a super old computer. It still kicks butt if you compare it to one of the "faster" $100 cards today, so long as you don't throw bump mapping and things like that at it. It just doesn't have the instruction set for that. But plain old 3D data, it handles great. I don't use it anymore, but I thought I'd throw that in as an example. I loaded up 3ds Max on a friend's computer who had a more recent card and his didn't handle my data any better than my old GeForce 3 series. So what's this extra speed for?

I really don't care what the specs are. Show me a card that works, show me practical application and speed that way, and I'm sold. Just telling me it's faster means nothing. I've seen benchmarks before that didn't pan out in the end. And why do so many cards get a huge improvement in actual speed when they get new drivers? Their specs didn't change.
NVIDIA, although I don't like it, basically rips us all off, because they write different drivers for the workstation cards even though the hardware is pretty much the same. But those cards also consistently run faster at many of our tasks.

In my everyday use I use Max with a plugin called nPower. If you aren't familiar with it, it keeps the NURBS data inside Max editable, so I can change the polygon resolution whenever I want. It works well with our workflow. With straight polys I haven't noticed much of a difference, but when keeping the original nPower data, which is NURBS based, I have noticed a huge increase in speed when using the Maxtreme drivers. So for me, it's worth it. Not all workflows will benefit; I won't disagree there.

We will be using Autodesk Showcase for something soon, and there it just looks like straight-up speed and tons of memory, so we are thinking about building an SLI rig with a bunch of $500 1GB gamer cards. I'm not confident it will work, but I can't argue with the specs... yet.
Message 9 of 19
eodeo
in reply to: Anonymous

Is that like saying a 3GHz computer will render 3x faster than a 1GHz computer?


Similar, but no. It's more of a: is a quad-core CPU really 4x faster than a single-core one? And in case you don't know the answer to that, let me be the first to tell you: the current Intel Core 2 Quad Q6600 running at 2.4GHz is 90 times (90x!) faster than an Intel Pentium 4 Prescott 320 (single core) also running at 2.4GHz, for 3ds Max rendering. The technology advances so fast that you really can't compare apples to apples anymore. And for a fairer question (although irrelevant): is the four-core Core 2 Quad Q6600 4x faster than itself running on only one core? The answer is no. It's about 3.8x as fast, i.e. 20% shy of the full 4x.
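For what it's worth, that "shy of the full 4x" effect is easy to reproduce with a rough sketch like the one below (hypothetical Python using multiprocessing, not anything from Max itself): split a CPU-bound job into buckets, time it with one worker and then with one worker per core, and compare.

import multiprocessing as mp
import time

def fake_bucket(_):
    # Stand-in for one CPU-bound render bucket: just burn cycles.
    total = 0
    for i in range(2_000_000):
        total += i * i
    return total

def timed_run(workers, buckets=64):
    start = time.time()
    with mp.Pool(workers) as pool:
        pool.map(fake_bucket, range(buckets))
    return time.time() - start

if __name__ == "__main__":
    t1 = timed_run(1)
    tn = timed_run(mp.cpu_count())
    # On a quad core this usually lands around 3.5-3.9x rather than a clean 4.0x,
    # thanks to scheduling overhead and shared memory bandwidth.
    print(f"speedup: {t1 / tn:.1f}x on {mp.cpu_count()} logical cores")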

I have an old GeForce 3 series at home on a super old computer. It still kicks butt if you compare it to one of the "faster" $100 cards today, so long as you don't throw bump mapping and things like that at it.


I agree that there is a limit to how much juice you need. But you were the one who threw in "cannot pan without display as box". If so, you need a faster card. If not, no amount of extra card speed will matter. It's like ATI and NVIDIA now: ATI is 3x faster, but it's 600 fps compared to 200 fps, both far higher than the 30 fps you actually need.

Show me a card that works


I'd be glad to… but since distance is likely a problem, getting a $100 card for yourself might be cheaper than a $200 ticket to come here 😉

We will be using Autodesk Showcase for something soon, and there it just looks like straight-up speed and tons of memory, so we are thinking about building an SLI rig with a bunch of $500 1GB gamer cards. I'm not confident it will work, but I can't argue with the specs... yet.


SLI has never worked in Max and I don't think that's about to change.
And for the record, any $500 GAMING card you buy today will be tons faster in anything you throw at it (given it has the same amount of RAM or more, which is not a problem, since your card has 256MB of RAM as I understand it). Today you can't find a $200+ card with less than 512MB of RAM.

In conclusion, try any of the latest cards and see how well they do. I agree that no amount of benchmarks relates to actual speed at hand. (Which reminds me how a "slower" AMD 2500+ CPU feels 2x faster than an Intel P4 @ 3.6GHz in regular day-to-day use. No test will show you that.)
Message 10 of 19
andy.engelkemier
in reply to: Anonymous

90x faster? Really? I mean, really? So if I do a 1-minute rendering on my quad-core computer and then run it again on the single-core computer, it will take 90 minutes? Dude, I don't think so. I could run the same rendering on my old P3 1GHz computer and it won't take 90 minutes. That's just ridiculous.
I agree with a lot of what you are saying, but be reasonable. Try not to put your opinion where fact should go. Or maybe you are getting confused between percent and times? Also, I've got a dual 2.0GHz computer and a dual dual-core 3.2GHz computer. They do the same rendering and I only see about a 10-25% increase in speed depending on the render. According to what you are saying I should have seen about a 260% speed increase.

The reason I threw in the fact that they had to display items as boxes is because it's the same card. We are using the same card, but with different drivers I can do much more. So why is mine so much faster? Drivers. Not hardware speed.

Also, the SLI I was talking about isn't for Max, it's for Autodesk Showcase. I haven't looked into that enough to know if it's supported; it was just a thought. I'm going to see if we can get a gaming card to compare and see what kind of differences in speed I can get. My original argument was for Maxtreme. Those drivers are much better than the drivers that come with the Quadro cards. I haven't yet compared them with those of a gamer card. It'll be a while until I can do that, though, since money is hard to come by unless it's easily justified. I'll try it out eventually, though. Also, I'll only be able to test it on the type of scenes we work on, so it won't always be true for everyone.
Message 11 of 19
eodeo
in reply to: Anonymous

90x faster? Really? I mean, really? So if I do a 1-minute rendering on my quad-core computer and then run it again on the single-core computer, it will take 90 minutes? Dude, I don't think so. I could run the same rendering on my old P3 1GHz computer and it won't take 90 minutes. That's just ridiculous.


Hehe. I know it sounds ridiculous. The thing is, I own both of the aforementioned machines. And yes, what my quad core renders in 1 sec, my Prescott P4 renders in 1 min 30 sec. I didn't wait 90 hours to check, but it scales nicely in 1:90 fashion. It's scene dependent, naturally, but in most scenes I work on it's like that. Personally tested, multiple times.

Also, I've got a dual 2.0GHz computer and a dual dual-core 3.2GHz computer. They do the same rendering and I only see about a 10-25% increase in speed depending on the render. According to what you are saying I should have seen about a 260% speed increase.

How do you figure that?

So why is mine so much faster? Drivers. Not hardware speed.


That's another thing I wanted to address the first time you mentioned it, but it slipped my mind. I'll try to be brief, but there is just so much to say... so please bear with me.

I'm going to guess that you don't know much, if anything, about CUDA and the GeForce 8 series of cards. I'll try to be brief about this. In a nutshell, it's like this: Intel's fastest Core 2 Quad has a raw speed of about 50 gigaflops. The NVIDIA 8800 GTX has about 300 gigaflops (for reference, the ATI HD 3870 X2 has roughly 1 teraflop of raw speed). When you look at the numbers it's easy to see that graphics cards are way superior to modern CPUs. So why do we use the CPU at all? It's all about drivers and software optimizations.
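Taking the figures in that paragraph at face value (they are the rough peak numbers quoted above, not measured throughput), the ratios work out as follows; a quick hypothetical Python calculation:

# Peak throughput figures quoted above (rough, in gigaflops).
gflops = {
    "Core 2 Quad (CPU)": 50,
    "GeForce 8800 GTX": 300,
    "Radeon HD 3870 X2": 1000,
}

cpu = gflops["Core 2 Quad (CPU)"]
for name, value in gflops.items():
    print(f"{name}: {value} GFLOPS, about {value / cpu:.0f}x the CPU's peak")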

It's not that we don't have the hardware to do it; it's that current software is built around the assumption that a single core is going to be doing all the work. That Core 2 Quad Q6600 I mentioned earlier being 90 times faster is true in Max rendering, seeing how 3ds Max is one of the few programs optimized to work in a multi-core setting. That same CPU would hardly be any faster for 99% of older games and software.

The same problem exists with current graphics cards. They don't have general-purpose processors; by nature, they are all dedicated to something. GPGPU (general-purpose computing on the graphics processing unit) has been a buzzword for some time. It doesn't take an Einstein to see that if current GPUs could do other tasks, the CPU would be greatly overrun. Look at it like this: a quad-core CPU has 4 cores, the 8800 GTX has 128, the ATI HD 3870 X2 has 640... just some food for thought.

NVIDIA took a good step forward with its CUDA GPGPU programming language and recently announced a reward for anyone who optimizes LAME (MP3 encoding) for CUDA. If any programmer makes it utilize the 8800 GTX fully, we could encode an hour of music to hi-fi MP3 in mere seconds. Getting the LAME encoder to run on CUDA is as easy as clicking go (for programmers), but making it use all available hardware resources is another thing completely. And with a huge number of processing cores, parallelism is where it's all at today.
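As a rough illustration of the data parallelism being described (a hypothetical Python sketch, not CUDA and not the actual LAME port): splitting the work into independent chunks and farming them out is the easy part; keeping hundreds of processors busy at once is the hard part the reward was offered for.

import multiprocessing as mp

def encode_chunk(samples):
    # Placeholder for per-chunk work (e.g. encoding one slice of audio).
    return sum(s * s for s in samples)

def parallel_encode(samples, chunk_size=4096):
    # Each chunk is independent, so a pool can process them in any order.
    # A GPU wants exactly this shape of work, only with hundreds of
    # processors to feed instead of a handful of CPU cores.
    chunks = [samples[i:i + chunk_size] for i in range(0, len(samples), chunk_size)]
    with mp.Pool() as pool:
        return pool.map(encode_chunk, chunks)

if __name__ == "__main__":
    fake_audio = list(range(1_000_000))
    print(len(parallel_encode(fake_audio)), "chunks encoded")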

I'll take a brief look at current CPU technology: the Core 2 architecture compared to NetBurst (Pentium 4). On a clock-for-clock basis, Core 2 is about 2 times faster than NetBurst. If you extrapolate that to the quad core versus the old single-core P4, you would expect the new quad to be 4 x 2 = 8 times faster. Well, that's true for most things; 3ds Max rendering isn't one of them. The changes are so numerous and vast that any apples-to-apples comparison is impossible.

Also, take a look at the current Max viewport's performance. It's single core. Look at Reactor: single core. Look at particles: single core (not all, but most of them). And it's not like dual-core CPUs are scarce today; it's that multi-core-minded programmers are scarce. Rendering speed has been a bone of contention for too many people, and that's the only reason it's optimized. When you look at the software industry in general, dual-core support is rare and multi-core support is even rarer. Rendering is one of the very few bastions where multi-core means something.

Also, if you look at my post here.

You will see how NVIDIA consistently has better drivers and is able to beat ATI with inferior hardware. It's not that ATI is stupid or anything; it's just that NVIDIA focuses on games, and ATI on the "professional" approach (to put it nicely). The upshot is that an ATI card will perform better in Max than its NVIDIA counterpart, while an NVIDIA card will perform better in games than its ATI price-range counterpart.

I’ll only be able to test it on the type of scenes we work on, so it won’t always be true for everyone.


The underlying architecture of current cards has evolved so far beyond the GeForce 6 series that not only you, but everyone, will benefit from it, regardless of scene type. Trust me.
Message 12 of 19
andy.engelkemier
in reply to: Anonymous

Thanks for the info on some of the programming stuff. Brief but informative; that's how I like things anyway. More than that and I would have dozed off to sleep again. It's first thing in the morning for me, after all. I was kind of aware of the limitations, but that clears a lot of things up. Cool initiative by NVIDIA to get the programming started too, with the LAME encoding. Rewards are always a good way to get something going.

Would you actually recommend ATI for Max then? I've just always been much happier with the NVIDIA drivers, and since I know how to optimize them already I like to stick with them, but I'm always open to checking something else out. I'd have to also check with After Effects, Rhino, and ProE, but I'd be interested if ATI can do a better job.

I still think something is wrong with your system if it's showing a 90x speed difference. I've talked to a few friends; they haven't seen a 90x speed increase between computers from 5 years ago and computers now. Also, none of us have ever seen Max render in 1 second, just because it takes about 1 second for preprocessing, another couple of seconds to save, and usually 1 or 2 seconds to render. I don't think I've ever seen a render under 2 seconds, even on a blank scene. What renderer are you using? I'm very interested to try it. Anyway, my point was that I have a single-core laptop at home running at half the GHz of my work machine, which has dual processors. The renderings at work are only 3 or 4 times faster. That's with 2 times the GHz, hyperthreading, and 2 processors. I did that test using mental ray with GI and about 400k polys.
That's why I have a really hard time believing that you are getting 90x on processors running at the same clock speed with just a different architecture. I'd guess 4-16x max if things are set up correctly.
Message 13 of 19
eodeo
in reply to: Anonymous

Would you actually recommend ATI for Max then?


I really would, if not for one small detail. Normally I do recommend it, but I have to mention this one bad thing.

The problem obviously isn't a new one, and by the looks of it, I don't see it getting solved any time soon. Pretty lousy on ATI's part, but what can you do…

They haven't seen a 90x speed increase between computers from 5 years ago and computers now. Also, none of us have ever seen Max render in 1 second, just because it takes about 1 second for preprocessing, another couple of seconds to save, and usually 1 or 2 seconds to render.


From the one-second render time it should be obvious it was a test render. I use mental ray with final gathering only (no GI). It's extremely, annoyingly long on the old P4. 1 min 30 sec is no exaggeration.

don’t think I’ve ever seen a render under 2 seconds, even on a blank scene.

My old P3 rendered blank scenes in 0 sec.
Today, my blank-to-simple test scenes render in 0 to 2 sec. Big previews take up to 10 sec. Again, if you have massive geometry, the preprocessing (converting to mental ray) alone will eat up your time. (This fact made me look into XSI, as it has mental ray as the default renderer, so there's no geometry conversion time... but I'm getting sidetracked here.)

That's with 2 times the GHz, hyperthreading, and 2 processors. I did that test using mental ray with GI and about 400k polys.

That's why I have a really hard time believing that you are getting 90x on processors running at the same clock speed with just a different architecture. I'd guess 4-16x max if things are set up correctly.


If you'd looked up the P4 Prescott 320 I mentioned, you would have noticed it's actually a Celeron-class CPU with 256KB of L2 cache; the C2Q Q6600 has 8MB. Then factor in that I had 1GB of RAM then and use 8GB now... all else being equal (which it isn't)... it adds up. I don't know exactly how, but the numbers don't lie, and they're consistent too.

To be fair, my old AMD 2500+ renders the same scene in ~30-40 sec, so it's only 30 to 40 times slower (I don't remember the exact number from the tests). Even so, the new machine is much more than 4x2 times faster.

And don’t get me wrong, because it might seem like I’m complaining. I’m just happy it is that way 😉
Message 14 of 19
andy.engelkemier
in reply to: Anonymous

Maybe that's why I don't see that much of a difference. Our renders are nearly always huge amounts of data, so processors aren't always the bottleneck. It probably has to cache a lot of the data to the hard drive or something, which can really slow things down. We'd go 64-bit and add memory, but we're stuck with Dells (and you have to buy a $300 fan, no joke, to put in more than 4 gigs of memory), and some of our software doesn't work at all with 64-bit in this corporate environment.

I'll definitely see if we can get a really good gamer card and compare, but I remain a little skeptical about their compatibility with all the different software we use. Although our current cards aren't that great either, lol.
We ALWAYS get overlay problems.
Message 15 of 19
Anonymous
in reply to: Anonymous

I found this link (http://www.tomshardware.com/de/GTX-280-260-GT200-Geforce-Nvidia,testberichte-240063-22.html), and this table (http://media.bestofmicro.com/M/9/110961/original/062.gif) shows impressive data: under the SPECViewperf 10 benchmark, the 3870 X2's score was about twice that of the GTX 280. I've been buying CG hardware for more than 15 years, so I like the NV initiatives a lot, like the upcoming hardware Mental Ray one - truly gold for CG professionals. But those numbers show very disappointing performance for the GTX 280, and we all know it is a card with a very impressive GPU. If anyone could post links or info on ViewPerf 10 and other benchmarks with the GTX 280, I would be very glad. What's the point? Well, NV could boost their sales numbers a lot if they dropped this professional-versus-gaming-boards nonsense. Today, all the technological differences between the "gaming" cards and the "Quadros" are vaporware - and we all know this. The result is clear: NV is losing a lot of key and loyal customers - I'm one of them.
Message 16 of 19
eodeo
in reply to: Anonymous

I found this link (http://www.tomshardware.com/de/GTX-280-260-GT200-Geforce-Nvidia,testberichte-240063-22.html), and this table (http://media.bestofmicro.com/M/9/110961/original/062.gif) shows impressive data: under the SPECViewperf 10 benchmark, the 3870 X2's score was about twice that of the GTX 280. I've been buying CG hardware for more than 15 years, so I like the NV initiatives a lot, like the upcoming hardware Mental Ray one - truly gold for CG professionals. But those numbers show very disappointing performance for the GTX 280, and we all know it is a card with a very impressive GPU. If anyone could post links or info on ViewPerf 10 and other benchmarks with the GTX 280, I would be very glad. What's the point? Well, NV could boost their sales numbers a lot if they dropped this professional-versus-gaming-boards nonsense. Today, all the technological differences between the "gaming" cards and the "Quadros" are vaporware - and we all know this. The result is clear: NV is losing a lot of key and loyal customers - I'm one of them.


You need to realize a couple of things here:
1) The X2 card acts as a single-GPU card, since no "pro" program can use SLI/CrossFire/X2 setups; it will act as a single GPU here. This means the previous-generation 3870 is 2x faster than the GTX 280 in that test. This can be confusing at first, until you realize:
2) The GTX 280 has 240 unified shaders. The HD 3870 has 320.

ATI's next-gen card, out to compete with the GTX 280, is the HD 4870. It has 800 (!) unified processors.

I think you get my point; the math is very simple.

Even so, those unified processors are not directly comparable between ATI and NVIDIA cards, and seeing how the 9800 GTX has 128 and is not 2x slower than the GTX 280 just goes to show that not all of them are being utilized, and that SPECheatTest prefers fewer, faster processors (9800 GTX) over more, slower ones (GTX 280) - although it does show that the ATI card uses its arsenal far better.

To read more: http://www.tomshardware.com/reviews/FireGL-Quadro-Workstation,1995.html - scroll down to the comments and find my name, "eodeo". It's a bit wordy... be forewarned.
Message 17 of 19
Anonymous
in reply to: Anonymous

Hi Eodeo,

I completely agree with your point - if an Autodesk app can have full acceleration on mainstream hardware, why not the apps written exclusively for OpenGL? This attitude, today, is shameful on the hardware manufacturers' part, if not outright illegal. ATI has some interesting features on higher-end cards, and these can add significant value for the professional, but charging several times more for the same or even inferior hardware is unjustifiable. It's nonsense. I sincerely hope to see this kind of behavior vanish in the coming months - one way or another.
Message 18 of 19
Anonymous
in reply to: eodeo

I am a 3ds Max user and I have a problem with the Reactor physics simulator. First, some background: I have a scene in which many spheres with a soft body modifier drop into a container, and it takes a lot of time to create the animation. Also, if I increase the number of spheres, it runs out of memory. Therefore I upgraded my system from an Intel Core 2 Duo T7100 CPU at 1.8GHz, 2GB of memory, and a 32-bit operating system to a Core i7 920 at 2.66GHz, 6GB of memory, and a 64-bit operating system.
My first problem is that on the previous system the Reactor engine used 50% of the CPU, but on the new one it uses only 12%. How can I use the whole 4-core 2.66GHz CPU?
The second one is: why does the process take only 500MB of memory when 5.2GB is available?
I wonder if you could help speed up the process and tell me whether other hardware, such as the GPU, affects this issue. (I have an Asus 9800 now and my motherboard is an MSI X58 Pro.)
I would be so grateful if you could respond as soon as possible.
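The 12% figure lines up with the single-core point made earlier in the thread: a Core i7 920 with Hyper-Threading exposes 8 logical cores, so one fully loaded thread reads as roughly 100/8 ≈ 12.5% total CPU, just as one thread on the old dual-core read as about 50%. A minimal sketch of that arithmetic (hypothetical Python, nothing Reactor-specific):

import multiprocessing

# A single-threaded engine can only saturate one logical core, so the OS
# reports it as roughly 100% divided by the number of logical cores.
def single_thread_cpu_percent(logical_cores):
    return 100.0 / logical_cores

print(single_thread_cpu_percent(2))   # ~50%   on the Core 2 Duo T7100
print(single_thread_cpu_percent(8))   # ~12.5% on the Core i7 920 (4 cores + Hyper-Threading)
print(single_thread_cpu_percent(multiprocessing.cpu_count()))  # whatever machine runs this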
