Can someone explain what is happening here? (i7-6700, 32 GB RAM, Nvidia GTX 745)
With any NT toolpath, Vortex, or surface milling, trying to 3D simulate is an exercise in futility. I tried to 3D simulate a simple side feature using NT spiral and it took probably 20+ minutes to get through. Running in centerline is pretty much instant. During the 3D simulation the video card was barely being used, even though "use graphics hardware" is checked.
In my mind the toolpath has to be calculated in both cases, with the only difference being that 3D shows you a visual representation of it. That being said, why is there no actual graphics hardware usage? You can watch it use the processor for everything.
Oddly, I can run the same simulation at home on my personal machine (mobile i7, 32 GB RAM, RTX 3060) and it's much, much quicker. Granted, it's a much better graphics card, but if the hardware isn't being used, why would that matter? (I will have to update this when I get home today; I can't remember what it shows regarding graphics card usage.) Either way, it offers no real answer; maybe for some reason it uses the 3060 but not the 745, which would be odd. (Both have the most recent drivers, both run Win10, and both are running the newest release of FeatureCAM.) Side note: it feels much, much slower in the 2025 release than it did previously.
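For when I check tonight, here is a minimal sketch of how I plan to log GPU load alongside Task Manager. It just polls nvidia-smi once a second; it assumes Python is installed and that nvidia-smi is on the PATH (the query flags below are standard nvidia-smi options), so treat it as a rough helper rather than anything official:

```python
import subprocess
import time

# Poll nvidia-smi once a second while the 3D simulation runs and print
# GPU core / video-memory utilization with a timestamp.
# Assumes the NVIDIA driver is installed so nvidia-smi is on the PATH.
while True:
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu,utilization.memory",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(time.strftime("%H:%M:%S"), out)
    time.sleep(1)
```

If both machines show near-zero GPU utilization during the 3D simulation, that would at least confirm the speed difference isn't coming from the graphics card.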
Is there a way to run the centerline simulation, save the math, and then run a 3D simulation that just displays it without having to calculate the toolpath again? Not sure if that would even help, but it was a thought. Also, is there a list of "approved" cards? There is no FeatureCAM category on the Autodesk certified graphics hardware page. Is better graphics support on the roadmap? This is 3D software, so making use of actual graphics hardware would make sense.