I wanted to make this post to talk about the GPU solving potential in finite element analysis. Recently, our company has been writing specifications for a new, high-powered computer to run very large dynamic MES simulations that currently take 52 hours on a 10-core CPU machine. One of the first areas to look into for pure solving speed and power is GPU processing.

I have been speaking with Autodesk about this over the last few weeks, and after asking for performance data on the latest Tesla cards and their impact on solve time in the AMG-MF solver, it came to my attention that no performance data exists anywhere. Searching for results from the Moldflow version of the solver, I found only a very small amount of data showing performance with and without a GPU as a function of core count, and that data used a now very outdated GPU. More importantly, in my search I found that the GPU solver was removed from the latest Moldflow release. When I raised this with the Autodesk representative, he said the following:

"Now to the topic of GPU. I have no performance comparison test results available to me. I got no indication that any further development is going into the GPU technology. In fact, as the link you sent me suggests, it's quite the opposite. We are not offering that in Moldflow. Very few users took advantage of it to justify continued development in that area. As such, I would not recommend going the GPU route."

For any company the key thing is, very obviously, profit. As a Simulation Mechanical user with important runs that can take 50+ hours, I am looking for a computer that cuts solution time as much as possible to speed up results for finite-element-driven design. Anyone with access to the internet can find the potentially massive benefits of GPU-accelerated models.
I can quite easily find data showing significant simulation speed-ups using GPU acceleration. Why, then, are so few companies willing to invest a relatively trivial amount of money in a GPU to see potentially massive reductions in solve time? These cards would pay for themselves so quickly that not buying one seems ridiculous: at even a conservative 10% reduction in solve time, roughly 13 simulation runs would have the card paying for itself for our company, and I might do that many every single month.

The main problem I see is that Autodesk doesn't appear to be developing GPU solving in any fashion at all, and all of the performance data I can find for GPU-accelerated finite element comes from a heavily promoted partnership between NVIDIA and ANSYS, where there is a ton of marketing and benchmark data making users aware of the benefits of GPU processing. If people don't know about the benefits of GPU solving, why would they spend any money on it? GPU solving through Autodesk seems to be lagging severely behind, and the absence of any marketing or promotion of the technology leaves users completely oblivious to what could be. The only logical explanation for why no one takes advantage of GPU acceleration is that its benefits are never promoted. I'm incredulous that Autodesk seems to be going in the complete opposite direction, dropping GPU-capable solvers rather than developing them, when GPU acceleration appears to be one of the next major improvements in FEA for these very large models. Active development of GPU acceleration is a direction I would love to see Autodesk take.
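To make the payback argument concrete, here is a rough break-even sketch in Python. The 52-hour run time and the conservative 10% speed-up come from the post; the card price and the hourly value of waiting on results are purely assumed figures for illustration, not real quotes, so plug in your own numbers:

```python
# Rough break-even estimate for a GPU card on long FEA solves.
# RUN_HOURS_CPU and SPEEDUP are from the post; the cost figures
# below are assumptions for illustration only.

CARD_COST = 4000.0       # assumed GPU card price, USD (hypothetical)
RUN_HOURS_CPU = 52.0     # CPU-only solve time per run, from the post
SPEEDUP = 0.10           # conservative 10% reduction in solve time
VALUE_PER_HOUR = 60.0    # assumed value of an hour of solve time (hypothetical)

hours_saved_per_run = RUN_HOURS_CPU * SPEEDUP           # 5.2 hours saved per run
savings_per_run = hours_saved_per_run * VALUE_PER_HOUR  # dollars recovered per run
breakeven_runs = CARD_COST / savings_per_run            # runs until the card is paid off

print(f"Hours saved per run: {hours_saved_per_run:.1f}")
print(f"Savings per run: ${savings_per_run:.0f}")
print(f"Break-even after about {breakeven_runs:.1f} runs")
```

With these assumed figures the card pays for itself in roughly 13 runs, which is in line with the estimate above; at one or more such runs per month, that is about a year or less of normal use.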