Hello all
I have been testing the GPU version of the AMG solver released in LABS tech preview.
It works OK for simple models, but fails for complex ones.
However, even when it does work, it is far slower than the SPARSE solver on the models I am solving.
Is Autodesk working on a GPU version of the SPARSE solver? I think this is the way of the future and could be orders of magnitude faster on powerful CUDA cards.
Also, a GPU (or at least multi-threaded) version of the mesh engine would be VERY beneficial. It currently takes me about twice as long to mesh my models as it does to solve them.
Cheers
Matt
Although a GPU can provide higher raw computational throughput compared to a multi-core CPU, the overall performance depends on many factors. One of the most important is whether the algorithm can keep hundreds of parallel threads busy consistently over the computing period.
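To make that point concrete, here is a minimal sketch (not tied to any Autodesk code, just Amdahl's law) showing why a GPU only pays off when most of the algorithm actually runs in parallel: the serial fraction caps the achievable speedup no matter how many CUDA threads are available.

```python
# Amdahl's law: overall speedup when only a fraction p of the work
# parallelizes across n workers. Illustrative only -- real solver
# performance also depends on memory bandwidth, sparsity pattern,
# and CPU<->GPU transfer costs.
def amdahl_speedup(p: float, n: int) -> float:
    """Speedup for parallel fraction p running on n parallel workers."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 10,000 threads, a phase that is only 90% parallel
# tops out just below 10x:
print(round(amdahl_speedup(0.90, 10_000), 2))  # ~9.99
print(round(amdahl_speedup(0.99, 10_000), 2))  # ~99.02
```

This is one reason a sparse direct solver, with its largely sequential factorization phases, is harder to accelerate on a GPU than an iterative method like AMG whose kernels are mostly parallel matrix-vector work.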
The size of the model also matters. What is the size of your testing model?