At the moment it's not possible to run one analysis spread over many computers (a.k.a. distributed multi-processing, or DMP). Most solvers can, however, use multiple cores on the same machine.
DMP is a technology that sounds very interesting on the surface, but it can be very inefficient for time-based, iterative processes like we have with Moldflow. The way DMP works is that it breaks the model up into different 'blocks', and each 'block' is calculated by a separate computer. After every iteration and time step, the different computers have to synchronize and share where they 'ended up' in the previous analysis step. Then the next analysis step is calculated, another re-synchronization takes place, and so on. This data communication after every step is very inefficient.
Another hurdle is that it's very unlikely that all the machines complete a time step at the same time. Until the last machine has completed the time step, all the other machines sit idle.
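The cost of that per-step barrier can be sketched with a toy simulation (purely illustrative numbers and logic, not Moldflow's actual solver): each step only finishes when the slowest machine does, so faster machines accumulate idle time and overall parallel efficiency drops below 100% even before any communication cost is counted.

```python
import random

def simulate_dmp(num_workers, num_steps, seed=0):
    """Toy model of DMP-style barrier synchronization.

    Each time step, every worker computes its block, then all
    workers wait at a barrier until the slowest one finishes.
    The step's wall-clock cost is the MAX of the worker times;
    idle time is everything the faster workers spend waiting.
    (Illustrative sketch only -- not Moldflow internals.)
    """
    rng = random.Random(seed)
    wall_time = 0.0   # elapsed time with a barrier after every step
    busy_time = 0.0   # total useful compute across all workers
    for _ in range(num_steps):
        # uneven per-block cost: workers rarely finish together
        times = [rng.uniform(0.8, 1.2) for _ in range(num_workers)]
        wall_time += max(times)   # step ends when the slowest worker does
        busy_time += sum(times)
    # efficiency: useful work / (workers * elapsed wall time)
    return busy_time / (num_workers * wall_time)

eff = simulate_dmp(num_workers=8, num_steps=1000)
print(f"parallel efficiency: {eff:.1%}")
```

Even with only modest (±20%) variation in per-block cost and zero communication overhead, the efficiency lands well below 100%, and adding the synchronization traffic Hanno mentions would push it lower still.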
So, on the surface this sounds great, but the efficiency gains would be very disappointing. Having said that, this is still something that is continuously on our radar. DMP requires code to be developed from the ground up and is not something that can quickly be retrofitted to the existing code base.
Product Manager for Autodesk Simulation Moldflow products
I understand, thank you very much Hanno. I asked this question because I need to run a 3D analysis, which always gives me an "Out of memory" error. I made some modifications to resolve this error: I used the "Scale" tool to reduce the size of the part and I reduced the mesh density. Although this has improved the situation, I still cannot complete a full fill-pack-warp analysis. I am running gas-assisted injection simulations.