Message Passing Interface (MPI), a standard for communication among networked computers that facilitates parallel computation


Numerical integration problems can often be parallelized to share the computation among multiple processors. If the integration is over a box-shaped region of space, each processor can handle a box-shaped sub-region, passing boundary values to its neighbors when needed. Parallelization yields an overall speed-up by a factor that approaches the number of processors involved, provided the communication overhead remains small relative to the computation itself.

The MPI standard specifies exactly how data is passed between processes. An implementation such as Open MPI provides the code for the send/receive routines and the wrapper compilers (e.g., mpicc) required to produce MPI-capable executables. A parallel job can then be submitted to run on a cluster. The cluster's job scheduler (e.g., Slurm) assigns a number of processors and allows them to communicate via MPI while the job is running.
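A Slurm batch script for such a job might look like the following sketch; the job name, task count, time limit, and executable name are illustrative assumptions:

```shell
#!/bin/bash
#SBATCH --job-name=integrate      # illustrative job name
#SBATCH --ntasks=8                # request 8 MPI processes
#SBATCH --time=00:10:00           # wall-clock limit

# mpirun launches one copy of the executable per allocated task;
# ./integrate stands in for whatever MPI program was built with mpicc.
mpirun ./integrate
```

Submitting this with `sbatch` asks Slurm to allocate the requested tasks; `mpirun` (or Slurm's own `srun`) then starts the MPI processes, which communicate with each other via MPI for the duration of the job.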