Message Passing Interface (MPI) is a standard library specification that allows us to perform parallel processing by spreading a task across multiple processors or hosts (computers). The processes on each processor execute part of the task and communicate their results by passing messages.
MPI assumes a distributed memory model and works the same way whether we are using one machine (multiple processors that are part of the same machine) or a cluster of computers.
It is important to realize that all MPI defines is a model for passing messages, together with the methods and data types that support that model.
MPI is language-independent, but it defines language bindings for common programming languages such as C, C++, and Fortran.
Features of MPI
- Portability: There is no need to modify your source code when you port an application to a different platform that is compliant with the MPI standard.
- Standardization: MPI is the de facto standard for message passing and is supported on virtually all HPC platforms.
MPI program structure
MPI datatypes and execution routines
MPI datatypes: MPI provides basic architecture-independent datatypes for C programs, such as MPI_INT, MPI_FLOAT, MPI_LONG, MPI_CHAR, and MPI_DOUBLE.
Refer to the MPI standard for the full list of MPI data types.
MPI execution routines: MPI execution routines are a set of functions used to set up and tear down the execution environment.
MPI uses objects called communicators and groups to define which collection of processes may communicate with each other. Most MPI routines require us to specify a communicator as an argument.
Following are some of the most commonly used MPI routines:
MPI_Init – Initializes the MPI execution environment. It must be called exactly once in every MPI program, before any other MPI function.
C:       MPI_Init(&argc, &argv)   // arguments may be NULL in MPI-2 and later
Fortran: MPI_INIT(ierr)
MPI_Comm_size – Returns how many processes belong to the specified communicator. If the communicator is MPI_COMM_WORLD, this is the total number of MPI tasks available to your application.
C:       MPI_Comm_size(comm, &size)
Fortran: MPI_COMM_SIZE(comm, size, ierr)
MPI_Comm_rank – Returns the rank (task ID) of the calling process within the specified communicator. Each process is assigned a unique rank between 0 and the total number of processes minus 1.
C:       MPI_Comm_rank(comm, &rank)
Fortran: MPI_COMM_RANK(comm, rank, ierr)
MPI_Finalize – Terminates the MPI execution environment and performs general cleanup before the application exits. It is the last MPI function called in every MPI program; no MPI functions may be called after it.
C:       MPI_Finalize()
Fortran: MPI_FINALIZE(ierr)