
Introduction to Message Passing Interface – MPI

The Message Passing Interface (MPI) is a standard library specification that allows us to perform parallel processing by spreading a task across multiple processors or hosts (computers). A process on each processor executes its part of the task, and the processes communicate their results by message passing.

MPI assumes a distributed-memory model and works the same whether we are using one machine (multiple processors that are part of the same machine) or a cluster of computers.

It is important to realize that all MPI defines is a model for passing messages, together with the definitions of the routines and data types that support that model.

MPI is language-independent, but it defines language bindings for common programming languages such as C, C++, and Fortran.

Features of MPI

  1. Portability: There is no need to modify your source code when you port an application to a different platform that is compliant with the MPI standard.
  2. Standardization: MPI is the only message-passing library that can be considered a standard, and it is supported on virtually all HPC platforms.

MPI program structure

[Figure: MPI program structure. Source: computing.llnl.gov/tutorials/mpi/]

MPI datatypes and execution routines

MPI datatypes: MPI provides basic architecture-independent datatypes for C programs, such as MPI_INT, MPI_FLOAT, MPI_LONG, MPI_CHAR, and MPI_DOUBLE.

Refer to the MPI standard documentation for a full list of MPI data types.

MPI execution routines: MPI execution routines are a set of functions used to set up and manage the execution environment.

MPI uses objects called communicators and groups to define which collection of processes may communicate with each other. Most MPI routines require us to specify a communicator as an argument.

The following are some of the most commonly used MPI routines:

 MPI_Init  – Initializes the MPI execution environment. It must be the first MPI function called, and it is called exactly once in every MPI program, before any other MPI function.

MPI_Init (&argc,&argv)             //arguments are optional
MPI_INIT (ierr)

 MPI_Comm_size  – Returns the number of processes in the specified communicator. If the communicator is MPI_COMM_WORLD, this is the number of MPI tasks available to your application.

MPI_Comm_size (comm,&size)
MPI_COMM_SIZE (comm,size,ierr)

 MPI_Comm_rank  – Returns the rank (task ID) of the calling process. Each process is assigned a unique rank between 0 and the total number of processes − 1.

MPI_Comm_rank (comm,&rank)
MPI_COMM_RANK (comm,rank,ierr)

 MPI_Finalize  – Ends the MPI computation and performs general cleanup before the application terminates. It is the last MPI function called in every MPI program; no other MPI functions can be called after it.

MPI_Finalize ()

This is all about the basics of the Message Passing Interface (MPI). In the next lesson, we will learn how to write Hello World in MPI.

Did we miss something, or do you want to add some other key points? 🤔
Please comment. 😊
