MPI using Fortran
This is a short introduction to installing and using the Message Passing Interface (MPI) with Fortran on Ubuntu.
1. What is MPI?
MPI is a library of routines for writing parallel programs in Fortran, C, and C++. Standard Fortran, C, and C++ include no constructs supporting parallelism, so vendors developed a variety of extensions to allow users of those languages to build parallel applications. The result was a spate of non-portable applications, and a need to retrain programmers for each platform they worked on.
MPI is designed to allow users to create programs that can run efficiently on most parallel architectures. MPI can also support distributed program execution on heterogenous hardware. That is, you may run a program that starts processes on multiple computer systems to work on the same problem. This is useful with a workstation farm.
All MPI routines in Fortran (except MPI_WTIME and MPI_WTICK) have an additional argument ierr at the end of the argument list. ierr is an integer and has the same meaning as the return value of the routine in C. In Fortran, MPI routines are subroutines and are invoked with the call statement. All MPI objects (e.g., MPI_Datatype, MPI_Comm) are of type INTEGER in Fortran.
View the number of physical CPUs (sockets):
$ grep 'physical id' /proc/cpuinfo | sort -u | wc -l
View the number of cores per socket:
$ grep 'core id' /proc/cpuinfo | sort -u | wc -l
View the number of logical processors:
$ grep 'processor' /proc/cpuinfo | sort -u | wc -l
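The three commands above can be combined into one short script. A minimal sketch, assuming a Linux system where /proc/cpuinfo exposes the physical id, core id, and processor fields (the variable names are illustrative):

```shell
# Summarize the CPU topology from /proc/cpuinfo (Linux only).
sockets=$(grep 'physical id' /proc/cpuinfo | sort -u | wc -l)   # physical CPUs
cores=$(grep 'core id' /proc/cpuinfo | sort -u | wc -l)         # distinct core ids per socket
procs=$(grep -c '^processor' /proc/cpuinfo)                     # logical processors
echo "physical CPUs: $sockets  cores per socket: $cores  logical processors: $procs"
```

The logical-processor count is what matters when choosing -np for mpirun on a single machine.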
2. Installation
Go to mpich.org and download the newest version of MPICH; here it is mpich-3.2.1 (stable release). Change into the directory where the mpich tar.gz was saved.
#[note: Unpack the tar file]
$ tar xvf mpich-3.2.1.tar.gz
#[note: The current directory will now contain a sub-directory mpich-3.2.1]
....
$ ls mpich-3.2.1
$ cd mpich-3.2.1
#[note: Configure mpich-3.2.1, '/usr/local/mpich' is my installation path]
$ ./configure --prefix=/usr/local/mpich
....
#[note: Build mpich-3.2.1]
$ make
....
#[note: Install the MPICH3 commands]
$ sudo make install
....
#[note: Add the bin directory to the path]
$ echo 'export PATH=/usr/local/mpich/bin:$PATH' >> ~/.bashrc
#[note: reload ~/.bashrc so the new PATH takes effect]
$ source ~/.bashrc
#[note: test that MPICH is installed correctly]
$ mpirun -np 10 ./examples/cpi
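If mpirun is not found after these steps, the PATH line is the usual culprit. A quick sketch to confirm that the bin directory actually made it onto PATH (assumes the /usr/local/mpich prefix used in the configure step above):

```shell
# Prepend the mpich bin directory, then verify it is present on PATH.
export PATH=/usr/local/mpich/bin:$PATH
case ":$PATH:" in
  *:/usr/local/mpich/bin:*) result="mpich bin dir is on PATH" ;;
  *)                        result="mpich bin dir is missing from PATH" ;;
esac
echo "$result"
```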
3. Hello world
Here is the basic Hello world program in Fortran 77 using MPI:
# hello_world.f
      program hello_world
      include "mpif.h"          ! header file, required for all programs
                                ! that make MPI library calls
      integer ierr
      call MPI_INIT(ierr)       ! initializes the MPI execution environment
      print *, "Hello world"
      call MPI_FINALIZE(ierr)   ! terminates the MPI execution environment
      stop
      end
Compile hello_world.f with a command like:
$ mpif77 -o hello_world.out hello_world.f
This creates an executable file called hello_world.out. Then execute it with the mpirun command, as in the following session segment:
$ mpirun -np 4 ./hello_world.out
Hello world
Hello world
Hello world
Hello world
$
When you launch the program with mpirun -np 4, the process manager starts four copies of the program; the process with rank 0 is sometimes called the parent or root process, and the others child processes. The call to MPI_INIT in each copy initializes the MPI execution environment so that the processes can communicate with one another.
Each process then continues executing its own copy of the hello world program. The next statement in every copy is the print statement, and each process prints Hello world as directed. Since terminal output from every process is directed to the same terminal, we see four lines saying Hello world.
4. Identifying processes
As written, we cannot tell which Hello world line was printed by which process. To identify a process we need some sort of process ID and a routine that lets a process find its own ID. MPI assigns an integer to each process, beginning with 0 for the parent process and incrementing by one for each additional process. A process ID is also called its rank. MPI provides routines that let a process determine its rank, as well as the number of processes that have been created.
Here is an example that identifies each process (timed with the cpu_time intrinsic):
# process_1.f
      program Process1
      include "mpif.h"
      integer my_id, num_procs, ierr
      real start, finish
      call cpu_time(start)
      call MPI_INIT(ierr)
c     rank of the calling process in the communicator
      call MPI_COMM_RANK(MPI_COMM_WORLD, my_id, ierr)
c     number of processes in the communicator
      call MPI_COMM_SIZE(MPI_COMM_WORLD, num_procs, ierr)
      write(*,'(a,i2,a,i2,a)') "Hello world! I'm", my_id,
     &     " out of", num_procs, " processes."
      call MPI_FINALIZE(ierr)
      call cpu_time(finish)
      write(*,'(a,i2,a,f8.5,a)') "Process", my_id,
     &     " operation time :", finish-start, " seconds"
      stop
      end
Compile and run this program:
$ mpif77 -o process_1.out process_1.f
$ mpirun -np 4 ./process_1.out
Hello world! I'm 0 out of 4 processes.
Hello world! I'm 1 out of 4 processes.
Hello world! I'm 2 out of 4 processes.
Hello world! I'm 3 out of 4 processes.
Process 1 operation time : 0.00383 seconds
Process 2 operation time : 0.00365 seconds
Process 3 operation time : 0.00341 seconds
Process 0 operation time : 0.00409 seconds
$
5. Different tasks
To have each process perform a different task, and then gather the distinct results from every process at a single destination process, we structure the program like this:
# process_2.f
      program Process2
      include "mpif.h"
      integer my_id, num_procs, num_cyc, ierr
      integer fl(500), fl_all(500)
      real start, finish
      parameter(n_value=100)
      call cpu_time(start)
      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, my_id, ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, num_procs, ierr)
c     split the n_value elements into equal contiguous blocks,
c     one per process (assumes n_value is divisible by num_procs)
      num_cyc = n_value/num_procs
      mytaskid1 = num_cyc*my_id + 1
      mytaskid2 = num_cyc*(my_id+1)
      do i = mytaskid1, mytaskid2
         fl(i) = i
      enddo
c     gather every block to process 0; note the Fortran datatype
c     constant MPI_INTEGER (MPI_INT is the C constant)
      call MPI_GATHER(fl(mytaskid1), num_cyc, MPI_INTEGER,
     &     fl_all, num_cyc, MPI_INTEGER,
     &     0, MPI_COMM_WORLD, ierr)
      if (my_id .eq. 0) then
         do i = 1, n_value
            write(*,*) fl_all(i)
         enddo
      endif
      call MPI_FINALIZE(ierr)
      call cpu_time(finish)
      write(*,'(f6.4)') finish-start
      stop
      end
Compile and run the program:
$ mpif77 -o process_2.out process_2.f
$ mpirun -np 4 ./process_2.out
1
2
...
...
99
100
0.0182
0.0166
0.0056
0.0179
$
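The index arithmetic in process_2.f splits the 100 elements into equal contiguous blocks, one per rank. That decomposition can be sketched with plain shell arithmetic, no MPI needed (variable names mirror the Fortran program):

```shell
# Block decomposition: rank r fills fl(num_cyc*r+1) .. fl(num_cyc*(r+1)).
n_value=100
num_procs=4
num_cyc=$((n_value / num_procs))   # assumes n_value is divisible by num_procs
for my_id in $(seq 0 $((num_procs - 1))); do
  lo=$((num_cyc * my_id + 1))
  hi=$((num_cyc * (my_id + 1)))
  echo "rank $my_id fills fl($lo) .. fl($hi)"
done
```

With 4 processes, rank 0 fills fl(1)..fl(25), rank 1 fills fl(26)..fl(50), and so on, which is exactly the layout MPI_GATHER reassembles at rank 0.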
Author Qiang
LastMod 2019-04-07