18. How do I launch parallel jobs using MPI?

- I followed this tutorial: http://mpitutorial.com/tutorials/mpi-hello-world/
- First, make sure you can log in without a password between the nodes.
$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/USER/jny25782/.ssh/id_rsa):
Created directory '/home/USER/jny25782/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/USER/jny25782/.ssh/id_rsa.
Your public key has been saved in /home/USER/jny25782/.ssh/id_rsa.pub.
The key fingerprint is:
55:e3:ae:d5:90:49:00:ab:cd:5c:4c:cf:e1:80:b9:66 jny25782@fbv-n67
The key's randomart image is:
+--[ RSA 2048]----+
|          .++.=  |
|         o+ O =  |
|        ..+ O    |
|       =Eo .   o |
|       .oS o .   |
|           o     |
|            .    |
|                 |
|                 |
+-----------------+
$ cp .ssh/id_rsa.pub .ssh/authorized_keys
$ _
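The interactive session above can also be scripted. A non-interactive sketch (it writes into a scratch directory so it is safe to try; on the cluster you would target ~/.ssh instead, and the copy-to-authorized_keys trick only works because the home directory is shared across the nodes):

```shell
# Non-interactive version of the key setup above.
# NOTE: uses a scratch directory instead of ~/.ssh so the sketch is harmless to run.
tmp=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -N "" -q -f "$tmp/id_rsa"   # -N "" = empty passphrase
cat "$tmp/id_rsa.pub" > "$tmp/authorized_keys"        # home dir is shared, so this is enough
chmod 600 "$tmp/authorized_keys"
ls "$tmp"
```

Where the home directory is not shared, `ssh-copy-id user@node` per node achieves the same thing.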
- Create the host_file with the names of the computers you wish to run on, one per line.
fbv-n65
fbv-n67
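For example, the file can be written non-interactively with a heredoc:

```shell
# Create host_file with one hostname per line.
cat > host_file <<'EOF'
fbv-n65
fbv-n67
EOF
cat host_file
```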
- Make sure you have the host keys in your ~/.ssh/known_hosts file!
$ for i in fbv-n65 fbv-n67 ; do ssh -o StrictHostKeyChecking=no $i hostname ; done
Warning: Permanently added 'fbv-n65,192.168.0.165' (ECDSA) to the list of known hosts.
fbv-n65
Warning: Permanently added 'fbv-n67,192.168.0.167' (ECDSA) to the list of known hosts.
fbv-n67
$ _
- Create the Makefile (note that the recipe lines must start with a tab character):
EXECS=mpi_hello_world
MPICC?=mpicc

all: ${EXECS}

mpi_hello_world: mpi_hello_world.c
	${MPICC} -o mpi_hello_world mpi_hello_world.c

clean:
	rm ${EXECS}
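Because the Makefile uses the conditional assignment `MPICC?=mpicc`, you can point it at a different compiler wrapper per invocation without editing the file. A sketch using a dry run (`make -n` prints the command instead of executing it; the Makefile is recreated in a scratch directory so the sketch is self-contained, and /opt/mpich/bin/mpicc is a hypothetical path):

```shell
# Demonstrate overriding MPICC?= from the command line, via a dry run.
tmp=$(mktemp -d)
printf 'EXECS=mpi_hello_world\nMPICC?=mpicc\nall: ${EXECS}\nmpi_hello_world: mpi_hello_world.c\n\t${MPICC} -o mpi_hello_world mpi_hello_world.c\n' > "$tmp/Makefile"
touch "$tmp/mpi_hello_world.c"                     # dummy source so the target fires
make -C "$tmp" -n MPICC=/opt/mpich/bin/mpicc       # prints the overridden compile command
```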
- Create the mpi_hello_world.c
// Author: Wes Kendall
// Copyright 2011 www.mpitutorial.com
// This code is provided freely with the tutorials on mpitutorial.com. Feel
// free to modify it for your own use. Any distribution of the code must
// either provide a link to www.mpitutorial.com or keep this header intact.
//
// An intro MPI hello world program that uses MPI_Init, MPI_Comm_size,
// MPI_Comm_rank, MPI_Finalize, and MPI_Get_processor_name.
//
#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
  // Initialize the MPI environment. The two arguments to MPI_Init are not
  // currently used by MPI implementations, but are there in case future
  // implementations might need the arguments.
  MPI_Init(NULL, NULL);

  // Get the number of processes
  int world_size;
  MPI_Comm_size(MPI_COMM_WORLD, &world_size);

  // Get the rank of the process
  int world_rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

  // Get the name of the processor
  char processor_name[MPI_MAX_PROCESSOR_NAME];
  int name_len;
  MPI_Get_processor_name(processor_name, &name_len);

  // Print off a hello world message
  printf("Hello world from processor %s, rank %d out of %d processors\n",
         processor_name, world_rank, world_size);

  // Finalize the MPI environment. No more MPI calls can be made after this
  MPI_Finalize();
  return 0;
}
- Build with make
$ make
mpicc -o mpi_hello_world mpi_hello_world.c
$ _
- Run your job (the -f host_file option is MPICH mpirun syntax; Open MPI's mpirun uses --hostfile instead):
$ mpirun -n 2 -f host_file ./mpi_hello_world
Hello world from processor fbv-n65, rank 0 out of 2 processors
Hello world from processor fbv-n67, rank 1 out of 2 processors
$ _
- Launch more processes on each node by adding host:n lines to the host_file:
fbv-n65:1
fbv-n65:2
fbv-n65:3
fbv-n67:1
fbv-n67:2
fbv-n67:3
fbv-n67:4
fbv-n67:5
- Launch:
$ mpirun -n 8 -f host_file ./mpi_hello_world
Hello world from processor fbv-n65, rank 2 out of 8 processors
Hello world from processor fbv-n65, rank 3 out of 8 processors
Hello world from processor fbv-n65, rank 4 out of 8 processors
Hello world from processor fbv-n65, rank 5 out of 8 processors
Hello world from processor fbv-n65, rank 0 out of 8 processors
Hello world from processor fbv-n65, rank 1 out of 8 processors
Hello world from processor fbv-n67, rank 6 out of 8 processors
Hello world from processor fbv-n67, rank 7 out of 8 processors
$ _
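As I understand MPICH's hostfile syntax, a host:n entry contributes n process slots and mpirun fills the file top to bottom; that is why ranks 0-5 all land on fbv-n65 above (1+2+3 = 6 slots) before fbv-n67 gets the rest. A quick sketch to sum the slots in such a host_file (a bare hostname counts as one slot):

```shell
# Recreate the host_file from above and total its slot counts.
cat > host_file <<'EOF'
fbv-n65:1
fbv-n65:2
fbv-n65:3
fbv-n67:1
fbv-n67:2
fbv-n67:3
fbv-n67:4
fbv-n67:5
EOF
awk -F: '{ total += (NF > 1 ? $2 : 1) } END { print total }' host_file
# prints 21
```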