This is another piece in an Ubuntu puzzle that, when assembled, will describe how to set up an MPI (Message Passing Interface) computer cluster for running parallel calculations (upcoming). As a brief explanation of what's going on: many of the MPI (OpenMPI, MPICH, MPICH2) set-up procedures you may stumble across online describe how to use the Network File System (NFS) protocol to share one directory on a host node (head node/server node/master node/whatever) of your cluster so that, by mounting a directory on each guest node (client node/slave node/whatever) to this network-accessible drive, the head and guest nodes all see the same work directory and executables (both MPI and your program of choice). There are more clever ways to set the cluster up that will likely run slightly faster than NFS allows, but we'll ignore those for the moment. The install procedure below is Ubuntu-specific only in the apt-get stage (NFS support is not part of the default installation). After all of the components are installed (post-apt-get), the setup should be Linux-universal.
LEGEND
Text in black – my ramblings.
Text in bold red – things you will type in the Terminal.
Text in green – text you will either see or will type into files (using pico, my preference).
Below is everything I need to do in Ubuntu to get NFS working for the cluster. I'll be dividing the installation procedure into HOST and GUEST sections for organizational purposes.
1. HOST Node Installation
a. sudo apt-get install nfs-kernel-server nfs-common portmap
My apt-get list is becoming gigantic as part of the cluster work, but I'm only focusing on NFS right now (if you intend on running any MPI-based code, you also need to include SSH), so we only need to deal with these three packages. They will also pull in libevent1, libgssglue1, libnfsidmap2, and librpcsecgss3, but that's the beauty of letting apt-get do the dirty work (see this previous post if you find yourself installing programs in Ubuntu while [shiver] not online). You'll see plenty of output and, hopefully, no errors.
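If you want a quick (and entirely optional) sanity check that everything landed, the package database should show all three packages with an "ii" at the start of each line:

dpkg -l | grep -E "nfs-kernel-server|nfs-common|portmap"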
b. sudo dpkg-reconfigure portmap
Installing packages with NFS in the title makes sense. What's the deal with portmap? NFS uses remote procedure calls (RPCs) for communication. Portmap is a server/service that maps those RPCs to their proper services, acting as a translator between RPC program numbers and DARPA port numbers (yup), thereby directing cluster traffic.
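You can see this translation table for yourself once the NFS pieces are installed and running (which they should be by the end of this HOST section). The sample output below is trimmed, and the mountd port will vary from machine to machine, but the portmapper (111) and nfs (2049) entries are standard:

rpcinfo -p

program vers proto port
100000 2 tcp 111 portmapper
100003 2 tcp 2049 nfs
100005 1 udp [port] mountd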
The portmap configuration file (/etc/default/portmap) looks like the following:
# Portmap configuration file
#
# Note: if you manually edit this configuration file,
# portmap configuration scripts will avoid modifying it
# (for example, by running 'dpkg-reconfigure portmap').
# If you want portmap to listen only to the loopback
# interface, uncomment the following line (it will be
# uncommented automatically if you configure this
# through debconf).
#OPTIONS="-i 127.0.0.1"
For the purposes of the cluster work to be done in upcoming posts, you do NOT want to uncomment the OPTIONS line (which would bind portmap to the loopback interface only, hiding it from the other nodes on your network), not that you would start randomly uncommenting lines in the first place.
c. sudo /etc/init.d/portmap restart
If you make changes to the /etc/default/portmap configuration file, reconfigure portmap, etc., you'll need to restart portmap for the changes to take effect. It is recommended that you run this restart upon installation regardless (especially having run dpkg-reconfigure portmap above).
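To confirm portmap is listening on all interfaces (and not just loopback, per the OPTIONS discussion above), something like the following should show it bound to 0.0.0.0 on port 111 (the PID will, of course, differ on your machine):

sudo netstat -tulpn | grep portmap
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN [pid]/portmap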
d. sudo mkdir /[work_directory]
This makes the directory to be shared among all of the other cluster machines. Call it what you will.
e. sudo chmod a+rwx /[work_directory]
We now provide carte blanche to this directory so anyone can read, write, and execute programs in this directory.
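A quick listing should now show full permissions for user, group, and everyone else (the date, obviously, will be your own):

ls -ld /[work_directory]
drwxrwxrwx 2 root root 4096 [date] /[work_directory]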
f. sudo pico /etc/exports
The last file modification step on the HOST node will export /[work_directory] as an NFS-accessible directory that the GUEST machines can mount, and it will preserve this NFS accessibility until you change the settings (or the directory), including across reboots.
# /etc/exports: the access control list for filesystems which may be exported
# to NFS clients. See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes hostname1(rw,sync) hostname2(ro,sync)
#
# Example for NFSv4:
# /srv/nfs4 gss/krb5i(rw,sync,fsid=0,crossmnt)
# /srv/nfs4/homes gss/krb5i(rw,sync)
#
/[work_directory] *(rw,sync)
To translate: * = open the directory up to all clients; rw = read-write privileges for the specified clients; sync = commit all changes to the disk before the server responds to the request that made them. The sync option is also worded as "read/write and all transfers to disk are committed to the disk before the write request by the client is completed" and "this option does not allow the server to reply to requests before the changes made by the request are written to the disk," which may or may not help to enlighten. Basically, it makes sure the NEXT modification to some file doesn't occur until the CURRENT operation on that file is complete.
This only scratches the surface of all things /etc/exports but is enough for my purposes (anyone can mount the drive and read/write; if your machine is online, let SSH take care of the rest).
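If opening the directory to the world makes you twitchy, /etc/exports will happily take tighter specifications in place of the *. The lines below are hypothetical examples only (the subnet and the guest1/guest2 hostnames are placeholders for your own; no_subtree_check simply quiets a common exportfs warning):

/[work_directory] 192.168.1.0/24(rw,sync,no_subtree_check)
/[work_directory] guest1(rw,sync) guest2(ro,sync)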
g. sudo /etc/init.d/nfs-kernel-server restart
Having made the changes to /etc/exports, we restart the NFS server so the new export list takes effect.
h. sudo exportfs -a
If you RTFM, you know "the exportfs command is used to maintain the current table of exported file systems for NFS. This list is kept in a separate file named /var/lib/nfs/xtab which is read by mountd when a remote host requests access to mount a file tree, and parts of the list which are active are kept in the kernel's export table." (see the exportfs man page for more info).
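As a last check on the HOST side, it is worth asking the server what it thinks it is exporting. Either of the following should list /[work_directory]:

sudo exportfs -v
showmount -e localhost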
With all of that completed, you will now have a network-accessible drive /[work_directory] sitting on the HEAD node.
Before moving on to the GUEST nodes, what do you do with this directory? Again, based on other MPI and cross-network installation descriptions, a completely reasonable approach is to build ALL of your cluster-specific programs (MPI and calculation programs) into /[work_directory]. You will then mount this directory on the client machines and set the PATHs on these machines to include /[work_directory]. Every node then sees the same programs.
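As a purely hypothetical sketch (the openmpi directory name below is an assumed install location, not something the steps above created), the additions to each node's ~/.bashrc might look like the following:

export PATH=/[work_directory]/openmpi/bin:$PATH
export LD_LIBRARY_PATH=/[work_directory]/openmpi/lib:$LD_LIBRARY_PATH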
2. GUEST Node Installation
a. sudo apt-get install portmap nfs-common
This installs portmap and the NFS client tools (but not the server) and also "starts up" the client NFS services. You should be ready to mount network drives immediately.
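Before making directories and mounting anything, you can ask the HOST machine what it is exporting from the GUEST side, which is a good test that the two nodes can talk NFS at all:

showmount -e HOST_MACHINE
Export list for HOST_MACHINE:
/[work_directory] *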
b. sudo mkdir /[work_directory]
We make the directory that will have the network drive mounted (the same name as the directory on the HOST node). I've placed it in the same location in the directory hierarchy as it exists on the HEAD node (sitting right in /, not in /mnt or whatever).
For immediate access -> c1. sudo mount HOST_MACHINE:/[work_directory] /[work_directory]
Here, HOST_MACHINE may be an IP address (often something like 192.168.nn.nn or 10.1.nn.nn if you've set up your cluster on a switch; type ifconfig on the HOST machine if you don't know its actual address) or a domain name (head.campus.edu, for instance). It's that simple (if everything installed properly).
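To convince yourself the mount actually took, check that the share shows up and that you can write to it (nfs_test is just an arbitrary file name):

df -h | grep [work_directory]
touch /[work_directory]/nfs_test
ls -l /[work_directory]/nfs_test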
For long-term, automated access -> c2a. sudo pico /etc/fstab
If you want to make this connection permanent (and this is a very good idea on a cluster), you can modify /etc/fstab by adding the following line:
HOST_MACHINE:/[work_directory] /[work_directory] nfs rw 0 0
Then type
c2b. sudo mount /[work_directory]
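The rw 0 0 above is the bare minimum that works. If you want the mount to be more forgiving of a HOST that disappears mid-calculation, there are additional NFS mount options worth reading up on; the line below is one hedged variant (hard retries indefinitely if the server stops responding, intr lets you interrupt a hung request):

HOST_MACHINE:/[work_directory] /[work_directory] nfs rw,hard,intr 0 0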
As for additional information and discussion, there is quite a bit online already (as you might expect), with a long Ubuntu thread on the subject at ubuntuforums.org/showthread.php?t=249889. For a bit more technical information on the subject, check out www.troubleshooters.com/linux/nfs.htm.
www.ubuntu.com
en.wikipedia.org/wiki/Message_Passing_Interface
www.open-mpi.org
www.mcs.anl.gov/research/projects/mpi/mpich1
www.mcs.anl.gov/research/projects/mpich2
en.wikipedia.org/wiki/Network_file_system
www.debian.org/doc/manuals/apt-howto
en.wikipedia.org/wiki/Secure_Shell
www.somewhereville.com/?p=616
en.wikipedia.org/wiki/Remote_procedure_call
www.redhat.com/docs/manuals/enterprise/RHEL-5-manual/Deployment_Guide-en-US/s1-nfs-server-config-exports.html
www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-nfs-server-export.html
linux.about.com/library/cmd/blcmdl8_exportfs.htm
ubuntuforums.org/showthread.php?t=249889
www.troubleshooters.com/linux/nfs.htm