GROMACS 4.5.5, OpenMPI 1.6, And FFTW 3.3.2 Compilation Under Mountain Lion (OSX 10.8) With XCode (And A Little Help From Homebrew)

Minus a few glitches easily fixed with the right software, this build wasn't bad at all (and thanks to Adam Lindsay for the title catch).

Now that I'm sitting in front of a new Core i7 MacBook Pro, one of the first compilations I wanted finished for new projects was GROMACS 4.5.5. As my procedure for compiling GROMACS 3.3.3 had been a highly-traveled page, I wanted to provide a brief summary of my successful 4.5.5 compilation.

A Few Pieces Of Info

1. XCode

XCode used to be a disc-based install; it's now available as a free download from the App Store (a 1.57 GB download, so plan on doing something else while you wait).

2. Homebrew

Having Homebrew installed in Mountain Lion made the installation of FFTW easy and OpenMPI trivial once gfortran was (just as trivially) installed. Therefore, to make your life easier, I can't recommend a Homebrew installation enough. For additional install tweaks, I followed this page: gist.github.com/1860902
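
If you want to make sure Homebrew itself is healthy before starting, the two commands below are standard Homebrew maintenance commands (nothing specific to this build) and make a reasonable first check:

# refresh the formula list, then check for common installation problems
brew update
brew doctor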

Installation Procedure

1. Download gromacs 4.5.5

…and place it in your home folder (it will most likely end up in Downloads; drag it to your home folder for ease of building).

2. Extract into your home folder

…with a double-click, making ~/gromacs-4.5.5.

3. brew install fftw

With Homebrew installed, simply run the command below from a terminal window; it should produce output like the following:

brew install fftw

==> Downloading http://www.fftw.org/fftw-3.3.2.tar.gz
######################################################################## 100.0%
==> ./configure --enable-single --enable-sse --enable-shared --disable-debug 
--prefix=/usr/local/Cellar/fftw/3.3.2 --enable-threads --disable-fortran
==> make install
==> make clean
==> ./configure --enable-sse2 --enable-shared --disable-debug 
--prefix=/usr/local/Cellar/fftw/3.3.2 --enable-threads --disable-fortran
==> make install
==> make clean
==> ./configure --enable-long-double --enable-shared --disable-debug 
--prefix=/usr/local/Cellar/fftw/3.3.2 --enable-threads --disable-fortran
==> make install
/usr/local/Cellar/fftw/3.3.2: 34 files, 13M, built in 2.7 minutes
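
If you'd like to confirm that all three precisions made it onto your system, Homebrew links the Cellar install into /usr/local by default, so a quick listing should show the single- (libfftw3f), double- (libfftw3), and long-double (libfftw3l) libraries; think of this as an optional check rather than part of the procedure:

# the single-, double-, and long-double-precision libraries should all appear here
ls /usr/local/lib/libfftw3*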

4. brew install gfortran

If you don't install gfortran FIRST and try to install OpenMPI, you'll get the following error in Homebrew:

==> Downloading http://www.open-mpi.org/software/ompi/v1.6/downloads/openmpi-1.6.tar.bz2
######################################################################## 100.0%
Error: This formula requires a fortran compiler, but we could not find one by
looking at the FC environment variable or searching your PATH for `gfortran`.
Please take one of the following actions:

  - Decide to use the build of gfortran 4.2.x provided by Homebrew using
        `brew install gfortran`

  - Choose another Fortran compiler by setting the FC environment variable:
        export FC=/path/to/some/fortran/compiler
    Using an alternative compiler may produce more efficient code, but we will
    not be able to provide support for build errors.

So don't. Installing gfortran will produce the following:

brew install gfortran

==> Downloading http://r.research.att.com/tools/gcc-42-5666.3-darwin11.pkg
######################################################################## 100.0%
==> Installing gfortran 4.2.4 for XCode 4.2 (build 5666) or higher
==> Caveats
Brews that require a Fortran compiler should not use:
  depends_on 'gfortran'

The preferred method of declaring Fortran support is to use:
  def install
    ...
    ENV.fortran
    ...
  end

==> Summary
/usr/local/Cellar/gfortran/4.2.4-5666.3: 86 files, 72M, built in 39 seconds

5. brew install openmpi

OpenMPI is what allows GROMACS to use all of the cores on your machine and is not part of the default XCode install.

brew install openmpi

==> Downloading http://www.open-mpi.org/software/ompi/v1.6/downloads/openmpi-1.6.tar.bz2
Already downloaded: /Library/Caches/Homebrew/open-mpi-1.6.tar.bz2
==> Using Homebrew-provided fortran compiler.
This may be changed by setting the FC environment variable.

==> ./configure --prefix=/usr/local/Cellar/open-mpi/1.6 --enable-ipv6
==> make all
==> make install
/usr/local/Cellar/open-mpi/1.6: 674 files, 21M, built in 5.9 minutes
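
A quick, optional way to confirm the MPI wrapper compiler and launcher are on your PATH before moving on (these are standard Open MPI commands, nothing GROMACS-specific):

# both should resolve to /usr/local/bin
which mpicc mpirun
# the wrapper reports the underlying compiler it will call
mpicc --version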

6. cd gromacs-4.5.5

7. ./configure --enable-float --enable-mpi

You'll produce output like that found in: 2012august29_gromacs455_configure.txt

You'll also get two odd errors at the end of the ./configure run that do not affect the rest of the procedure:

./configure --enable-float --enable-mpi

...
./configure: line 29242: sort: No such file or directory
./configure: line 29239: sed: No such file or directory

So ignore them.

NOTE: If you've been going by my 3.3.3 procedure and used…

./configure --enable-mpi --enable-double

You'll get the following error when you try to run make:

Making all in include
Making all in .
make[2]: Nothing to be done for `all-am'.
Making all in types
make[2]: Nothing to be done for `all'.

...

/bin/sh ../../libtool --tag=CC   --mode=compile mpicc -DHAVE_CONFIG_H -I. -I../../src -I/usr/include/libxml2 -I../../include -DGMXLIBDIR=\"/usr/local/gromacs/share/top\"   -O3 -fomit-frame-pointer -finline-functions -Wall -Wno-unused -msse2 -funroll-all-loops -std=gnu99 -MT genborn_sse2_double.lo -MD -MP -MF .deps/genborn_sse2_double.Tpo -c -o genborn_sse2_double.lo genborn_sse2_double.c
 mpicc -DHAVE_CONFIG_H -I. -I../../src -I/usr/include/libxml2 -I../../include -DGMXLIBDIR=\"/usr/local/gromacs/share/top\" -O3 -fomit-frame-pointer -finline-functions -Wall -Wno-unused -msse2 -funroll-all-loops -std=gnu99 -MT genborn_sse2_double.lo -MD -MP -MF .deps/genborn_sse2_double.Tpo -c genborn_sse2_double.c  -fno-common -DPIC -o .libs/genborn_sse2_double.o
genborn_sse2_double.c:931: internal compiler error: Segmentation fault: 11
Please submit a full bug report,
with preprocessed source if appropriate.
See  for instructions.
make[3]: *** [genborn_sse2_double.lo] Error 1
make[2]: *** [all-recursive] Error 1
make[1]: *** [all] Error 2
make: *** [all-recursive] Error 1

So don't do that, either. The proper flag is --enable-float.
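
One hedged aside in case ./configure instead stops with a complaint about not finding FFTW: because Homebrew links FFTW into /usr/local, pointing the preprocessor and linker there before re-running configure is usually all that's needed. These are generic autotools environment variables, not GROMACS-specific flags, and you can skip this entirely if configure ran cleanly:

# only needed if configure cannot locate the Homebrew-installed FFTW
export CPPFLAGS="-I/usr/local/include"
export LDFLAGS="-L/usr/local/lib"
./configure --enable-float --enable-mpi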

8. make

This will produce the output available for download at: 2012august29_gromacs455_make.txt

9. make install

This will produce the output available for download at: 2012august29_gromacs455_make_install.txt

10. make links

This will produce the short piece of output reproduced below.

cd /usr/local/gromacs/bin && programs=`ls` && cd /usr/local/bin && \
	for i in $programs; do \
	   (test ! -f $i && ln -s /usr/local/gromacs/bin/$i . ; exit 0); \
	done

And with that, you should be able to run all programs from a terminal window.
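
As a final sanity check (assuming you kept the default install prefix and did not set a program suffix), the binaries should now resolve from /usr/local/bin:

# confirm the symlinks created by make links are on the PATH
which grompp mdrun
# print the built-in help as a minimal "does it run" test
mdrun -h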

Amber 11 And AmberTools 1.5 In Ubuntu 10.04 LTS (And Related, Including A How-To For EOL 8.10)

Having successfully navigated serial and parallel Amber10 installs under Ubuntu 8.10, I am pleased to report that the process for Amber11 with OpenMPI (from apt-get; one doesn't have to build it from scratch) under Ubuntu 10.04 LTS is seemingly much easier (and I'm recording it here so I don't forget). There is a bit of persnicketiness to the order of the serial and parallel installs that must be kept track of (I'm building in serial-to-parallel order), but the process is otherwise straightforward.

For organizational purposes, I'm building amber11 in my $HOME directory. This removes some of the PATH issues with sudo-ing aspects of the install (and can be moved into another directory after the build is complete).

1. apt-get Installs

The search for dependent programs and libraries is a long and involved one given how many programs I have installed. Therefore, instead of trying to find all of the amber-dependent installs needed for a successful build, I'm simply providing the list of everything I have on the test machine. As hard drives are cheap and Ubuntu will warn of conflicts, I recommend simply installing the list below and accepting the ~100 MB hit rather than hunting for the smallest possible apt-get set (yes, some of these are obviously not needed).

sudo apt-get install build-essential cmake doxygen freeglut3-dev g++-multilib gcc-multilib gettext gnuplot ia32-libs lib32asound2 lib32gcc1 lib32gcc1-dbg lib32gfortran3 lib32gomp1 lib32mudflap0 lib32ncurses5 lib32nss-mdns lib32z1 libavdevice52 libc6-dev-i386 libc6-i386 libfreeimage-dev libglew1.5-dev libopenal1 libopenexr-dev libpng12-dev libqt4-dev libssl-dev libstdc++6-4.3-dbg libstdc++6-4.3-dev libstdc++6-4.3-doc libxi-dev libxml-simple-perl libxmu-dev mercurial nfs-common nfs-kernel-server portmap python2.6-dev rpm ssh

The above said, there are some obvious must-have installs (according to the "official" Ubuntu amber11 install summary at ambermd.org/ubuntu.html). You could try working with only these first if you're in a diagnostic mood today:

sudo apt-get install bison csh flex fort77 g++ gcc gfortran libbz2-dev libnetcdf-dev libopenmpi-dev libxext-dev libxt-dev openmpi-bin patch tcsh xorg-dev zlib1g-dev

With that, we move onto the AmberTools 1.5 install.

2. AmberTools 1.5 (Serial)

The AmberTools build process sets up the PATH specifications for both AmberTools and Amber, then walks you through patching and a successful build.

user@machine:~$ tar xjf AmberTools-1.5.tar.bz2 
user@machine:~$ cd amber11/
user@machine:~/amber11$ echo "export AMBERHOME=$PWD" >> ~/.bashrc
user@machine:~/amber11$ echo 'export PATH=$PATH:$AMBERHOME/bin' >> ~/.bashrc
user@machine:~/amber11$ source ~/.bashrc
user@machine:~/amber11$ wget http://ambermd.org/bugfixes/AmberTools/1.5/bugfix.all
user@machine:~/amber11$ patch -p0 < bugfix.all
user@machine:~/amber11$ rm bugfix.all
user@machine:~/amber11$ cd AmberTools/src/
user@machine:~/amber11/AmberTools/src$ ./configure gnu
user@machine:~/amber11/AmberTools/src$ make install
user@machine:~/amber11/AmberTools/src$ cd
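
Before moving on, it doesn't hurt to confirm that the AmberTools executables actually landed in $AMBERHOME/bin (tleap, ptraj, and antechamber are among the binaries I'd expect to see; this is just a visual check, not part of the official procedure):

# the AmberTools executables should all be listed here
ls $AMBERHOME/bin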

3. Amber 11 (Serial Install)

For the Amber build, not building the serial version first will produce the following error (which you may or may not be Googling at this very moment):

Warning: Deleted feature: PAUSE statement at (1)
cpp -traditional -P  -DBINTRAJ -DMPI    svbksb.f > _svbksb.f
mpif90 -c -O3 -mtune=generic -ffree-form   -o svbksb.o _svbksb.f
cpp -traditional -P  -DBINTRAJ -DMPI    pythag.f > _pythag.f
mpif90 -c -O3 -mtune=generic -ffree-form   -o pythag.o _pythag.f
Error: a serial version of libFpbsa.a must be built before parallel build.
make[2]: *** [libFpbsa.parallel] Error 2
make[2]: Leaving directory `/home/genomebio/amber11/AmberTools/src/pbsa'
make[1]: *** [libpbsa] Error 2
make[1]: Leaving directory `/home/genomebio/amber11/src/sander'
make: *** [parallel] Error 2

The "gnu" argument is also important, as there appears to be some kind of (Fortran-specific) formatting issue with some files in the non-gnu build attempt, producing the following error if you just blindly run a bare ./configure:

Error: Unclassifiable statement at (1)
constants.f:39.1:

double precision, parameter :: two       = 2.0d0                        
 1
Error: Non-numeric character in statement label at (1)
constants.f:39.1:

double precision, parameter :: two       = 2.0d0                        
 1
Error: Unclassifiable statement at (1)
constants.f:40.1:

double precision, parameter :: three     = 3.0d0                        
 1
Error: Non-numeric character in statement label at (1)
Fatal Error: Error count reached limit of 25.
make[1]: *** [constants.o] Error 1
make[1]: Leaving directory `/home/user/amber11/src/sander'
make: *** [parallel] Error 2

With that, the serial build is below, including bug fixes.

user@machine:~$ tar xfj Amber11.tar.bz2
user@machine:~$ cd $AMBERHOME
user@machine:~/amber11$ wget http://ambermd.org/bugfixes/11.0/bugfix.all
user@machine:~/amber11$ wget http://ambermd.org/bugfixes/11.0/apply_bugfix.x
user@machine:~/amber11$ chmod +x ./apply_bugfix.x
user@machine:~/amber11$ ./apply_bugfix.x bugfix.all
user@machine:~/amber11$ cd AmberTools/src/
user@machine:~/amber11/AmberTools/src$ ./configure gnu
user@machine:~/amber11/AmberTools/src$ cd $AMBERHOME
user@machine:~/amber11$ ./AT15_Amber11.py 
user@machine:~/amber11$ cd src/
user@machine:~/amber11/src$ make serial
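
If the serial build finished cleanly, sander should now be sitting in $AMBERHOME/bin; a quick look (purely for reassurance before starting the parallel build):

# the serial executables install into $AMBERHOME/bin
ls -l $AMBERHOME/bin/sander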

4. Amber 11 (Parallel)

Hopefully the serial build ran without problems. The parallel install works just as simply, provided you run the process in the order below. The key steps are the first "make clean," the new ./configure, the re-run of ./AT15_Amber11.py, and the second "make clean."

user@machine:~/amber11/src$ cd $AMBERHOME
user@machine:~/amber11$ cd AmberTools/src/
user@machine:~/amber11/AmberTools/src$ make clean
user@machine:~/amber11/AmberTools/src$ ./configure -mpi gnu
user@machine:~/amber11/AmberTools/src$ cd $AMBERHOME
user@machine:~/amber11$ ./AT15_Amber11.py 
user@machine:~/amber11$ cd src/
user@machine:~/amber11/src$ make clean
user@machine:~/amber11/src$ make parallel
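
Once make parallel finishes, the MPI executable is sander.MPI. The command below is only a sketch of how you'd launch it on four cores; mdin, prmtop, and inpcrd are placeholder names for your own input, topology, and coordinate files:

# hypothetical four-core run; substitute your own input, topology, and coordinate files
mpirun -np 4 $AMBERHOME/bin/sander.MPI -O -i mdin -p prmtop -c inpcrd -o mdout -r restrt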

5. Amber 11 (Tests)

Finally, testing the install. Nothing specific needs to be done as far as the code is concerned; simply run the tests.

user@machine:~/amber11/src$ cd ..
user@machine:~/amber11$ cd test/
user@machine:~/amber11/test$ make -f Makefile
user@machine:~/amber11/test$ 

From the out-of-the-box installation above, my test results complete as follows:

365 file comparisons passed
15 file comparisons failed
0 tests experienced errors
Test log file saved as logs/test_amber_serial/2011-07-14_11-19-47.log
Test diffs file saved as logs/test_amber_serial/2011-07-14_11-19-47.diff

The failed tests include those already mentioned by the Amber developers to fail. This list is provided at the end of the AT15_Amber11.py results:

NOTE: Because PBSA has changed since Amber 11 was released, some
tests are known to fail and others are known to quit in error. These
can be safely ignored.

Tests that error: Tests in $AMBERHOME/test/sander_pbsa_frc
   Run.argasp.min    Run.dadt.min      Run.dgdc.min
   Run.lysasp.min    Run.polyALA.min   Run.polyAT.min

Tests that produce possible FAILUREs:
   cd sander_pbsa_ipb2   && ./Run.110D.min
   cd sander_pbsa_lpb    && ./Run.lsolver.min (only some of them fail here)
   cd sander_pbsa_tsr    && ./Run.tsrb.min
   cd sander_pbsa_decres && ./Run.pbsa_decres
   mm_pbsa.pl tests 02, 03, and 05
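
The test run above only exercises the serial executables. If you also want to exercise the parallel build, the usual Amber convention is to set the DO_PARALLEL environment variable to your mpirun command and re-run the test suite; the exact make target below is my assumption, so check $AMBERHOME/test/Makefile if it isn't there:

# assumed parallel-test invocation; confirm the target name in the test Makefile
export DO_PARALLEL='mpirun -np 2'
cd $AMBERHOME/test
make test.parallel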

6. Quick Summary

For ease of copy-and-paste-ing, the command list is below:

apt-get

sudo apt-get install build-essential cmake doxygen freeglut3-dev g++-multilib gcc-multilib gettext gnuplot ia32-libs lib32asound2 lib32gcc1 lib32gcc1-dbg lib32gfortran3 lib32gomp1 lib32mudflap0 lib32ncurses5 lib32nss-mdns lib32z1 libavdevice52 libc6-dev-i386 libc6-i386 libfreeimage-dev libglew1.5-dev libopenal1 libopenexr-dev libpng12-dev libqt4-dev libssl-dev libstdc++6-4.3-dbg libstdc++6-4.3-dev libstdc++6-4.3-doc libxi-dev libxml-simple-perl libxmu-dev mercurial nfs-common nfs-kernel-server portmap python2.6-dev rpm ssh

sudo apt-get install bison csh flex fort77 g++ gcc gfortran libbz2-dev libnetcdf-dev libopenmpi-dev libxext-dev libxt-dev openmpi-bin patch tcsh xorg-dev zlib1g-dev

AmberTools

tar xjf AmberTools-1.5.tar.bz2 
cd amber11/
echo "export AMBERHOME=$PWD" >> ~/.bashrc
echo 'export PATH=$PATH:$AMBERHOME/bin' >> ~/.bashrc
source ~/.bashrc
wget http://ambermd.org/bugfixes/AmberTools/1.5/bugfix.all
patch -p0 < bugfix.all
rm bugfix.all
cd AmberTools/src/
./configure gnu
make install
cd

Amber 11 (Serial)

tar xfj Amber11.tar.bz2
cd $AMBERHOME
wget http://ambermd.org/bugfixes/11.0/bugfix.all
wget http://ambermd.org/bugfixes/11.0/apply_bugfix.x
chmod +x ./apply_bugfix.x
./apply_bugfix.x bugfix.all
cd AmberTools/src/
./configure gnu
cd $AMBERHOME
./AT15_Amber11.py 
cd src/
make serial

Amber 11 (Parallel)

cd $AMBERHOME
cd AmberTools/src/
make clean
./configure -mpi gnu
cd $AMBERHOME
./AT15_Amber11.py 
cd src/
make clean
make parallel

Amber Tests

cd ..
cd test/
make -f Makefile

7. And Furthermore…

I tried the above on an old linux box running Intrepid Ibex (8.10), which counts as an End-Of-Life (Obsolete) version. Running all of the apt-get installs will work despite 8.10 no longer being in the standard package locations, but you have to make the following addition to /etc/apt/sources.list.

sudo pico /etc/apt/sources.list

And copy-and-paste the following (this all taken from help.ubuntu.com/community/EOLUpgrades/Intrepid):

## EOL upgrade sources.list
# Required
deb http://old-releases.ubuntu.com/ubuntu/ intrepid main restricted universe multiverse
deb http://old-releases.ubuntu.com/ubuntu/ intrepid-updates main restricted universe multiverse
deb http://old-releases.ubuntu.com/ubuntu/ intrepid-security main restricted universe multiverse
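
After saving the modified sources.list, remember to refresh the package index so apt-get can actually see the old-releases archive:

sudo apt-get update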

Installing And Mounting Network Drives Using NFS In Ubuntu (And Generally)

This is another piece in an Ubuntu puzzle that, when assembled, will describe how to set up an MPI (message passing interface) computer cluster for running parallel calculations (upcoming).  As a brief explanation of what's going on, many of the MPI (OpenMPI, MPICH, MPICH2) set-up procedures you may stumble across online describe how to use the network file system (NFS) protocol to set up one directory on a host node (head node/server node/master node/whatever) of your cluster so that, by mounting a directory on a guest node (client node/slave node/whatever) to this network-accessible drive, the head and guest nodes all see the same work directory and executables (both MPI and your program of choice).  There are more clever ways to set the cluster up that will likely run at a slightly faster pace than NFS may allow, but we'll ignore that at the moment.  The install procedure below is Ubuntu-specific only in the apt-get stage (NFS support is not part of the default installation).  After all of the components are installed (post-apt-get), the setup should be Linux-universal.

Below is all I need to do in Ubuntu to do what I need it to do.  I'll be dividing the installation procedure into HOST and GUEST sections for organizational purposes.

1. HOST Node Installation

a. sudo apt-get install nfs-kernel-server nfs-common portmap

My apt-get list is becoming gigantic as part of the cluster work, but I'm only focusing on NFS right now (if you intend to run any MPI-based code, you'll also need to include SSH), so we only need to deal with these three packages.  They will also pull in libevent1, libgssglue1, libnfsidmap2, and librpcsecgss3, but that's the beauty of letting apt-get do the dirty work (see this previous post if you find yourself installing programs in Ubuntu while [shiver] not online).  You'll see plenty of output and, hopefully, no errors.

b. sudo dpkg-reconfigure portmap

Installing packages with NFS in the title makes sense.  So what's the deal with portmap?  NFS uses remote procedure calls (RPCs) for communication.  Portmap is a server/service that maps these RPCs to their proper services (acting as a translator between RPC numbers and DARPA port numbers. Yup.), thereby directing cluster traffic.

The portmap configuration file (/etc/default/portmap) looks as below:

# Portmap configuration file
#
# Note: if you manually edit this configuration file,
# portmap configuration scripts will avoid modifying it
# (for example, by running 'dpkg-reconfigure portmap').

# If you want portmap to listen only to the loopback
# interface, uncomment the following line (it will be
# uncommented automatically if you configure this
# through debconf).
#OPTIONS="-i 127.0.0.1"

For the purposes of the cluster work to be done in upcoming posts, you do NOT want to uncomment the OPTIONS line (which would bind portmap to the loopback interface only), not that you would start randomly uncommenting lines in the first place.

c. sudo /etc/init.d/portmap restart

If you make changes to the /etc/default/portmap configuration file, reconfigure portmap, etc., you'll need to restart portmap for the changes to be implemented.  It is recommended that you run this restart upon installation regardless (especially having run dpkg-reconfigure portmap above).

d. sudo mkdir /[work_directory]

This makes the directory to be shared among all of the other cluster machines.  Call it what you will.

e. sudo chmod a+wrx /[work_directory]

We now provide carte blanche to this directory so anyone can read, write, and execute programs in this directory.

f. sudo pico /etc/exports

The last file modification step on the HOST node exports /[work_directory] as an NFS-accessible directory to the GUEST machines and keeps it accessible (including across reboots) until you change the settings or remove the directory.

# /etc/exports: the access control list for filesystems which may be exported
#               to NFS clients.  See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes       hostname1(rw,sync) hostname2(ro,sync)
#
# Example for NFSv4:
# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt)
# /srv/nfs4/homes  gss/krb5i(rw,sync)
#
/[work_directory] *(rw,sync)

To translate:  * = open the export to all clients; rw = read-write privileges for the specified clients; sync = commit all changes to disk before the server responds to a request that modifies the disk.  It is also worded as "read/write and all transfers to disk are committed to the disk before the write request by the client is completed" and "this option does not allow the server to reply to requests before the changes made by the request are written to the disk," which may or may not help to enlighten.  Basically, it makes sure the NEXT modification to a file doesn't occur until the CURRENT operation on that file is complete.

This only scratches the surface of all things /etc/exports but is enough for my purposes (anyone can mount the drive and read/write.  If your machine is online, let SSH take care of the rest of it).
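
If "anyone can mount it" is more generous than you'd like, /etc/exports also accepts a subnet in place of the *; the 192.168.1.0/24 below is only an example range, so substitute your own:

# example: restrict the export to a single private subnet instead of all clients
/[work_directory] 192.168.1.0/24(rw,sync)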

g. sudo /etc/init.d/nfs-kernel-server restart

Having made the changes to /etc/exports, we restart the NFS server to commit those changes to the operating system.

h. sudo exportfs -a

If you RTFM, you know "the exportfs command is used to maintain the current table of exported file systems for NFS. This list is kept in a separate file named /var/lib/nfs/xtab which is read by mountd when a remote host requests access to mount a file tree, and parts of the list which are active are kept in the kernel's export table." (see the exportfs man page, or the links at the end of this post, for more info).

With all of that completed, you will now have a network-accessible drive /[work_directory] sitting on the HEAD node.
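
Two quick ways to confirm the export took, using tools that come with the packages installed above:

# list the currently exported directories and their options
sudo exportfs -v
# ask the local NFS server what it is advertising
showmount -e localhost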

Before moving on to the GUEST nodes, now what?  Again, based on other MPI and cross-network installation descriptions, a completely reasonable thing to do is build ALL of your cluster-specific programs (MPI and calculation programs) into /[work_directory].  You will then mount this directory on the client machines and set the PATHs on these machines to include /[work_directory]. Everyone then sees the same programs.

2. GUEST Node Installation

a. sudo apt-get install portmap nfs-common

This installs portmap and the NFS command files (but not the server) and also "starts up" the client NFS tools.  You should be ready to mount network drives immediately.

b. sudo mkdir /[work_directory]

We make the directory that will have the network drive mounted (the same name as the directory on the HOST node).  I've placed it in the same location in the directory hierarchy as it exists on the HEAD node (sitting right in /, not in /mnt or whatever).

For immediate access -> c1. sudo mount HOST_MACHINE:/[work_directory] /[work_directory]

Here, HOST_MACHINE may be an IP address (often something like 192.168.nn.nn or 10.1.nn.nn if you've set up your cluster on a switch; type ifconfig on the HOST machine if you don't know its actual address) or a domain name (head.campus.edu, for instance).  It's that simple (if everything installed properly).
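
To confirm the mount from the GUEST side, either of the following will do (again, just a sanity check):

# the NFS share should appear with the HOST machine as its source
df -h /[work_directory]
mount | grep nfs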

For long-term, automated access -> c2a. sudo pico /etc/fstab

If you want to make this connection permanent (and this is a very good idea on a cluster), you can modify /etc/fstab by adding the following line:

HOST_MACHINE:/[work_directory] /[work_directory] nfs   rw   0   0

Then type

c2b. sudo mount /[work_directory]

As for additional information and discussion, there is quite a bit online already (as you might expect), with a long Ubuntu thread on the subject at ubuntuforums.org/showthread.php?t=249889.  For a bit more technical information on the subject, check out www.troubleshooters.com/linux/nfs.htm.

www.ubuntu.com
en.wikipedia.org/wiki/Message_Passing_Interface
www.open-mpi.org
www.mcs.anl.gov/research/projects/mpi/mpich1
www.mcs.anl.gov/research/projects/mpich2
en.wikipedia.org/wiki/Network_file_system
www.debian.org/doc/manuals/apt-howto
en.wikipedia.org/wiki/Secure_Shell
www.somewhereville.com/?p=616
en.wikipedia.org/wiki/Remote_procedure_call
www.redhat.com/docs/manuals/enterprise/RHEL-5-manual/Deployment_Guide-en-US/s1-nfs-server-config-exports.html
www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-nfs-server-export.html
linux.about.com/library/cmd/blcmdl8_exportfs.htm
ubuntuforums.org/showthread.php?t=249889
www.troubleshooters.com/linux/nfs.htm