Running (Only) A Single-Point Energy Calculation In Crystal06/09, Proper Input Format For Long-Range Dispersion Contributions In Crystal09, And Removing The MPICH2 Content From The Output File In Pcrystal

Now that I'm enjoying the benefits of dispersion-corrected solid-state density functional theory in Crystal09 (and a proper MPICH2 implementation for infrared intensity calculations, although this is now a problem for reasons to be addressed in an upcoming post), three issues in recent calculations caused me to think hard enough about keyword formats and job runs that I've opted to post briefly about what to do, in case google and bing are your preferred methods of manual searching.

1. How To Run Only A Single-Point Energy Calculation In Crystal06/Crystal09

This had never come up before and, by the time I needed to find an input file to see what to do, the first google search turned up Civalleri's Total Energy Calculation page, which currently has broken links to its .zip files. There is quite a bit about the different geometry optimization approaches in the manual, but a search for "single-point" provides no information about what to do for only single-point energy calculations.

The solution, obvious in hindsight, is simply to not include a geometry optimization section in the input file. What would otherwise be the following (with representative geometry optimization keywords between [COORDINATES] and [BASIS SETS])…

[COORDINATES]
OPTGEOM
TOLDEG
0.000005
TOLDEX
0.000020
END
END
[BASIS SETS]

becomes…

[COORDINATES]
[BASIS SETS]

One problem solved simply by not including any optimization parameters (again, it makes sense in hindsight and is now google-able).

2. Proper GRIMME Input Format For Long-Range Dispersion Contributions In Crystal09

This is another example where one's first efforts in translating the manual into calculations may lead to considerable confusion until the proper format is finally identified (by which time you've run many pruned-down input tests). Copy the manual's GRIMME example a bit too literally and you end up with something like the following:

GRIMME
1.05 20. 25.
1.05 20. 25. s6 (scaling factor) d (steepness) Rcut (cutoff radius)
5
1  0.14 1.001 Hydrogen Conventional Atomic number , C6 , Rvdw
6  1.75 1.452 Carbon Conventional Atomic number , C6 , Rvdw
7  1.23 1.397 Nitrogen Conventional Atomic number , C6 , Rvdw
8  0.70 1.342 Oxygen Conventional Atomic number , C6 , Rvdw
17 5.07 1.639 Chlorine Conventional Atomic number , C6 ,'Rvdw

I'm not even sure where that final ,'Rvdw comes from. Run with something like the above and your .out file may terminate with the following error (or something similar)…

rank 7 in job 8  korterquad_51438   caused collective abort of all ranks
  exit status of rank 7: killed by signal 9

And any ERROR.peN file that actually has content will show the following, clearly pointing to a GRIMME-specific error…

 ERROR **** GRIMME_INPUT **** ELEMENT NOT DEFINED:           1

The problem is the additional annotation within the manual pages for the GRIMME keyword, which requires pruning (or, at least, some identifier to show what is and what is not needed). The GRIMME section above is properly provided in the INPUT file as…

GRIMME
1.05 20. 25.
5
1  0.14 1.001
6  1.75 1.452
7  1.23 1.397
8  0.70 1.342
17 5.07 1.639

Where (see page 88 of the Crystal09 manual)…

GRIMME <- keyword is called
1.05 20. 25. <- scaling factor, steepness, cutoff distance
5 <- number of elements in the list (not the total number of atoms)
1  0.14 1.001 <- atomic number, dispersion coefficient, van der Waals radius
...

When all is properly run, the bottom of your output file will look something like the following:

 CYC  43 ETOT(AU) -5.784662098123E+03 DETOT  1.18E-11 tst  8.17E-15 PX  6.73E-08

 == SCF ENDED - CONVERGENCE ON ENERGY      E(AU) -5.7846620981229E+03 CYCLES  43

 ENERGY EXPRESSION=HARTREE+FOCK EXCH*0.20000+(BECKE  EXCH)*0.80000+LYP    CORR

 TOTAL ENERGY(DFT)(AU)( 43) -5.7846620981229E+03 DE 1.2E-11 tester 8.2E-15
 TTTTTTTTTTTTTTTTTTTTTTTTTTTTTT EDFT        TELAPSE     4705.82 TCPU     4651.41

 *******************************************************************************

 GRIMME DISPERSION ENERGY CORRECTION

 SCALE FACTOR (S6):     1.0500

 GRIMME DISPERSION ENERGY (AU) -1.9723347118951E-01
 TOTAL ENERGY + DISP (AU) -5.7848593315941E+03

 *******************************************************************************

The Crystal09 manual refers you to Table 1 of the Stefan Grimme paper, "Semiempirical GGA-type density functional constructed with a long-range dispersion correction" (Journal of Computational Chemistry, Volume 27, Issue 15, Pages 1787-1799), which I've put together into the proper format below (a small helper for assembling the GRIMME block from this table appears after it). Be sure to (1) delete the element symbols in parentheses ( -> get rid of the (H) <- ), (2) remove the rows for atoms you do not need, (3) change the "number of elements" line to match your structure, and (4) get and cite the Grimme paper so you can check the C6 parameters and van der Waals radii for yourself (you'll be the right nitwit if I mis-copied something and you ran with it (although I trust my input), and you should have the reference regardless).

( H)   1   0.14 1.001
(Li)   3   1.61 0.825
(Na)  11   5.71 1.144
( K)  19  10.80 1.485
(Rb)  37  24.67 1.628
(Be)   4   1.61 1.408
(Mg)  12   5.71 1.364
(Ca)  20  10.80 1.474
(Sr)  38  24.67 1.606
( B)   5   3.13 1.485
(Al)  13  10.79 1.639
(Ga)  31  16.99 1.650
(In)  49  37.32 1.672
( C)   6   1.75 1.452
(Si)  14   9.23 1.716
(Ge)  32  17.10 1.727
(Sn)  50  38.71 1.804
( N)   7   1.23 1.397
( P)  15   7.84 1.705
(As)  33  16.37 1.760
(Sb)  51  38.44 1.881
( O)   8   0.70 1.342
( S)  16   5.57 1.683
(Se)  34  12.64 1.771
(Te)  52  31.74 1.892
( F)   9   0.75 1.287
(Cl)  17   5.07 1.639
(Br)  35  12.47 1.749
( I)  53  31.50 1.892
(He)   2   0.08 1.012
(Ne)  10   0.63 1.243
(Ar)  18   4.61 1.595
(Kr)  36  12.01 1.727
(Xe)  54  29.99 1.881
Y-Cd      24.67 1.639
Sc-Zn     10.80 1.562

Note that the d-block parameters are the same for every element within a given row (hence the ranges instead of individual atomic numbers).
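If you find yourself doing this for several different structures, the pruning can be scripted. The sketch below is just a convenience on my part, not anything shipped with Crystal09: it assumes you have saved a cleaned copy of the table above as grimme_table.txt (a hypothetical file name; one "atomic number  C6  Rvdw" line per element, parenthesized symbols stripped), and it prints a ready-to-paste GRIMME block for H, C, N, O, and Cl using the same s6/d/Rcut line as the B3LYP example above. Edit the list of atomic numbers in the split() call for your own structure.

awk 'BEGIN { split("1 6 7 8 17", want); for (i in want) keep[want[i]] = 1 }  # atomic numbers to keep
     keep[$1] { lines[++n] = $0 }                                            # collect matching table rows
     END { print "GRIMME"; print "1.05 20. 25."; print n;                    # keyword, s6/d/Rcut, element count
           for (i = 1; i <= n; i++) print lines[i] }' grimme_table.txt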

3. Removing The MPICH2 Content From The Output File In Pcrystal(/09)

This final issue does not occur in Pcrystal(/06) but does in Pcrystal(/09), with the reason being (I assume) the new use of MPICH2 in Pcrystal(/09) instead of MPICH in Pcrystal(/06). The problem comes from running the following command at the terminal window under MPICH2:

mpiexec -machinefile machine -np N /path/to/Pcrystal &>FILENAME.out &

Embedded within the FILENAME.out file will be all flavors of MPI-specific output, such as the following (errors in this case, but it happens with proper output as well):

application called MPI_Abort(MPI_COMM_WORLD, 1) - process 4
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 7
rank 7 in job 9  korterquad_51438   caused collective abort of all ranks
 exit status of rank 7: return code 1 
rank 4 in job 9  korterquad_51438   caused collective abort of all ranks
 exit status of rank 4: killed by signal 9 

or…

mpiexec_machine (handle_stdin_input 1089): stdin problem; if pgm is run in background...
mpiexec_machine (handle_stdin_input 1090):     e.g.: mpiexec -n 4 a.out < /dev/null &

The solution is to separate the mpiexec housekeeping from the Pcrystal output, here by pointing mpiexec's standard input at /dev/null (exactly what the error message above suggests for jobs run in the background).

mpiexec -machinefile machine -np N /path/to/Pcrystal < /dev/null &>FILENAME.out &

This removes the stray MPI-specific content from FILENAME.out.
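If you also want any remaining MPI runtime messages (the rank/abort lines that show up when something does go wrong) kept out of FILENAME.out, one further variant is to split stdout from stderr. This is a sketch, assuming the MPI launcher writes those messages to stderr while Pcrystal writes to stdout, and with FILENAME.mpi.err as a made-up name:

mpiexec -machinefile machine -np N /path/to/Pcrystal < /dev/null > FILENAME.out 2> FILENAME.mpi.err &

FILENAME.out then holds only the Pcrystal output, and anything MPI-related lands in FILENAME.mpi.err for later inspection.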

Installing And Mounting Network Drives Using NFS In Ubuntu (And Generally)

This is another piece in an Ubuntu puzzle that, when assembled, will describe how to set up an MPI (message passing interface) computer cluster for running parallel calculations (upcoming).  As a brief explanation of what's going on: many of the MPI (OpenMPI, MPICH, MPICH2) set-up procedures you may stumble across online use the network file system (NFS) protocol to share one directory from a host node (head node/server node/master node/whatever) of your cluster.  By mounting a directory on each guest node (client node/slave node/whatever) to this network-accessible drive, the head and guest nodes all see the same work directory and executables (both MPI and your program of choice).  There are more clever ways to set the cluster up that will likely run at a slightly faster pace than NFS may allow, but we'll ignore that at the moment.  The install procedure below is Ubuntu-specific only in the apt-get stage (NFS support is not part of the default installation).  After all of the components are installed (post-apt-get), the setup should be Linux-universal.

A note on formatting: the lettered steps (a., b., c., …) are commands you will type in the Terminal, and the standalone blocks are text you will either see or type into files (using pico, my preference).  Everything else is my ramblings.

Below is everything I needed to do in Ubuntu to get NFS working.  I'll be dividing the installation procedure into HOST and GUEST sections for organizational purposes.

1. HOST Node Installation

a. sudo apt-get install nfs-kernel-server nfs-common portmap

My apt-get list is becoming gigantic as part of the cluster work, but I'm only focusing on NFS right now (if you intend on running any MPI-based code, you will also need SSH), so we only need to deal with these three packages.  They will also pull in libevent1, libgssglue1, libnfsidmap2, and librpcsecgss3, but that's the beauty of letting apt-get do the dirty work (see this previous post if you find yourself installing programs in Ubuntu while [shiver] not online).  You'll see plenty of output and, hopefully, no errors.

b. sudo dpkg-reconfigure portmap

Installing packages with NFS in the title makes sense.  What's the deal with portmap?  NFS uses remote procedure calls (RPCs) for communication.  Portmap is a server/service that maps these RPCs to their proper services (acting as a translator between RPC program numbers and DARPA port numbers, yup), thereby directing cluster traffic.

The portmap configuration file (/etc/default/portmap) looks as below:

# Portmap configuration file
#
# Note: if you manually edit this configuration file,
# portmap configuration scripts will avoid modifying it
# (for example, by running 'dpkg-reconfigure portmap').

# If you want portmap to listen only to the loopback
# interface, uncomment the following line (it will be
# uncommented automatically if you configure this
# through debconf).
#OPTIONS="-i 127.0.0.1"

For the purposes of the cluster work to be done in upcoming posts, you do NOT want to uncomment the OPTIONS line (which would bind portmap to the loopback interface only), not that you would start randomly uncommenting lines in the first place.

c. sudo /etc/init.d/portmap restart

If you make changes to the /etc/default/portmap configuration file, reconfigure portmap, etc., you'll need to restart portmap for the changes to be implemented.  It is recommended that you run this restart upon installation regardless (especially having run dpkg-reconfigure portmap above).
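If you want to confirm portmap is actually up and registering services, rpcinfo provides a quick sanity check (purely a diagnostic suggestion on my part; rpcinfo should already be on the system after the installs above, or is an apt-get away):

rpcinfo -p

You should at least see the portmapper itself listed; once the NFS server is running (step g below), mountd and nfs should show up as well.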

d. sudo mkdir /[work_directory]

This makes the directory to be shared among all of the other cluster machines.  Call it what you will.

e. sudo chmod a+wrx /[work_directory]

We now provide carte blanche on this directory so anyone can read, write, and execute programs in it.

f. sudo pico /etc/exports

The last file modification step on the HOST node exports /[work_directory] as an NFS-accessible directory to the GUEST machines and keeps it that way until you change the settings (or the directory), preserving the accessibility upon reboot.

# /etc/exports: the access control list for filesystems which may be exported
#               to NFS clients.  See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes       hostname1(rw,sync) hostname2(ro,sync)
#
# Example for NFSv4:
# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt)
# /srv/nfs4/homes  gss/krb5i(rw,sync)
#
/[work_directory] *(rw,sync)

To translate:  * = open up to all clients; rw = read-write privileges for the specified clients; sync = commit all changes to the disk before the server responds to a request making a change to the disk.  It is also worded as "read/write and all transfers to disk are committed to the disk before the write request by the client is completed" and "this option does not allow the server to reply to requests before the changes made by the request are written to the disk," which may or may not help to enlighten.  Basically, it makes sure the NEXT modification to some file doesn't occur until the CURRENT operation on that file is complete.

This only scratches the surface of all things /etc/exports but is enough for my purposes (anyone can mount the drive and read/write; if your machine is online, let SSH take care of the rest of it).
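If you do want to be slightly less generous than *, exports also accepts subnet notation.  A sketch, assuming your cluster lives on 192.168.1.0/24 (substitute whatever ifconfig reports for your machines):

/[work_directory] 192.168.1.0/24(rw,sync,no_subtree_check)

The no_subtree_check option disables subtree checking (exportfs will otherwise complain that it is assuming a default); the * form above works fine for a cluster sitting behind its own switch.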

g. sudo /etc/init.d/nfs-kernel-server restart

Having made the changes to /etc/exports, we restart the NFS server to commit those changes to the operating system.

h. sudo exportfs -a

If you RTFM, you know "the exportfs command is used to maintain the current table of exported file systems for NFS. This list is kept in a separate file named /var/lib/nfs/xtab which is read by mountd when a remote host requests access to mount a file tree, and parts of the list which are active are kept in the kernel's export table." (see the exportfs man page or the Red Hat/CentOS deployment guides linked below for more info).

With all of that completed, you will now have a network-accessible drive /[work_directory] sitting on the HEAD node.
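A quick way to confirm the export took (again, just a sanity check, run on the HOST node with the NFS tools installed above):

sudo exportfs -v
showmount -e localhost

Both should list /[work_directory]; if they don't, revisit /etc/exports and restart the NFS server.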

Before moving on to the GUEST nodes: now what?  Again, based on other MPI and cross-network installation descriptions, a completely reasonable thing to do is build ALL of your cluster-specific programs (MPI and calculation programs) into /[work_directory].  You will then mount this directory on the client machines and set the PATHs on these machines to include /[work_directory], so every node sees the same programs (a sketch of that PATH step follows).
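As that sketch (assuming you drop your executables into /[work_directory]/bin, which is my made-up layout and not a requirement), you would add something like the following line to ~/.bashrc (or /etc/profile) on each node:

export PATH=/[work_directory]/bin:$PATH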

2. GUEST Node Installation

a. sudo apt-get install portmap nfs-common

This installs portmap and the NFS command files (but not the server) and also "starts up" the client NFS tools.  You should be ready to mount network drives immediately.

b. sudo mkdir /[work_directory]

We make the directory that will have the network drive mounted (the same name as the directory on the HOST node).  I've placed it in the same location in the directory hierarchy as it exists on the HEAD node (sitting right in /, not in /mnt or whatever).

For immediate access -> c1. sudo mount HOST_MACHINE:/[work_directory] /[work_directory]

Here, HOST_MACHINE may be an IP address (often something like 192.168.nn.nn or 10.1.nn.nn if you've set up your cluster on a switch; type ifconfig on the HOST machine to find out its actual address) or a domain name (head.campus.edu, for instance).  It's that simple (if everything installed properly).
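To confirm the mount took, either of the following works (diagnostics only):

df -h /[work_directory]
mount | grep nfs

The first should report the HOST machine's disk, and the second should show a line of the form HOST_MACHINE:/[work_directory] on /[work_directory] type nfs (...).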

For long-term, automated access -> c2a. sudo pico /etc/fstab

If you want to make this connection permanent (and this is a very good idea on a cluster), you can modify /etc/fstab by adding the following line:

HOST_MACHINE:/[work_directory] /[work_directory] nfs   rw   0   0

Then type

c2b. sudo mount /[work_directory]

As for additional information and discussion, there is quite a bit online already (as you might expect), with a long Ubuntu thread on the subject at ubuntuforums.org/showthread.php?t=249889.  For a bit more technical information on the subject, check out www.troubleshooters.com/linux/nfs.htm.

www.ubuntu.com
en.wikipedia.org/wiki/Message_Passing_Interface
www.open-mpi.org
www.mcs.anl.gov/research/projects/mpi/mpich1
www.mcs.anl.gov/research/projects/mpich2
en.wikipedia.org/wiki/Network_file_system
www.debian.org/doc/manuals/apt-howto
en.wikipedia.org/wiki/Secure_Shell
www.somewhereville.com/?p=616
en.wikipedia.org/wiki/Remote_procedure_call
www.redhat.com/docs/manuals/enterprise/RHEL-5-manual/Deployment_Guide-en-US/s1-nfs-server-config-exports.html
www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-nfs-server-export.html
linux.about.com/library/cmd/blcmdl8_exportfs.htm
ubuntuforums.org/showthread.php?t=249889
www.troubleshooters.com/linux/nfs.htm

Compiling Single-Precision And Double-Precision GROMACS 3.3.3 With OpenMPI 1.2.6 Under OSX 10.5 (Leopard)

The following is a cookbook for compiling single-precision and double-precision GROMACS 3.3.3 in OSX 10.5 (specifically on a MacBook Pro, but most likely general to all things running 10.5) with OpenMPI 1.2.6. My hope is that this saves someone several hours of unnecessary work trying to overcome an otherwise unknown incompatibility between MPICH2 and GROMACS 3.3.3 in Leopard (10.5).

The content of this page is the result of several unsuccessful hours of trying to build MPI-enabled GROMACS with MPICH2 1.0.6 and 1.0.7. MPICH2 compiles just fine with standard options, including a successful test of mpd and mpdtrace. GROMACS 3.3.3 compiles just fine with standard options for both single-precision and double-precision. Compiling GROMACS with MPI support yields an mdrun file that fails in the following manner:

[host:XXXX1] *** Process received signal ***
[host:XXXX1] Signal: Segmentation fault (11)
[host:XXXX1] Signal code: Address not mapped (1)
[host:XXXX1] Failing at address: 0xffffff94
[ 1] [0x00000000, 0xffffff94] (FP-)
[host:XXXX2] *** Process received signal ***
[host:XXXX2] Signal: Bus error (10)
[host:XXXX2] Signal code: (2)
[host:XXXX2] Failing at address: 0x2
[host:XXXX1] *** End of error message ***
[ 1] [0x00000000, 0x00000002] (FP-)
[marlin:XXXX2] *** End of error message ***

This problem, it has been discovered, is remedied by using OpenMPI 1.2.6. Not a proper diagnosis of the MPICH2/GROMACS problem, but my primary concern is getting MPI working in order to use both cores on my MacBook Pro to get calculations running, not fixing software bugs to get someone else's calculations running (when there's an easier fix, anyway).

Installation Procedure

You'll need the following three files (relevant versions and links are subject to change!):

fftw-3.1.2, http://www.fftw.org/

According to the website:

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions, of arbitrary input size, and of both real and complex data (as well as of even/odd data, i.e. the discrete cosine/sine transforms or DCT/DST).

FFTW compiles double-precision libraries by default. You'll need to build single-precision libraries for the single-precision GROMACS install.

gromacs-3.3.3, http://www.gromacs.org/

The newest version. Compilation of version 3.3.2 leads to various problems under OSX 10.5 (undiagnosed here, but certainly reproducible).

openmpi-1.2.6, http://www.open-mpi.org/

This is the MPICH2 replacement (meaning I've used MPICH2 for other programs). Again, I'm not playing favorites on a version-to-version level. On my MacBook Pro running OSX 10.5, OpenMPI does what I need, MPICH2 doesn't.

GROMACS uses MPI to distribute processes between processors. If you don't have GROMACS compiled with MPI, you don't have multi-processor calculations (simple enough). That is to say, Activity Monitor reads 100% for one core while the other core sits at whatever residual % the rest of your processes require (this was a point of confusion in a few discussions, in case someone wonders why I would bother bringing such a point up).

Associated Notes On Compilation

A few important notes to be aware of for the compilation.

XCode 3.0, http://developer.apple.com/tools/xcode/

If you've not installed XCode 3.0 yet, you should in order to get compilers and libraries installed on your machine. XCode is the official programmer's toolbox under OSX and installs far more than you'll need (but it never hurts). Note that this is a BIG download (1.1 GB) but is a far less taxing route to getting all of the necessary files installed on your machine than relying on (many, many) individual downloads.

Root Access

Running "make install" will install files into core directories that are inaccessible to typical OSX User accounts. Access to these directories (such as /usr/local and /usr/local/lib) is granted only to the root user in OSX, which is not activated by default (and a good thing, too). To turn on root access, go to:

Macintosh HD > Applications > Utilities > Directory Utility

In Directory Utility, go to

Edit > Enable Root User

At build time, you'll want to be logged in as the root user (see below).

No Spaces In Folder Names

This goof also took some time to figure out. Regardless of where you start the program build processes from, you need to make sure your directory names have no spaces between words.

>& FILENAME.txt

>& FILENAME.txt is used to direct output to FILENAME.txt, providing a way to follow each step in the process for error checking and the like (the log files from my own GROMACS build were linked from the original post for comparison).

Installation Process

How to follow the text below:

1. Step in the process (one per program, significant activity)

1a. Text that follows the numeric-alphabetical label is what needs to be typed in at the Terminal window prompt. Descriptive/explanatory text follows below each step.

The installation is assumed to be set up as follows: the GROMACS, OpenMPI, and FFTW .tar files (I suspect downloading will automatically unzip them) exist in a directory on your Desktop (say, INSTALL_DIRECTORY_NAME). Here's the compilation procedure:

1. Open a Terminal (Macintosh HD > Applications > Utilities > Terminal). If you don't have this application on your Dock, I recommend putting it there. You'll certainly be getting some use out of it.

2. The Terminal window defaults to your User directory. First we change our user to root (for full access), then cd into the install directory.

2a. su

(put in root password at prompt)

at prompt, type:

2b. csh
2c. cd Desktop/INSTALL_DIRECTORY_NAME

3. Untar the GROMACS, OpenMPI, and FFTW .tar downloads (again, I suspect they will have already been gunzipped as part of the download). You'll end up with three new directories in this folder.

3a. tar -xvf fftw-3.1.2.tar
3b. tar -xvf gromacs-3.3.3.tar
3c. tar -xvf openmpi-1.2.6.tar
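If the downloads arrive still gzipped (.tar.gz or .tgz), just add the z flag to tar; for example (the exact filename depends on what you downloaded):

tar -xzvf fftw-3.1.2.tar.gz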

4. Build FFTW

We'll compile fftw-3.1.2 first. For the single-precision build, the steps are:

4a. cd fftw-3.1.2
4b. ./configure --enable-float --enable-threads >& configure_fftw_single.txt
4c. make >& make_fftw_single.txt
4d. make install >& make_install_fftw_single.txt
4e. make distclean >& make_distclean_fftw_single.txt
4f. cd ..

This series of steps configures (configure), builds (make), installs (make install), and cleans up (make distclean). The ">&" directs output to the .txt files for diagnosis (all of which you can leave off). To follow step progress, open a new Terminal window, get to the directory of interest and type:

4g. tail -f FILENAME_OF_INTEREST.txt

To build the double-precision version (the FFTW default, and what you're better off building anyway):

4h. cd fftw-3.1.2
4i. ./configure --enable-threads >& configure_fftw_double.txt
4j. make >& make_fftw_double.txt
4k. make install >& install_fftw_double.txt
4l. make distclean >& distclean_fftw_double.txt
4m. cd ..
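With both FFTW builds done, a quick check that the libraries landed where GROMACS will look for them (the single-precision libraries carry an f in the name, e.g. libfftw3f, while the double-precision ones are plain libfftw3):

ls /usr/local/lib | grep fftw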

5. Build OpenMPI

This build could not be easier and will place the OpenMPI 1.2.6 executables into /usr/local/openmpi (in the interest of organization).

5a. cd openmpi-1.2.6
5b. ./configure --prefix=/usr/local/openmpi >& configure_openmpi.txt
5c. make >& make_openmpi.txt
5d. make install >& install_openmpi.txt
5e. make distclean >& distclean_openmpi.txt
5f. cd ..
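One note before moving on: because OpenMPI was given its own prefix, the GROMACS --enable-mpi configure steps below need to be able to find the MPI compiler wrappers (mpicc and friends). The simplest fix, assuming you are still in the csh session from step 2b, is to put /usr/local/openmpi/bin at the front of your PATH (you will also want it there later so that mpirun resolves without a full path):

setenv PATH /usr/local/openmpi/bin:$PATH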

6. Build GROMACS

The following four builds will yield all four possible combinations of GROMACS (single-precision and double-precision) and MPI (without and with). Do you need four builds of GROMACS? Definitely not. If you're reading this page, you're probably interested only in double-precision and MPI (the last of the four). The four builds listed below were performed for testing purposes only. I only ever use the double-precision MPI version when I'm doing my own work.

Note: Each build included the generation of links to executables that go into /usr/local/bin. Accordingly, all four versions are accessible from the Terminal window without having to specify directories (and each version has individually named files as defined by --program-suffix=).

GROMACS Single-Precision

This will install a single-precision GROMACS build into /usr/local/gromacs333_single with no specific file suffix.

6a. cd gromacs-3.3.3
6b. ./configure --prefix=/usr/local/gromacs333_single >& gromacs_single_configure.txt
6c. make >& gromacs_single_make.txt
6d. make install >& gromacs_single_install.txt
6e. make links >& gromacs_single_links.txt
6f. make distclean >& gromacs_single_distclean.txt
6g. cd ..

GROMACS Single-Precision With OpenMPI

This will install a single-precision MPI-GROMACS build into /usr/local/gromacs333_single_mpi with the file suffix _mpi.

6h. cd gromacs-3.3.3
6i. ./configure --enable-mpi --program-suffix=_mpi --prefix=/usr/local/gromacs333_single_mpi
>& gromacs_single_mpi_configure.txt
6j. make >& gromacs_single_mpi_make.txt
6k. make install >& gromacs_single_mpi_install.txt
6l. make links >& gromacs_single_mpi_links.txt
6m. make distclean >& gromacs_single_mpi_distclean.txt
6n. cd ..

GROMACS Double-Precision

This will install a double-precision GROMACS build into /usr/local/gromacs333_double with the file suffix _d.

6o. cd gromacs-3.3.3
6p. ./configure --program-suffix=_d --prefix=/usr/local/gromacs333_double --enable-double
>& gromacs_double_configure.txt
6q. make >& gromacs_double_make.txt
6r. make install >& gromacs_double_install.txt
6s. make links >& gromacs_double_links.txt
6t. make distclean >& gromacs_double_distclean.txt
6u. cd ..

GROMACS Double-Precision With OpenMPI

This will install a double-precision MPI-GROMACS build into /usr/local/gromacs333_double_mpi with the file suffix _mpi_d.

6v. cd gromacs-3.3.3
6w. ./configure --enable-mpi --program-suffix=_mpi_d --prefix=/usr/local/gromacs333_double_mpi
--enable-double >& gromacs_double_mpi_configure.txt
6x. make >& gromacs_double_mpi_make.txt
6y. make install >& gromacs_double_mpi_install.txt
6z. make links >& gromacs_double_mpi_links.txt
6aa. make distclean >& gromacs_double_distclean_mpi.txt
6ab. cd ..

Running MPI-GROMACS Calculations (Non-specific)

Running MPI-GROMACS is straightforward, requiring only one additional step and a few changes to how you use grompp and mdrun.

a. mpd &
b. grompp_mpi_d -np 2

-np 2 is added to tell grompp that two processors are being used. mdrun expects the same number of processors as was specified in grompp.

c. mpirun -np 2 /usr/local/gromacs333_double_mpi/bin/mdrun_mpi_d -np 2

mpirun first takes the number of processes to use (-np 2) and then requires the path to the mdrun executable (under /usr/local/gromacs333_double_mpi). mdrun requires specifying the number of processors as well (-np 2), as per the specification in grompp.
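Putting it together with actual file names (md.mdp, conf.gro, topol.top, and topol.tpr are just typical placeholders here, not anything mandated by GROMACS), a double-precision MPI run on two cores looks roughly like:

grompp_mpi_d -np 2 -f md.mdp -c conf.gro -p topol.top -o topol.tpr
mpirun -np 2 /usr/local/gromacs333_double_mpi/bin/mdrun_mpi_d -np 2 -s topol.tpr

One caveat on step (a) above: mpd is MPICH2's process manager and, as far as I can tell, the OpenMPI mpirun built here launches its processes directly, so you may find you don't need mpd at all. If mpirun is not in your PATH, call it by its full path (/usr/local/openmpi/bin/mpirun).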

And Finally…

Like all first draft cookbooks, this might be missing some important spices. If, by any chance, you found this page and have recommendations to improve speed, stability, organization, etc., please drop a line and I'll keep track of modifications.

en.wikipedia.org/wiki/Single_precision
en.wikipedia.org/wiki/Double_precision
www.gromacs.org
www.apple.com/macosx/
en.wikipedia.org/wiki/Mac_OS_X_v10.5
www.apple.com/macbookpro/
www.open-mpi.org/
www.mcs.anl.gov/research/projects/mpich2/
www.gromacs.org/documentation/reference/online/mdrun.html
www-unix.mcs.anl.gov/mpi/
www.fftw.org
en.wikipedia.org/wiki/Activity_monitor
developer.apple.com/tools/xcode/
en.wikipedia.org/wiki/NetInfo_Manager
en.wikipedia.org/wiki/Terminal_(application)
en.wikipedia.org/wiki/Dock_(computing)
en.wikipedia.org/wiki/Tar_file
www.gromacs.org/documentation/reference/online/grompp.html