Compiling And Running GAMESS-US (1 May 2013(R1)) On 64-bit Ubuntu 12.X/13.X In SMP Mode

Author's Note 1: It is my standard policy to put too much info into guides so that those who are searching for specific problems they come across will find the offending text in their searches. With luck, your "build error" search sent you here.

Author's Note 2: It's not as bad as it looks (I've included lots of output and error messages for easy searching)!

Author's Note 3: I won't be much help for you in diagnosing your errors, but am happy to tweak the text below if something is unclear.

Conventions: I include both the commands you type in your Terminal and some of the output from these commands, the output being where most of the errors appear that I work on in the discussion.

Input is formatted as below:

username – your username (check your prompt)
machinename – your hostname (type hostname or check your prompt)

Text you type at the prompt (the prompt is also shown, so you can see the directory structure; copy + paste should be fine)

Text you get out (for checking results and reproducing errors)

Having just recently downloaded the newest version of GAMESS-US (R1 2013), my first few passes at using it under Linux (specifically, Ubuntu 12.04) ran into a few walls that required some straightforward modifications and a little bit of system prep planning. As the problems I hit on the way to a successful run (after a successful compilation and linking) are likely the same ones you'll hit in your attempts to get GAMESS-US to run, I'm posting my problems and solutions here.

Qualifier 1 – My concern at the moment has been to get GAMESS-US to run under 64-bit Ubuntu 12.04 on a multi-core board (ye olde symmetric multiprocessing (which I always called single multi-processor, or SMP)). While some answers may follow in what's below, this post doesn't cover MPI-specific builds (nothing through a router, that is). SMP is the only concern (which is to say, I likely won't have good answers if you send along an MPI-specific question). Also, although I'm VERY interested in trying it, I've not yet attempted to build a GPU-capable version (but plan to in the near future).

Qualifier 2 – It is my standard policy to install apps into /opt, and my steps below will reflect that (specifically because there's a permission issue that needs to be addressed when you first try to build components). You can default to whatever you like, but keep in mind my tweaks when you try to build your local copy.

So, with the qualifiers in mind…

1. Prepping The System (apt-get)

There are few things better than being able to apt-get everything you need to prep your machine for an install, and I'm pleased to report that the (current) process for putting the important files onto Ubuntu 12.X/13.X is easy. Assuming you're not going the Intel / PGI / MKL route, you can do everything by installing gfortran (the compiler, presently version 4.4) and the BLAS and ATLAS math libraries.

username@machinename:~$ sudo apt-get install gfortran libblas-dev libatlas-base-dev

Note: your atlas libraries will be installed in /usr/lib64/atlas/ – this will matter when you run config.

After these finish, run the following to determine your installed gfortran version (it will be asked for by the new GAMESS config):

username@machinename:~$ gfortran -dumpversion

GNU Fortran (Ubuntu 4.4.3-4ubuntu5.1) 4.4.3
Copyright (C) 2010 Free Software Foundation, Inc.

GNU Fortran comes with NO WARRANTY, to the extent permitted by law.
You may redistribute copies of GNU Fortran
under the terms of the GNU General Public License.
For more information about these matters, see the file named COPYING

The number to remember here is 4.4, and you're ready for GAMESS.

2. Downloading GAMESS-US, Placing Into /opt, And Changing Permissions

First, obviously, get the GAMESS source from the official GAMESS download page.

After downloading, copy/move gamess-current.tar.gz into /opt

username@machinename:~$ cd ~/Downloads
username@machinename:~/Downloads$ sudo cp gamess-current.tar.gz /opt
username@machinename:~/Downloads$ cd /opt
username@machinename:/opt$ sudo gunzip gamess-current.tar.gz
username@machinename:/opt$ sudo tar xvf gamess-current.tar

gamess/
gamess/gms-files.csh
gamess/tools/

gamess/misc/count.code
gamess/misc/vbdum.src
gamess/Makefile.in

At this point, if you go through the config process and get to the point of building ddikick.x, you will get an error when you first try to run ./compddi

username@machinename:/opt/gamess/ddi$ sudo ./compddi >& compddi.log &

[1] 4622
-bash: compddi.log: Permission denied

The problem is with the permission of the entire gamess folder:

drwxr-xr-x  4 root root       4096 2014-04-04 21:43 .
drwxr-xr-x 22 root root       4096 2013-12-27 16:17 ..
drwxr-xr-x 14 1300 504        4096 2014-04-04 21:43 gamess
-rw-r--r--  1 root root  198481920 2014-04-04 21:42 gamess-current.tar

Which you remedy before running into this error by changing the permissions:

username@machinename:/opt$ sudo chown -R username gamess

The next step is recommended when you run config, so I'm performing the step here to get it out of the way. With the atlas libraries installed, generate two symbolic links.

username@machinename:/opt$ cd /usr/lib64/atlas
username@machinename:/usr/lib64/atlas$ sudo ln -s libf77blas.so.3.0 libf77blas.so
username@machinename:/usr/lib64/atlas$ sudo ln -s libatlas.so.3.0 libatlas.so
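If the ln commands complain that the .so.3.0 files don't exist, don't panic; the versioned file names drift between Ubuntu releases, so just list what the package actually put there and point the links at those files instead (the check below is plain ls, nothing GAMESS-specific):

username@machinename:/usr/lib64/atlas$ ls -l libatlas* libf77blas*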

And, at this point, you're ready to run the new (well, new to me) config script that preps your system install.

3. Building GAMESS-US

Back to the GAMESS-US folder.

username@machinename:/usr/lib64/atlas$ cd /opt/gamess
username@machinename:/opt/gamess$ sudo ./config

This script asks a few questions, depending on your computer system,
to set up compiler names, libraries, message passing libraries,
and so forth.

You can quit at any time by pressing control-C, and then <return>.

Please open a second window by logging into your target machine,
in case this script asks you to 'type' a command to learn something
about your system software situation. All such extra questions will
use the word 'type' to indicate it is a command for the other window.

After the new window is open, please hit <return> to go on.

You can open that second window or blindly assume that what I include below is all you need.

[enter]

GAMESS can compile on the following 32 bit or 64 bit machines:
axp64 - Alpha chip, native compiler, running Tru64 or Linux
cray-xt - Cray's massively parallel system, running CNL
hpux32 - HP PA-RISC chips (old models only), running HP-UX
hpux64 - HP Intel or PA-RISC chips, running HP-UX
ibm32 - IBM (old models only), running AIX
ibm64 - IBM, Power3 chip or newer, running AIX or Linux
ibm64-sp - IBM SP parallel system, running AIX
ibm-bg - IBM Blue Gene (P or L model), these are 32 bit systems
linux32 - Linux (any 32 bit distribution), for x86 (old systems only)
linux64 - Linux (any 64 bit distribution), for x86_64 or ia64 chips
AMD/Intel chip Linux machines are sold by many companies
mac32 - Apple Mac, any chip, running OS X 10.4 or older
mac64 - Apple Mac, any chip, running OS X 10.5 or newer
sgi32 - Silicon Graphics Inc., MIPS chip only, running Irix
sgi64 - Silicon Graphics Inc., MIPS chip only, running Irix
sun32 - Sun ultraSPARC chips (old models only), running Solaris
sun64 - Sun ultraSPARC or Opteron chips, running Solaris
win32 - Windows 32-bit (Windows XP, Vista, 7, Compute Cluster, HPC Edition)
win64 - Windows 64-bit (Windows XP, Vista, 7, Compute Cluster, HPC Edition)
winazure - Windows Azure Cloud Platform running Windows 64-bit
type 'uname -a' to partially clarify your computer's flavor.
please enter your target machine name:

We're doing a linux64 build, so type the following at the prompt:

linux64

Where is the GAMESS software on your system?
A typical response might be /u1/mike/gamess,
most probably the correct answer is /opt/gamess

GAMESS directory? [/opt/gamess]

Who is this mike and where is my folder u1? We'll get to that in rungms. For now, I'm installing in /opt, so the default directory is fine:

[enter]

Setting up GAMESS compile and link for GMS_TARGET=linux64
GAMESS software is located at GMS_PATH=/opt/gamess

Please provide the name of the build location.
This may be the same location as the GAMESS directory.

GAMESS build directory? [/opt/gamess]

Fine as selected.

[enter]

Please provide a version number for the GAMESS executable.
This will be used as the middle part of the binary's name,
for example: gamess.00.x

Version? [00]

Is this important? Maybe, if you plan on building multiple versions of GAMESS-US (you might want a GPU-friendly version, one with a different compiler, one with MPI, etc.). Number as you wish and remember the number when it comes to rungms. That said, the actual linking step seems to really want to produce a 01 version (we'll get to that). Meantime, default value is fine.

[enter]

Linux offers many choices for FORTRAN compilers, including the GNU
compiler set ('g77' in old versions of Linux, or 'gfortran' in
current versions), which are included for free in Unix distributions.

There are also commercial compilers, namely Intel's 'ifort',
Portland Group's 'pgfortran', and Pathscale's 'pathf90'. The last
two are not common, and aren't as well tested as the others.

type 'rpm -aq | grep gcc' to check on all GNU compilers, including gcc
type 'which gfortran' to look for GNU's gfortran (a very good choice),
type 'which g77' to look for GNU's g77,
type 'which ifort' to look for Intel's compiler,
type 'which pgfortran' to look for Portland Group's compiler,
type 'which pathf90' to look for Pathscale's compiler.
Please enter your choice of FORTRAN:

We're using gfortran (currently 4.4.3):

gfortran

gfortran is very robust, so this is a wise choice.

Please type 'gfortran -dumpversion' or else 'gfortran -v' to
detect the version number of your gfortran.
This reply should be a string with at least two decimal points,
such as 4.1.2 or 4.6.1, or maybe even 4.4.2-12.
The reply may be labeled as a 'gcc' version,
but it is really your gfortran version.
Please enter only the first decimal place, such as 4.1 or 4.6:

4.4

Alas, your version of gfortran does not support REAL*16,
so relativistic integrals cannot use quadruple precision.
Other than this, everything will work properly.
hit <return> to continue to the math library setup.

If this were my biggest concern, I'd be a happy quantum chemist. Obviously you can try to install other flavors of gfortran and, possibly, by the time you need the procedure I'm following, a newer version of gfortran will be apt-gotten.

[enter]

Linux distributions do not include a standard math library.

There are several reasonable add-on library choices,
MKL from Intel for 32 or 64 bit Linux (very fast)
ACML from AMD for 32 or 64 bit Linux (free)
ATLAS from www.rpmfind.net for 32 or 64 bit Linux (free)
and one very unreasonable option, namely 'none', which will use
some slow FORTRAN routines supplied with GAMESS. Choosing 'none'
will run MP2 jobs 2x slower, or CCSD(T) jobs 5x slower.

Some typical places (but not the only ones) to find math libraries are
Type 'ls /opt/intel/mkl' to look for MKL
Type 'ls /opt/intel/Compiler/mkl' to look for MKL
Type 'ls /opt/intel/composerxe/mkl' to look for MKL
Type 'ls -d /opt/acml*' to look for ACML
Type 'ls -d /usr/local/acml*' to look for ACML
Type 'ls /usr/lib64/atlas' to look for Atlas

Enter your choice of 'mkl' or 'atlas' or 'acml' or 'none':

atlas

Where is your Atlas math library installed? A likely place is
/usr/lib64/atlas
Please enter the Atlas subdirectory on your system:

Our location is, in fact, /usr/lib64/atlas, so we type it in accordingly.

NOTE: If you don't type anything but [enter] below, the script closes (/usr/lib64/atlas is listed as the expected location, but it is not defaulted by the script). You need to type it in.

/usr/lib64/atlas
 

The linking step in GAMESS assumes that a softlink exists
within the system's /usr/lib64/atlas
from libatlas.so to a specific file like libatlas.so.3.0
from libf77blas.so to a specific file like libf77blas.so.3.0
config can carry on for the moment, but the 'root' user should
chdir /usr/lib64/atlas
ln -s libf77blas.so.3.0 libf77blas.so
ln -s libatlas.so.3.0 libatlas.so
prior to the linking of GAMESS to a binary executable.

Math library 'atlas' will be taken from /usr/lib64/atlas

please hit <return> to compile the GAMESS source code activator

We already created those symbolic links back in step 2, so there's nothing more to do here.

[enter]

gfortran -o /opt/gamess/tools/actvte.x actvte.f
unset echo
Source code activator was successfully compiled.

please hit <return> to set up your network for Linux clusters.

[enter]

If you have a slow network, like Gigabit Ethernet (GE), or
if you have so few nodes you won't run extensively in parallel, or
if you have no MPI library installed, or
if you want a fail-safe compile/link and easy execution,
choose 'sockets'
to use good old reliable standard TCP/IP networking.

If you have an expensive but fast network like Infiniband (IB), and
if you have an MPI library correctly installed,
choose 'mpi'.

communication library ('sockets' or 'mpi')?

Again, I'm not building an MPI-friendly version, so I'm using sockets.

sockets

64 bit Linux builds can attach a special LIBCCHEM code for fast
MP2 and CCSD(T) runs. The LIBCCHEM code can utilize nVIDIA GPUs,
through the CUDA libraries, if GPUs are available.
Usage of LIBCCHEM requires installation of HDF5 I/O software as well.
GAMESS+LIBCCHEM binaries are unable to run most of GAMESS computations,
and are a bit harder to create due to the additional CUDA/HDF5 software.
Therefore, the first time you run 'config', the best answer is 'no'!
If you decide to try LIBCCHEM later, just run this 'config' again.

Do you want to try LIBCCHEM? (yes/no):

no

Your configuration for GAMESS compilation is now in
/opt/gamess/install.info
Now, please follow the directions in
/opt/gamess/machines/readme.unix
username@machinename:/opt/gamess$

At this stage, you're ready to build ddikick.x and continue with the compiling.

4. Build ddikick.x

username@machinename:/opt/gamess$ cd ddi
username@machinename:/opt/gamess/ddi$ sudo ./compddi >& compddi.log &

This will dump its output into compddi.log (which will now work, thanks to the corrected permissions).

username@machinename:/opt/gamess/ddi$ sudo mv ddikick.x ..
username@machinename:/opt/gamess/ddi$ cd ..
username@machinename:/opt/gamess$ sudo ./compall >& compall.log &

Feel free to follow along as compall.log dumps results. You're also welcome to follow the readme.unix advice:

This takes a while, so go for coffee, or check the SF Giants web page.
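When compall finishes, a crude but quick sanity check I like to run (my own habit, not something readme.unix asks for) is to grep the log for anything that looks like a failure before moving on to linking:

username@machinename:/opt/gamess$ grep -i "error" compall.log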

Upon completion, the last step is to link the executable.

Now, it used to be the case that you specified the version number in the lked step. So, if you wanted to stick with the 00 version from the config file, you'd type

username@machinename:/opt/gamess$ sudo ./lked gamess 00 >& lked.log &

When you do that at present, you get

[1] 7626
username@machinename:/opt/gamess$

[1]+ Stopped sudo ./lked gamess 00 &>lked.log

This then leads you to use the lked call from the readme.unix file.

username@machinename:/opt/gamess$ sudo ./lked gamess 01 >& lked.log &

Which then produces lked.log and gamess.01.x.

Now, if you run with 00 again, you get a successful linking of gamess.00.x. I'm not sure why this happens, but the version number isn't important so long as you specify the right one when you use rungms (so I've not diagnosed it further).

At this point, you have a gamess.00.x and/or gamess.01.x executable in your /opt/gamess folder:

30828747 2014-04-04 22:41 gamess.01.x

I'm going to ignore the 00 business from the config file and use the gamess.01.x executable.

We're ready to run calculations and work through the next set of errors you'll receive if you don't properly modify files.

5. PATH Setting

First, we copy rungms to our home folder, then add /opt/gamess to the PATH:

username@machinename:/opt/gamess$ cp rungms ~/
username@machinename:/opt/gamess$ cd ~/
username@machinename:~$ nano .bashrc

Add the following to the bottom of .bashrc (or extend your PATH)

PATH=$PATH:/opt/gamess

Quit nano and source.

username@machinename:~$ source .bashrc
[OPTIONAL] username@machinename:~$ echo $PATH

/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:…/opt/gamess:

6. rungms (Probably Why You're Here)
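A quick word on test.inp before the errors start: any valid GAMESS input will do here (the exam*.inp files that ship with the source, somewhere under /opt/gamess/tests/, are the usual choice). If you just want something tiny to paste into test.inp, below is the sort of water single-point input I use for smoke tests; it's my own example rather than one of the distributed files, so treat it as a sketch.

 $CONTRL SCFTYP=RHF RUNTYP=ENERGY $END
 $BASIS GBASIS=STO NGAUSS=3 $END
 $DATA
Water...RHF/STO-3G single point
C1
O 8.0   0.0000   0.0000   0.0000
H 1.0   0.0000   0.7572   0.5865
H 1.0   0.0000  -0.7572   0.5865
 $END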

If you just go blindly into a run, you'll get the following error:

username@machinename:~$ ./rungms test.inp

—– GAMESS execution script 'rungms' —–
This job is running on host machinename
under operating system Linux at Fri Apr 4 22:47:55 EDT 2014
Available scratch disk space (Kbyte units) at beginning of the job is
df: `/scr/username': No such file or directory
df: no file systems processed
GAMESS temporary binary files will be written to /scr/username
GAMESS supplementary output files will be written to /home/username/scr
Copying input file test.inp to your run's scratch directory…
cp test.inp /scr/username/test.F05
cp: cannot create regular file `/scr/username/test.F05': No such file or directory
unset echo
/u1/mike/gamess/gms-files.csh: No such file or directory.

As is obvious, rungms needs some modifying.

username@machinename:~$ nano rungms

Scroll down until you see the following:

set TARGET=sockets
set SCR=/scr/$USER
set USERSCR=~$USER/scr
set GMSPATH=/u1/mike/gamess

Given that it's just me on the machine, I tend to simplify this by making SCR and USERSCR the same directory, and I make them both /tmp. If you intend to keep all of the files, you'll need to tailor rungms for each run case; my only concerns are the .dat and .log files, so dumping into /tmp is fine. Furthermore, we must change GMSPATH from how the ever-helpful Mike Schmidt has it set up at Iowa (he got me through some early issues when I started my GAMESS-US adventure 15-ish years ago, so I won't complain about his continued defaulted presence in the scripts) to how we want it on our own machines (in my case, /opt/gamess):

set TARGET=sockets
set SCR=/tmp
set USERSCR=/tmp
set GMSPATH=/opt/gamess
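As an aside, if you would rather keep the supplementary outputs (the .dat punch file and friends) somewhere more permanent than /tmp, the stock ~/scr layout works too; just create the directory before the first run and point USERSCR at it (the username in the path below is a placeholder, so substitute your own):

username@machinename:~$ mkdir -p ~/scr

set USERSCR=/home/username/scr

Either way, the change that matters most is pointing GMSPATH at /opt/gamess.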

With these modifications, your next run will be a bit more successful:

username@machinename:~$ ./rungms test.inp

—– GAMESS execution script 'rungms' —–
This job is running on host machinename
under operating system Linux at Fri Apr 4 22:51:35 EDT 2014
Available scratch disk space (Kbyte units) at beginning of the job is
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda2 1905222596 249225412 1559217460 14% /
GAMESS temporary binary files will be written to /tmp
GAMESS supplementary output files will be written to /tmp
Copying input file test.inp to your run's scratch directory…
cp test.inp /tmp/test.F05
unset echo
/opt/gamess/ddikick.x /opt/gamess/gamess.00.x test -ddi 1 1 machinename -scr /tmp

Distributed Data Interface kickoff program.
Initiating 1 compute processes on 1 nodes to run the following command:
/opt/gamess/gamess.00.x test

******************************************************
* GAMESS VERSION = 1 MAY 2013 (R1) *
* FROM IOWA STATE UNIVERSITY *
* M.W.SCHMIDT, K.K.BALDRIDGE, J.A.BOATZ, S.T.ELBERT, *
* M.S.GORDON, J.H.JENSEN, S.KOSEKI, N.MATSUNAGA, *
* K.A.NGUYEN, S.J.SU, T.L.WINDUS, *
* TOGETHER WITH M.DUPUIS, J.A.MONTGOMERY *
* J.COMPUT.CHEM. 14, 1347-1363(1993) *
**************** 64 BIT LINUX VERSION ****************

INPUT CARD>
DDI Process 0: shmget returned an error.
Error EINVAL: Attempting to create 160525768 bytes of shared memory.
Check system limits on the size of SysV shared memory segments.

The file ~/gamess/ddi/readme.ddi contains information on how to display
the current SystemV memory settings, and how to increase their sizes.
Increasing the setting requires the root password, and usually a sytem reboot.

DDI Process 0: error code 911
ddikick.x: application process 0 quit unexpectedly.
ddikick.x: Fatal error detected.
The error is most likely to be in the application, so check for
input errors, disk space, memory needs, application bugs, etc.
ddikick.x will now clean up all processes, and exit…
ddikick.x: Sending kill signal to DDI processes.
ddikick.x: Execution terminated due to error(s).
unset echo
—– accounting info —–
Files used on the master node machinename were:
-rw-r--r-- 1 username username    0 2014-04-04 22:51 /tmp/test.dat
-rw-r--r-- 1 username username 1341 2014-04-04 22:51 /tmp/test.F05
ls: No match.
ls: No match.
ls: No match.
Fri Apr 4 22:51:36 EDT 2014
0.0u 0.0s 0:01.08 9.2% 0+0k 0+8io 0pf+0w

The run starts but dies with a shared-memory error. This issue is discussed at the Baldridge Group wiki: ocikbapps.uzh.ch/kbwiki/gamess_troubleshooting.html

From the wiki:

If you are sure you are not asking for too much memory in the input file, check that your kernel parameters are not allowing enough memory to be requested. You might have to increase the SHMALL & SHMAX kernel memory values to allow GAMESS to run. (See http://www.pythian.com/news/245/the-mysterious-world-of-shmmax-and-shmall/ for a better explanation.)
For example, on a machine with 4GB of memory, you might add these to /etc/sysctl.conf:
# cat /etc/sysctl.conf | grep shm
kernel.shmmax = 3064372224
kernel.shmall = 748137
Then set the new settings like so:
# sysctl -p
Since they are in /etc/sysctl.conf, they will automatically be set each time the system is booted.

In our case, we modify sysctl.conf with the recommendations from the wiki:

username@machinename:~$ sudo nano /etc/sysctl.conf

Add the following to the bottom of the file:

kernel.shmmax = 3064372224
kernel.shmall = 748137

Save and exit.

username@machinename:~$ sudo sysctl -p

net.ipv4.ip_forward = 1
kernel.shmmax = 3064372224
kernel.shmall = 748137

These memory values will change depending on your system.
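If you'd rather size these for your own box than copy the wiki's 4 GB example, the arithmetic is straightforward: kernel.shmmax is the largest single shared-memory segment in bytes, and kernel.shmall is the total shared memory allowed, counted in pages. Check your RAM and page size first (the page size is almost always 4096 bytes):

username@machinename:~$ grep MemTotal /proc/meminfo
username@machinename:~$ getconf PAGE_SIZE

Pick a shmmax that is a healthy fraction of your physical RAM, then set shmall to roughly shmmax divided by the page size (which is where the wiki's 748137 comes from relative to its 3064372224-byte shmmax).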

Now we empty /tmp and rerun.

username@machinename:~$ rm /tmp/*
username@machinename:~$ ./rungms test.inp

If your input file is worth its salt, you'll have successfully run your file on a single processor (single core, that is). If you run into additional memory errors, increase kernel.shmmax and kernel.shmall.

Now, onto the SMP part. My first attempt to run GAMESS in parallel (on 4 cores using version 00) produced the following error:

username@machinename:~$ rm /tmp/*
username@machinename:~$ ./rungms test.inp 00 4

—– GAMESS execution script 'rungms' —–
This job is running on host machinename
under operating system Linux at Fri Apr 4 22:52:52 EDT 2014
Available scratch disk space (Kbyte units) at beginning of the job is
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda2 1905222596 249225416 1559217456 14% /
GAMESS temporary binary files will be written to /tmp
GAMESS supplementary output files will be written to /tmp
Copying input file test.inp to your run's scratch directory…
cp test.inp /tmp/test.F05
unset echo
I do not know how to run this node in parallel.

I tried a number of stupid things to get the run to work, finally settling on modifying the rungms file properly. To make GAMESS know how to run the node in parallel, we need only make the following changes to our rungms file.

username@machinename:~$ nano rungms

Scroll down until you find the section below:

# 2. This is an example of how to run on a multi-core SMP enclosure,
# where all CPUs (aka COREs) are inside a -single- NODE.
# At other locations, you may wish to consider some of the examples
# that follow below, after commenting out this ISU specific part.
if ($NCPUS > 1) then
   switch (`hostname`)
      case se.msg.chem.iastate.edu:
      case sb.msg.chem.iastate.edu:
         if ($NCPUS > 2) set NCPUS=4
         set NNODES=1

The change is simple: we disable the two ISU-specific hostnames in the case list and add the hostname of our own Linux box (if you don't know it and it isn't in your prompt, simply type hostname at the prompt first).

# 2. This is an example of how to run on a multi-core SMP enclosure,
# where all CPUs (aka COREs) are inside a -single- NODE.
# At other locations, you may wish to consider some of the examples
# that follow below, after commenting out this ISU specific part.
if ($NCPUS > 1) then
   switch (`hostname`)
      case machinename:
#     case se.msg.chem.iastate.edu:
#     case sb.msg.chem.iastate.edu:
         if ($NCPUS > 2) set NCPUS=4
         set NNODES=1

This gives you parallel functionality, but the machine's resources (cores) still aren't being used correctly: no matter how many cores beyond 2 I ask for, only 2 are ever used.

[minor complaint]
Admittedly, I don't immediately see the logic of this section as currently coded: as the if statements are written, one cannot get more than 2 cores to work in this case (so far as I can tell; I'll assume I'm the one missing something, but rather than ask about it I simply changed the rungms text to the following). You can check the core usage yourself by running top in another window. This is the simplest possible modification, and it assumes you want to run N cores each time; clearly, you can make it more elegant than my version. Meantime, I want to run 4 cores on this machine, so I changed the section to reflect a 4-core board (and commented out much of the rest).
[/complaint]

# 2. This is an example of how to run on a multi-core SMP enclosure,
# where all CPUs (aka COREs) are inside a -single- NODE.
# At other locations, you may wish to consider some of the examples
# that follow below, after commenting out this ISU specific part.
if ($NCPUS > 1) then
   switch (`hostname`)
      case machinename:
#     case se.msg.chem.iastate.edu:
#     case sb.msg.chem.iastate.edu:
#        if ($NCPUS > 2) set NCPUS=2
#        set NNODES=1
#        set HOSTLIST=(`hostname`:cpus=$NCPUS)
#        breaksw
#     case machinename:
#     case br.msg.chem.iastate.edu:
         if ($NCPUS >= 4) set NCPUS=4
         set NNODES=1
         set HOSTLIST=(`hostname`:cpus=$NCPUS)
         breaksw
      case machinename:
#     case cd.msg.chem.iastate.edu:
#     case zn.msg.chem.iastate.edu:
#     case ni.msg.chem.iastate.edu:
#     case co.msg.chem.iastate.edu:
#     case pb.msg.chem.iastate.edu:
#     case bi.msg.chem.iastate.edu:
#     case po.msg.chem.iastate.edu:
#     case at.msg.chem.iastate.edu:
#     case sc.msg.chem.iastate.edu:
#        if ($NCPUS > 4) set NCPUS=4
#        set NNODES=1
#        set HOSTLIST=(`hostname`:cpus=$NCPUS)
#        breaksw
#     case ga.msg.chem.iastate.edu:
#     case ge.msg.chem.iastate.edu:
#     case gd.msg.chem.iastate.edu:
#        if ($NCPUS > 6) set NCPUS=6
#        set NNODES=1
#        set HOSTLIST=(`hostname`:cpus=$NCPUS)
#        breaksw
      default:
         echo I do not know how to run this node in parallel.
         exit 20
   endsw
endif
#

And, with this set of changes, I'm using all 4 cores on the board (though I have some significant memory issues when running MP2 calculations; that's for another post).

The typical user will never be able to do what the GAMESS group has done in making an excellent program that also happens to be free. That said, modifying rungms would be greatly simplified if there were a separate rungms script for each system type instead of one monolithic file that is mostly useless text for anyone not running one of the listed systems. If I streamline rungms for my specific system, I may post the new file accordingly.
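For what it's worth, on a standalone SMP box the whole hostname switch in section 2 could probably be collapsed to something like the sketch below, which simply trusts whatever core count you pass to rungms. This is my guess at a minimal form, not anything from the GAMESS distribution, so test it against your own copy before relying on it.

#   single node, SMP only: use however many cores were requested on the command line
if ($NCPUS > 1) then
   set NNODES=1
   set HOSTLIST=(`hostname`:cpus=$NCPUS)
endif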

Amber And Ubuntu Part 2. Amber10 (Parallel Execution) Installation In Ubuntu 8.10 (Intrepid Ibex) With OpenMPI 1.3… And Commentary

After considerable trial and building/testing errors, what follows is as simplified a complete installation and (non-X11/QM) testing of Amber10 and OpenMPI 1.3 as I think can be procedure'd in Ubuntu 8.10 (and likely previous and subsequent Ubuntu versions), dealing specifically with assorted issues with root permissions and variable definitions as per the standard procedure for Amber10 installation.

I'll begin with the short procedure and bare minimum notes, then will address a multitude of specific problems that may (did) arise during all of the build procedures.  The purpose for listing everything, it is hoped, is to make these errors appear in google during searches so that, when you come/came across the errors, your search will have provided some amount of useful feedback (and, for a few of the problems I had with previous builds of other programs, this blog is the ONLY thing that comes up in google).

Some of the content below is an extension of the single-processor build of Amber10 I posted previously.  In the interest of keeping the reading to a minimum, the short procedure below is light on explanations that are provided in the long procedure that follows.  If you're running on a multi-core computer (and who isn't anymore), you likely want to take advantage of the MPI capability, so I would advise simply following the procedure below and IGNORING the previous post (as I also state at the top of the previous page).

Enough blabbering.

LEGEND

Text in black – my ramblings.

Text in bold preformatted red - things you will type in the Terminal

Text in green – text you will either see or will type into files (using pico, my preference)

Amber10, OpenMPI 1.3, Ubuntu 8.10: The Easy, Teenage New York Version

0a. I assume you're working from a fresh installation of Ubuntu 8.10.  Some of what is below is Ubuntu-specific because of the way that Ubuntu decides to deal with the Root/Administrator/User relationship.  That said, much is the same and I'll try to remark accordingly.

0b. As with the previous post, I am writing this with an expected audience of a non-technical Linux user, which means more detail of the steps and not just "cd $HOME and ./config -> make -> make install."  If the steps are too obvious to you, find someone who thinks a Terminal window is one that won't close and let them do it.  Speaking of…

0c. Everything below is done from a Terminal window.  The last icon you will use is the Terminal icon you click on.  If you've never had the pleasure in Ubuntu Desktop, go to Applications -> Accessories -> Terminal.  You can save yourself some time dragging around menus by hovering over the Terminal icon, right-clicking, and "Add this launcher to panel."

0d. One more thing for the absolute newbies.  I am assuming that you've used Firefox to download OpenMPI 1.3, the AmberTools 1.2 and Amber10 bz2 files, and the bugfix.all files for AmberTools 1.2 and Amber10 (more on that in 0e).  That's all you'll need for the installation below.  Note that I'm not dealing with X11 (library specifications and other testing to do first), MKL (the Intel Math Kernel Library), or the GOTO BLAS libraries.  The default download folder for Firefox in a fresh installation of Ubuntu is the Desktop.  I will assume your files are on the Desktop, which will then be moved around in the installation procedure below.

0e. bugfix.all (Amber10) and bugfix.all (AmberTools 1.2) – Yes, the Amber people provide both of these files for the two different programs using the same name.  As I assume you're using Firefox to download these files from their respective pages, I recommend that you RENAME the bugfix.all file for AmberTools 1.2 to bugfix_at.all, which is the naming convention I use in the patching step of the installation process (just to keep the confusion down).  In case you've not had the pleasure yet, instead of clicking on the bugfix.all link on each page and saving the file that loads, simply right-click and "Save Link As…", then save the AmberTools 1.2 bugfix.all file as bugfix_at.all.

0f. This OpenMPI 1.3 build and series of Amber10 tests assumes SMP (symmetric multi-processing) only, meaning you run only with the CPUs on your motherboard.  The setup of OpenMPI for cluster-based computing is a little more complicated and is not presented below (but is in process for write-up).

0g. And, further, I am not using the OpenMPI you can install by simply sudo apt-get install libopenmpi1 libopenmpi-dev openmpi-bin openmpi-doc for two reasons.  First, that installs OpenMPI 1.2.8 (I believe), which mangles one of the tests in such a way that I feared for subsequent stability in Amber10 work I might find myself doing.  Second, I want to eventually be able to build OpenMPI around other Fortran compilers (specifically g95 or the Intel Fortran Compiler) or INSERT OPTION X and, therefore, prefer to build from source.

So, from a Terminal window, the mostly comment-free/comment-minimal and almost flawless procedure is as follows.  Blindly assume that everything I am doing, especially in regards to WHERE this build is occurring and WHAT happens in the last few steps, is for a good reason (explained in detail later)…

0. cd $HOME (if you're not in your $HOME directory already or don't know where it is)

1. sudo apt-get update (requires administrative password)

2. sudo apt-get install ssh g++ g++-multilib g++-4.3-multilib gcc-4.3-doc libstdc++6-4.3-dbg libstdc++6-4.3-doc flex bison fort77 netcdf-bin gfortran gfortran-multilib gfortran-doc gfortran-4.3-multilib gfortran-4.3-doc libgfortran3-dbg autoconf autoconf2.13 autobook autoconf-archive gnu-standards autoconf-doc libtool gettext patch libblas3gf liblapack3gf libgfortran2 markdown csh (this is my ever-growing list of necessary programs and libraries that are not installed as part of a fresh Ubuntu installation)

3. pico .bashrc

Add the following lines to the bottom of this file.  This will make more sense shortly…

AMBERHOME=$HOME/Documents/amber10/
export AMBERHOME

MPI_HOME=/
export MPI_HOME

Ctrl-X and the Enter Key twice to Exit

4. source .bashrc

5. pico .profile

Add the following line to the bottom of this file.

PATH="$HOME/Documents/amber10/exe:$PATH"

Ctrl-X and the Enter Key twice to Exit

6. source .profile

7. mv $HOME/Desktop/Amber* $HOME/Documents/ (this assumes the downloaded files are on the desktop)

8. mv $HOME/Desktop/openmpi-1.3.tar.gz $HOME/Documents/ (this assumes the downloaded files are on the desktop)

9. cd $HOME/Documents/

10. gunzip openmpi-1.3.tar.gz

11. tar xvf openmpi-1.3.tar

12. cd openmpi-1.3

13. ./configure --prefix=/ (installs the MPI binaries and libraries into /. Avoids library errors I ran across despite MPI_HOME, but may be fixable. Copious output to follow.  For my results, click HERE)

14. sudo make all install (this installs binaries and libraries into "/". Copious output to follow.  For my results, click HERE)

15. cd $HOME/Documents/

16. tar xvjf Amber10.tar.bz2

17. tar xvjf AmberTools-1.2.tar.bz2

18. mv $HOME/Desktop/bugfix* $AMBERHOME (moves the bugfix files into the Amber directory)

19. cd $AMBERHOME

20. patch -p0 -N -r patch-rejects < bugfix_at.all (patches AmberTools)

21. patch -p0 -N -r patch-rejects < bugfix.all (patches Amber10)

22. cd src/

23. ./configure_at -noX11

24. make -f Makefile_at (copious output to follow.  For my results, click HERE)

25. cd ../bin/

26. pico mopac.sh

At the top of the file, change sh to bash

Ctrl-X and the Enter Key twice to Exit

27. cd ../test/

28. make -f Makefile_at test (copious output to follow.  For my results, click HERE)

29. cd ../src/

30. ./configure_amber -openmpi gfortran

31. make parallel (copious output to follow.  For my results, click HERE)

32. cd $HOME/.ssh/ (at this step, we allow auto-login in ssh so that the multiple mpirun tests do not require that you supply your password constantly)

33. ssh-keygen -t dsa

34. cat id_dsa.pub >> authorized_keys2

35. chmod 644 authorized_keys2

36. cd $AMBERHOME/test/

37. csh

38. setenv DO_PARALLEL 'mpirun -np N' (here, N is the number of processors you wish to use on your mobo)

39. make test.parallel.MM (copious output to follow.  For my results, click HERE)

40. exit (exits the csh shell)

41. sudo cp -r $HOME/Documents/amber10 /opt/amber10

42. cd $HOME

43. rm -r $HOME/Documents/amber10 (this deletes the build directory.  Delete or keep as you like)

44. pico .bashrc

Make the following change to the bottom of this file.

AMBERHOME=/opt/amber10/

Ctrl-X and the Enter Key twice to Exit

45. source .bashrc

46. pico .profile

Make the following change to the bottom of this file.

PATH="/opt/amber10/exe:$PATH"

Ctrl-X and the Enter Key twice to Exit

47. source .profile

That is it!  In theory, you should now have a complete and tested Amber10 and AmberTools 1.2 build sitting in /opt/amber10.

Amber10, OpenMPI 1.3, Ubuntu 8.10: The Notes

What follows is the complete list of problems, questions, errors, issues, and general craziness from much trial and error for the installation procedure above.  As you can guess from my constant mentioning of Ubuntu in my statements-with-qualifications, some of these problems likely will not occur in other distros.  My primary reason for the extended discussion below is so that the errors and issues make their way into google so that people searching for fixes to these problems (if they come across them) will see actual content (if they choose to read it) and maybe find a reasonable fix.

I'll be expanding in sections by grouping numbers above.

0a – 0g Installation Preparations

I've not much to add here except that OpenMPI is likely not the only way to install MPI Amber10 in Ubuntu, but I think it is easier than MPICH2 to set up cross-cluster calculations on a switch'ed network.  My stronger preference for OpenMPI stems from both past positive experience with OpenMPI and GROMACS (specifically on my Macbook Pro) and eventual success with OpenMPI 1.2.X and Abinit 5.6.5.  I had hoped to use the same version of OpenMPI for GROMACS, Amber10, and Abinit, but ran into a yet-to-be-resolved issue with OpenMPI 1.3.x in the building of Abinit (the problem is, apparently, resolved in the upcoming 1.4.1 build, but I'm not much for using release candidates.  I'll be discussing this in an upcoming Abinit installation post based on my previous Abinit installation post).

1 – 2 apt-get

My listed apt-get installation set contains many, many programs and libraries that are not necessarily needed in the OpenMPI and Amber10 build but are required for other programs.  The apt-get approach is still much cleaner than installing the entire OpenSuse or Fedora DVD, but you do find yourself scrambling the first time you try to install anything to determine what programs are missing.  You don't know you need csh installed until the test scripts fail.  You forget about ssh until you run mpirun for the first time.  I do not yet know if MKL, GOTO, or any of the X11-based AmberTools programs require additional libraries to be installed, so the above apt-get list may grow.  Check the comments section at bottom for updates.

The list below shows essential programs and libraries AND their suggested additional installs. As long as you're online and apt-get'ing anyway, might as well not risk missing something for your next compiling adventure.

g++ g++-multilib  g++-4.3-multilib  gcc-4.3-doc  libstdc++6-4.3-dbg  libstdc++6-4.3-doc

gfortran gfortran-multilib gfortran-doc gfortran-4.3-multilib gfortran-4.3-doc libgfortran3-dbg

autoconf autoconf2.13 autobook autoconf-archive gnu-standards autoconf-doc libtool gettext

flex bison

ssh csh patch markdown fort77 netcdf-bin libblas3gf liblapack3gf libgfortran2

3 – 9 .bashrc and .profile Modifications, Building In Your Own Directory

For a number of programs, the procedure from source is ./configure, make, and make install, with make install often only responsible for moving folders into the correct directories (specifically, directories only root has write access to, such as /usr/local/bin, /lib, and /opt).  In many distributions, this final step is actually sudo make install.  This division of make and make install is not preserved in Amber, which complicates the Ubuntu build slightly.  Building in $HOME/Documents (as I've described the procedure above) saves you from having to constantly sudo the extraction and build process in directories that you, the typical user, do not have permission to write into.
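For comparison, the usual pattern for a "standard" source build, where only the final step needs elevated privileges, looks something like this (generic, not Amber-specific):

./configure --prefix=/usr/local
make
sudo make install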

Working in $HOME/Documents (or any of your $HOME folders) allows for the complete build and testing of AmberTools and Amber10.

The other benefit from doing as much in $HOME as possible is the lack of a need to define variables as the root user (specifically, AMBERHOME, MPI_HOME, and DO_PARALLEL) by setting variables in the root .bashrc and .profile files, adding lines to the Makefiles for AmberTools and Amber10, or setting variables at prompts.  This variable specification issue arises because when you run a program with sudo, you invoke the root privileges and the root variable definitions, so any specifications you make for your PATH or these Amber-specific variables are lost.
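You can see the variable-dropping behavior for yourself in a few seconds; with Ubuntu's default sudo configuration the environment is reset, so a variable exported in your shell is simply gone inside the sudo'ed command (a throwaway demonstration, not part of the build):

export AMBERHOME=$HOME/Documents/amber10
echo $AMBERHOME
sudo sh -c 'echo AMBERHOME is: $AMBERHOME'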

Once the build is complete in the $HOME/Documents folder, we move the entire directory into /opt, my default location for all programs I build from source (but $HOME/Documents is just fine as well once the PATH is set).

10 – 14 OpenMPI 1.3

So, why not then use OpenMPI 1.2.x?  The build of Amber10 works just fine with OpenMPI 1.2.x in Ubuntu with all of the installation specifications described in Step 2 of the short procedure (the extensive apt-get).  The problem with 1.2.x occurs for a single test after the Amber10 build that I've not yet figured out a workaround for, but the error is sinister enough that I decided to not risk similar errors in my own Amber work and, instead, use OpenMPI 1.3.x, which does not suffer the same error.  The only (ONLY) test to fail with OpenMPI 1.2.x is the cnstph (constant pH simulation) test, which runs perfectly well but fails at the close of the calculation (you can test this yourself by changing the nstlim value in the mdin file to any arbitrarily large number).  The failure message, which kills the test series, is below.  This job also fails if you simply try to run it independently of the pre-defined test set (not the make test.parallel.MM, but cd'ing into the cnstph directory, setting variables, and running the script).

==============================================================
cd cnstph && ./Run.cnstph
[ubuntu-desktop:18263] *** Process received signal ***
[ubuntu-desktop:18263] Signal: Segmentation fault (11)
[ubuntu-desktop:18263] Signal code: Address not mapped (1)
[ubuntu-desktop:18263] Failing at address: 0x9069d2cb0
[ubuntu-desktop:18264] *** Process received signal ***
[ubuntu-desktop:18265] *** Process received signal ***
[ubuntu-desktop:18265] Signal: Segmentation fault (11)
[ubuntu-desktop:18265] Signal code: Address not mapped (1)
[ubuntu-desktop:18265] Failing at address: 0x907031c70
[ubuntu-desktop:18264] Signal: Segmentation fault (11)
[ubuntu-desktop:18264] Signal code: Address not mapped (1)
[ubuntu-desktop:18264] Failing at address: 0x906dbdc00
[ubuntu-desktop:18263] [ 0] /lib/libpthread.so.0 [0x2b17b5bef0f0]
[ubuntu-desktop:18263] [ 1] /usr/local/lib/libopen-pal.so.0(_int_free+0x57) [0x2b17b4c10937]
[ubuntu-desktop:18263] [ 2] /usr/local/lib/libopen-pal.so.0(free+0xeb) [0x2b17b4c122bb]
[ubuntu-desktop:18263] [ 3] /home/userid/Documents/amber10/exe/sander.MPI(sander_+0x73ce) [0x4c6ec2]
[ubuntu-desktop:18264] [ 0] /lib/libpthread.so.0 [0x2abd8f5810f0]
[ubuntu-desktop:18264] [ 1] /usr/local/lib/libopen-pal.so.0(_int_free+0x57) [0x2abd8e5a2937]
[ubuntu-desktop:18264] [ 2] /usr/local/lib/libopen-pal.so.0(free+0xeb) [0x2abd8e5a42bb]
[ubuntu-desktop:18264] [ 3] /home/userid/Documents/amber10/exe/sander.MPI(sander_+0x73ce) [0x4c6ec2]
[ubuntu-desktop:18264] [ 4] /home/userid/Documents/amber10/exe/sander.MPI(MAIN__+0xf0a) [0x4bfa66]
[ubuntu-desktop:18264] [ 5] /home/userid/Documents/amber10/exe/sander.MPI(main+0x2c) [0x88309c]
[ubuntu-desktop:18264] [ 6] /lib/libc.so.6(__libc_start_main+0xe6) [0x2abd8f7ad466]
[ubuntu-desktop:18264] [ 7] /home/userid/Documents/amber10/exe/sander.MPI [0x43a649]
[ubuntu-desktop:18264] *** End of error message ***
[ubuntu-desktop:18263] [ 4] /home/userid/Documents/amber10/exe/sander.MPI(MAIN__+0xf0a) [0x4bfa66]
[ubuntu-desktop:18263] [ 5] /home/userid/Documents/amber10/exe/sander.MPI(main+0x2c) [0x88309c]
[ubuntu-desktop:18263] [ 6] /lib/libc.so.6(__libc_start_main+0xe6) [0x2b17b5e1b466]
[ubuntu-desktop:18263] [ 7] /home/userid/Documents/amber10/exe/sander.MPI [0x43a649]
[ubuntu-desktop:18263] *** End of error message ***
[ubuntu-desktop:18265] [ 0] /lib/libpthread.so.0 [0x2ba0032600f0]
[ubuntu-desktop:18265] [ 1] /usr/local/lib/libopen-pal.so.0(_int_free+0x57) [0x2ba002281937]
[ubuntu-desktop:18265] [ 2] /usr/local/lib/libopen-pal.so.0(free+0xeb) [0x2ba0022832bb]
[ubuntu-desktop:18265] [ 3] /home/userid/Documents/amber10/exe/sander.MPI(sander_+0x73ce) [0x4c6ec2]
[ubuntu-desktop:18265] [ 4] /home/userid/Documents/amber10/exe/sander.MPI(MAIN__+0xf0a) [0x4bfa66]
[ubuntu-desktop:18265] [ 5] /home/userid/Documents/amber10/exe/sander.MPI(main+0x2c) [0x88309c]
[ubuntu-desktop:18265] [ 6] /lib/libc.so.6(__libc_start_main+0xe6) [0x2ba00348c466]
[ubuntu-desktop:18265] [ 7] /home/userid/Documents/amber10/exe/sander.MPI [0x43a649]
[ubuntu-desktop:18265] *** End of error message ***
mpirun noticed that job rank 0 with PID 18262 on node ubuntu-desktop exited on signal 11 (Segmentation fault).
3 additional processes aborted (not shown)
./Run.cnstph:  Program error
make[1]: *** [test.sander.GB] Error 1
make[1]: Leaving directory `/home/userid/Documents/amber10/test'
make: *** [test.sander.GB.MPI] Error 2

Errors like this one scare me to no end, especially when the error seems to be in the proper termination of a process (such as writing final positions or data files) and you risk such errors occurring after 2 week simulations with no way to get your data back.  If you decide (if it's already installed, for instance) to use OpenMPI 1.2.x with Amber10 but still want to test the build, I'd suggest simply commenting out (#) the line

# cd cnstph && ./Run.cnstph

from the Makefile in the ../test directory.  I can't imagine this happens in all other distributions, but I also don't know what the problem could be given that all of the other tests work just fine.  That said, there's a failed test for the OpenMPI 1.3.x build as well when you forget to run the tests from csh, but that has nothing to do with OpenMPI (see below).

16 – 19 Building Amber10 and AmberTools At $HOME

As described in the 3 – 9 section above, the problem with not building in your $HOME directory is the passing of variables in the build processes.  For instance, if you set MPI_HOME in your $HOME .bashrc file and then sudo make parallel, the error you'll see is

Starting installation of Amber10 (parallel) at Mon Mar  9 22:30:36 EDT 2009.
cd sander; make parallel
make[1]: Entering directory `/opt/amber10/src/sander'
./checkparconf
cpp -traditional -I/usr/local/include -P -xassembler-with-cpp -Dsecond=ambsecond -DBINTRAJ -DMPI  constants.f > _constants.f
/usr/local/bin/mpif90 -c -O3 -fno-range-check -fno-second-underscore -ffree-form  -o constants.o _constants.f
Cannot open configuration file /usr/share/openmpi/mpif90-wrapper-data.txt
Error parsing data file mpif90: Not found
make[1]: *** [constants.o] Error 243
make[1]: Leaving directory `/opt/amber10/src/sander'
make: *** [parallel] Error 2

because the MPI_HOME variable is not specified for root.  Performing the compilation in $HOME/Documents avoids this issue.

If you want to build Amber and AmberTools in /opt as root for some reason and do not want to deal with modifying the .bashrc and .profile files in /root, you can modify the appropriate Amber files to define the variables needed for both building and testing. For AmberTools testing, you need to define AMBERHOME, which you do at the top of Makefile_at.

cd ../test/

sudo pico Makefile_at

include ../src/config.h

AMBERHOME=/opt/amber10/
export AMBERHOME

test: is_amberhome_defined \

sudo make -f Makefile_at test

For the Amber10 build process, you would need to modify configure_amber at the top of the file to specify the MPI_HOME variable.

sudo pico configure_amber

#!/bin/sh
#set -xv

MPI_HOME=/
export MPI_HOME

command="$0 $*"

For the Amber10 testing process, you would need to assign both AMBERHOME (so the tests know where to look for the executables) and DO_PARALLEL (so the tests know to use OpenMPI) at the top of the file.

cd ../test/

sudo pico Makefile

include ../src/config_amber.h

AMBERHOME=/opt/amber10
export AMBERHOME

DO_PARALLEL=mpirun -np 4
export DO_PARALLEL

SHELL=/bin/sh

It otherwise makes no difference at all how you choose to do things so long as the program gets built.  Much of the Ubuntu literature I've stumbled across attempts to make people avoid doing anything to change the root account settings, which was the approach I chose to use in the "Easy" procedure above.

20 – 21 Patching Amber10 and AmberTools 1.2

No surprises and not necessary for building.  Do it anyway.  And patch is included as one of the apt-get'ed programs.  Simply be cognizant of the naming of the bugfix files (the directories are the same for both Amber10 and AmberTools and the patch is simply applied to the files it finds).
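If you're nervous about what a patch will touch, GNU patch has a dry-run mode that reports what would change without modifying anything; running it before the real thing is optional but cheap:

patch -p0 -N --dry-run < bugfix_at.all
patch -p0 -N --dry-run < bugfix.all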

22 – 24 Building AmberTools

Your choices for the AmberTools build are fairly limited.  Via configure_at --help,

Usage: ./configure_at [flags] compiler

where compiler is one of:

gcc, icc, solaris_cc, irix_cc, osf1_cc

If not specified then gcc is used.

Option flags:

-mpi        use MPI for parallelization
-scalapack  use ScaLAPACK for linear algebra (utilizes MPI)
-openmp     Use OpenMP pragmas for parallelization (icc, solaris_cc,gcc(>4.2))
-opteron    options for solaris/opteron
-ultra2     options for solaris/ultra2
-ultra3     options for solaris/ultra3
-ultra4     options for solaris/ultra4
-bit64      64-bit compilation for solaris
-perflib    Use solaris performance library in lieu of LAPACK and BLAS
-cygwin     modifications for cygwin/windows
-p4         use optimizations specific for the Intel Pentium4 processor
-altix      use optimizations specific for the SGI Altix with icc
-static     create statically linked executables
-noX11      Do not build programs that require X11 libraries, e.g. xleap.
-nobintraj  Delete support for binary (netCDF) trajectory files
-nosleap    Do not build sleap, which requires unantiquated compilers.

Environment variables:
MKL_HOME    If present, will link in Intel's MKL libraries (icc,gcc)
GOTO        If present, and MKL_HOME is not set, will use this location
for the Goto BLAS routines

We're building gcc (default) with the above installs.  Running ./configure_at -noX11 with the apt-get-installed programs above should produce the following output.

Setting AMBERHOME to /home/userid/Documents/amber10

Testing the C compiler:
mpicc  -m64 -o testp testp.c
OK

Obtaining the C++ compiler version:
g++ -v
The version is ../src/configure
4.3.2
[: 520: 3: unexpected operator
OK

Testing the g77 compiler:
g77 -O2 -fno-automatic -finit-local-zero -o testp testp.f
./configure_at: 538: g77: not found
./configure_at: 539: ./testp: not found
Unable to compile a Fortran program using g77 -O2 -fno-automatic -finit-local-zero

Testing the gfortran compiler:
gfortran -O1 -fno-automatic -o testp testp.f
OK

Testing flex:
OK

Configuring netcdf; (may be time-consuming)

NETCDF configure succeeded.

The configuration file, config.h, was successfully created.

The next step is to type 'make -f Makefile_at'

25 – 28 Testing AmberTools

As a quick heads-up, if you don't have csh installed, your test will fail at the following step with the following error:

cd ptraj_rmsa && ./Run.rms
/bin/sh: ./Run.rms: not found
make: *** [test.ptraj] Error 127

Otherwise, you can see the output from my test set HERE.  Installing csh is much easier than modifying multiple Run scripts.

That said, the one fix I do perform is to modify the mopac.sh file only slightly, in accordance with the post by Mark Williamson on the Vanderbilt University Amber Listserve (the first stumbling block I ran into during testing).
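In practice that fix is just the interpreter line at the top of mopac.sh, per step 26 above; assuming your copy also starts with #!/bin/sh, the first line simply becomes:

#!/bin/bash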

29 – 31 Building Parallel Amber10

Running ./configure_amber -openmpi gfortran should output the following:

Setting AMBERHOME to /home/userid/Documents/amber10

Setting up Amber configuration file for architecture: gfortran
Using parallel communications library: openmpi
The MKL_HOME environment variable is not defined.

Testing the C compiler:
gcc  -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -O2 -m64 -o testp testp.c
OK

Testing the Fortran compiler:
gfortran -O0 -fno-range-check -fno-second-underscore -o testp testp.f
OK

——   Configuring the netCDF libraries:   ——–

Configuring netcdf; (may be time-consuming)
NETCDF configure succeeded.
MPI_HOME is set to /

The configuration file, config_amber.h, was successfully created.

32 – 35 Automatic ssh Login

This step saves you from constantly having to input your password during the mpirun testing phase. The password prompts come from ssh (and, because I have machines that are both online and accessible, I prefer to set up the ssh side properly rather than go without that layer of security). The first time you run mpirun, ssh will throw the following back at you:

The authenticity of host 'userid-desktop (127.0.1.1)' can't be established.
RSA key fingerprint is eb:86:24:66:67:0a:7a:7b:44:95:a6:83:d2:a8:68:01.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'userid-desktop' (RSA) to the list of known hosts.

Generating the automatic login file for ssh will look like the following:

Generating public/private dsa key pair.
Enter file in which to save the key (/home/userid/.ssh/id_dsa):
Created directory '/home/userid/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/userid/.ssh/id_dsa.
Your public key has been saved in /home/userid/.ssh/id_dsa.pub.
The key fingerprint is:
54:84:68:79:3b:ec:17:41:c9:96:98:10:df:4f:cc:42 userid@userid-desktop
The key's randomart image is:
+--[ DSA 1024]----+

36 – 40 Testing Amber10

If you're not in csh, either this test fails…

cd rdc && ./Run.dip
if: Badly formed number.
make[1]: *** [test.sander.BASIC] Error 1
make[1]: Leaving directory `/home/userid/Documents/amber10/test'
make: *** [test.sander.BASIC.MPI] Error 2

or this one…

cd pheMTI && ./Run.lambda0
This test must be run in parallel
make: *** [test.sander.TI] Error 1

Again, re-writing scripts is far less fun than sudo apt-get install csh and forgetting about it.

41 – 47 Moving The Built Amber10 and AmberTools

The final sequence of steps moves the $HOME/Documents-built amber10 into /opt (if you choose to), removes the build from your $HOME directory, and resets your PATH and AMBERHOME variables in .bashrc and .profile, thereby completing the build process.

And Finally…

If questions are raised, comments are thought of, speed-ups identified, etc., please either send me an email or post them here.  Our concern as computational chemists should be making predictions and interpreting data, not making compilation errors and interpreting error messages.
