
Archive for the 'fftw' Category

Compiling LAMMPS (3Apr13, But Likely Others) In Ubuntu 10.04 Part 1. Using MPICH2 And FFTW2 (And Ubuntu Notes On Installing Intel Fortran And C++ Composers XE for Linux)

Saturday, April 6th, 2013

I'll qualify this post by saying that (1) I have given up on Ubuntu 11.x and 12.x because they are consistently unstable on my hardware (so, if you have issues running this installation on those versions, I may not be of much help, although I suspect things should work), (2) I am starting from a fresh 32-bit Desktop Ubuntu 10.04 install (so I do not know what issues other software already on your Linux box might introduce if a problem comes up), and (3) the procedure exists because no Ubuntu binary is currently listed as available (as of 6 April 2013) on the LAMMPS website (lammps.sandia.gov/download.html#ubuntu). If (3) changes and an MPI-enabled binary becomes available, what's below will hopefully be unnecessary.

Building Trouble And Solutions

My initial "just unzip, untar, and make linux" attempt on a fresh 10.04 install produced the following error (reproduced here in the expectation that you found this page by typing one of the errors below into a search engine, so the error and the solution are in the same place). NOTE: I build all my programs in /opt for organizational purposes (so adjust paths accordingly):

user@machine:/opt/lammps-3Apr13/src$ sudo make linux

make[1]: Entering directory `/opt/lammps-3Apr13/src/Obj_linux'
icc -O -DLAMMPS_GZIP -DLAMMPS_JPEG  -DMPICH_SKIP_MPICXX  -DFFT_FFTW   -M ../write_restart.cpp > write_restart.d
/bin/sh: icc: not found
icc -O -DLAMMPS_GZIP -DLAMMPS_JPEG  -DMPICH_SKIP_MPICXX  -DFFT_FFTW   -M ../write_data.cpp > write_data.d
/bin/sh: icc: not found
icc -O -DLAMMPS_GZIP -DLAMMPS_JPEG  -DMPICH_SKIP_MPICXX  -DFFT_FFTW   -M ../verlet.cpp > verlet.d
/bin/sh: icc: not found
icc -O -DLAMMPS_GZIP -DLAMMPS_JPEG  -DMPICH_SKIP_MPICXX  -DFFT_FFTW   -M ../velocity.cpp > velocity.d
/bin/sh: icc: not found
...
icc -O -DLAMMPS_GZIP -DLAMMPS_JPEG  -DMPICH_SKIP_MPICXX  -DFFT_FFTW   -M ../angle_charmm.cpp > angle_charmm.d
/bin/sh: icc: not found
icc -O  -DLAMMPS_GZIP -DLAMMPS_JPEG  -DMPICH_SKIP_MPICXX  -DFFT_FFTW   -c ../angle_charmm.cpp
make[1]: icc: Command not found
make[1]: *** [angle_charmm.o] Error 127
make[1]: Leaving directory `/opt/lammps-3Apr13/src/Obj_linux'
make: *** [linux] Error 2

Obviously, problem Number 1 is the lack of the Intel C/C++ compiler (icc). My solution was to download the non-commercial version of the Intel C++ Composer XE for Linux (and then the Intel Fortran Composer XE for Linux because, well, why not?), currently available from software.intel.com/en-us/non-commercial-software-development (this link may change, so search for "intel c++ noncommercial" in that event).

Unzipping, untarring, and running ./install.sh will, on a fresh Ubuntu install, produce errors during the prerequisite check that g++ and a proper Java runtime environment are not available on the computer. This is easily solved before the Intel installs with the following (one of which is needed for LAMMPS anyway). I specifically chose OpenJDK 6, but another Java 6 or 7 runtime should also do.

sudo apt-get install build-essential openjdk-6-*

After these installs, both Intel Composers should install just fine. If you're installing LAMMPS into a directory that you, the user, have access to, then adding /opt/intel/bin to your PATH will spare you any compiler errors related to location. If you attempt to install LAMMPS into a directory you do not have access to, you have to run the build with sudo, which likely does not have /opt/intel/bin in the root PATH; in that case the simplest solution is to add a symbolic link for icc into /usr/local/bin.

sudo ln -s /opt/intel/bin/icc /usr/local/bin
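
If you go the PATH route mentioned above instead (i.e., you're building LAMMPS in a directory you own and don't need sudo), a minimal sketch, assuming the Composers installed into the default /opt/intel/bin location:

echo 'export PATH=$PATH:/opt/intel/bin' >> ~/.bashrc
source ~/.bashrc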

With icc accessible, running another sudo make linux produces the following errors:

sudo make clean-all
sudo make linux

make[1]: Entering directory `/opt/lammps-3Apr13/src/Obj_linux'
icc -O -DLAMMPS_GZIP -DLAMMPS_JPEG  -DMPICH_SKIP_MPICXX  -DFFT_FFTW   -M ../write_restart.cpp > write_restart.d
../write_restart.cpp(15): catastrophic error: cannot open source file "mpi.h"
  #include "mpi.h"
                  ^

icc -O -DLAMMPS_GZIP -DLAMMPS_JPEG  -DMPICH_SKIP_MPICXX  -DFFT_FFTW   -M ../write_data.cpp > write_data.d
../write_data.cpp(15): catastrophic error: cannot open source file "mpi.h"
  #include "mpi.h"
                  ^
...

icc -O  -DLAMMPS_GZIP -DLAMMPS_JPEG  -DMPICH_SKIP_MPICXX  -DFFT_FFTW   -c ../angle_charmm.cpp
../pointers.h(25): catastrophic error: cannot open source file "mpi.h"
  #include "mpi.h"
                  ^

compilation aborted for ../angle_charmm.cpp (code 4)
make[1]: *** [angle_charmm.o] Error 4
make[1]: Leaving directory `/opt/lammps-3Apr13/src/Obj_linux'
make: *** [linux] Error 2

The problem is now an MPI one, requiring the installation of MPICH (as recommended in the LAMMPS linux Makefile). Reading the Makefile (you mean you didn't check this first?) also indicates the need for FFTW2. We can install all of the needed files in one shot with apt-get (I include build-essential here for completeness, as you either did or didn't install it while solving the icc problem above).

sudo apt-get install build-essential mpich-bin libmpich1.0-dev mpi-doc fftw2 fftw-dev libxaw7-dev libmpich2-dev

This still does not solve the mpi.h issue above, which is a problem with the settings in Makefile.linux (which, apparently, are not Ubuntu-compatible). The solution, and this is why this is Part 1, is to change Makefile.linux so it uses Ubuntu's MPICH compiler wrapper (mpicxx) instead of icc (I will attempt to get it working with icc next). The tweak to my Makefile.linux is provided below (an amalgam of several other Makefile settings). Comment out or delete what's there and add the sections without the "#" to the file:

#######################################################################
## My modified settings are below:

CC = mpicxx
CCFLAGS = -O -DFFT_FFTW -DLAMMPS_GZIP -DMPICH_IGNORE_CXX_SEEK
DEPFLAGS =	-M

LINK = $(CC)
LINKFLAGS = -O
SIZE = size

USRLIB = -lfftw

ARCHIVE =	ar
ARFLAGS =	-rc

#######################################################################
## Comment the following out or delete them:

# CC =		icc
# CCFLAGS =	-O
# SHFLAGS =	-fPIC
# DEPFLAGS =	-M

# LINK =		icc
# LINKFLAGS =	-O
# LIB =           -lstdc++
# SIZE =		size

# ARCHIVE =	ar
# ARFLAGS =	-rc
# SHLIBFLAGS =	-shared

I will say that, in some of my searches, I found references to adding -lmpi to USRLIB. If this is included in Makefile.linux, you will get the following error when linking against MPICH2:

/usr/bin/ld: cannot find -lmpl
collect2: ld returned 1 exit status
make[1]: *** [../lmp_linux] Error 1
make[1]: Leaving directory `/opt/lammps-3Apr13/src/Obj_linux'
make: *** [linux] Error 2

So, don’t include -lmpi.

With the above Makefile modifications, building of LAMMPS seems to go quite well until the following error is produced:

mpicxx -O -DFFT_FFTW -DLAMMPS_GZIP -DMPICH_IGNORE_CXX_SEEK  -DLAMMPS_GZIP -DLAMMPS_JPEG  -DMPICH_SKIP_MPICXX  -DFFT_FFTW   -c ../dump_dcd.cpp
mpicxx -O -DFFT_FFTW -DLAMMPS_GZIP -DMPICH_IGNORE_CXX_SEEK  -DLAMMPS_GZIP -DLAMMPS_JPEG  -DMPICH_SKIP_MPICXX  -DFFT_FFTW   -c ../dump_image.cpp
../dump_image.cpp:32:21: error: jpeglib.h: No such file or directory
../dump_image.cpp: In member function ‘virtual int LAMMPS_NS::DumpImage::modify_param(int, char**)':
../dump_image.cpp:904: warning: suggest parentheses around assignment used as truth value
../dump_image.cpp:911: warning: suggest parentheses around assignment used as truth value
../dump_image.cpp:964: warning: suggest parentheses around assignment used as truth value
../dump_image.cpp:971: warning: suggest parentheses around assignment used as truth value
../dump_image.cpp: In member function ‘void LAMMPS_NS::DumpImage::create_image()':
../dump_image.cpp:611: warning: ‘diameter' may be used uninitialized in this function
../dump_image.cpp:612: warning: ‘color' may be used uninitialized in this function
../dump_image.cpp:612: warning: ‘color1' may be used uninitialized in this function
../dump_image.cpp:612: warning: ‘color2' may be used uninitialized in this function
make[1]: *** [dump_image.o] Error 1
make[1]: Leaving directory `/opt/lammps-3Apr13/src/Obj_linux'
make: *** [linux] Error 2

The jpeglib.h error can likely be solved by installing only one of the libraries below, but I didn't bother to identify which one, instead indiscriminately installing a whole set based on the recommendation of dev.xonotic.org/projects/xonotic/wiki/Repository_Access (found via forums.xonotic.org/showthread.php?tid=1252 – I did remove some packages from that list that I'm sure were not needed):

sudo apt-get install libxxf86dga-dev libxcb-xf86dri0-dev libxpm-dev libxxf86vm-dev libsdl1.2-dev libsdl-image1.2-dev libclalsadrv-dev libasound2-dev libxext-dev
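
In hindsight, the header in question almost certainly comes from the libjpeg development package, so the following alone may be enough (untested here; on 10.04 the package should be libjpeg62-dev, or the libjpeg-dev metapackage that pulls it in):

sudo apt-get install libjpeg62-dev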

Finally, a successful build!

make[1]: Entering directory `/opt/lammps-3Apr13/src/Obj_linux'
mpicxx -O -DFFT_FFTW -DLAMMPS_GZIP -DMPICH_IGNORE_CXX_SEEK -DLAMMPS_GZIP -DLAMMPS_JPEG  -DMPICH_SKIP_MPICXX  -DFFT_FFTW   -M ../write_restart.cpp > write_restart.d
mpicxx -O -DFFT_FFTW -DLAMMPS_GZIP -DMPICH_IGNORE_CXX_SEEK -DLAMMPS_GZIP -DLAMMPS_JPEG  -DMPICH_SKIP_MPICXX  -DFFT_FFTW   -M ../write_data.cpp > write_data.d
mpicxx -O -DFFT_FFTW -DLAMMPS_GZIP -DMPICH_IGNORE_CXX_SEEK -DLAMMPS_GZIP -DLAMMPS_JPEG  -DMPICH_SKIP_MPICXX  -DFFT_FFTW   -M ../verlet.cpp > verlet.d
mpicxx -O -DFFT_FFTW -DLAMMPS_GZIP -DMPICH_IGNORE_CXX_SEEK -DLAMMPS_GZIP -DLAMMPS_JPEG  -DMPICH_SKIP_MPICXX  -DFFT_FFTW   -M ../velocity.cpp > velocity.d
mpicxx -O -DFFT_FFTW -DLAMMPS_GZIP -DMPICH_IGNORE_CXX_SEEK -DLAMMPS_GZIP -DLAMMPS_JPEG  -DMPICH_SKIP_MPICXX  -DFFT_FFTW   -M ../variable.cpp > variable.d
…
write_data.o write_restart.o  -lmpich -lpthread -lfftw -ljpeg   -o ../lmp_linux
size ../lmp_linux
   text	   data	    bss	    dec	    hex	filename
3654398	   6928	    264	3661590	 37df16	../lmp_linux
make[1]: Leaving directory `/opt/lammps-3Apr13/src/Obj_linux'

At this point, you can move the lmp_linux executable anywhere (or, as I find myself doing, just keep it in the lammps-DATE/src folder for when you perform subsequent builds).

Running An MPI Calculation

If you’ve never run through an MPI calculation, you’ll get the following errors when you first try to run lmp_linux:

cannot connect to local mpd (/tmp/mpd2.console_user); possible causes:
  1. no mpd is running on this host
  2. an mpd is running but was started without a "console" (-n option)
In case 1, you can start an mpd on this host with:
    mpd &
and you will be able to run jobs just on this host.
For more details on starting mpds on a set of hosts, see
the MPICH2 Installation Guide.

**********

user@machine:/opt/lammps-3Apr13/src$ mpd &

configuration file /home/damianallis/.mpd.conf not found
A file named .mpd.conf file must be present in the user's home
directory (/etc/mpd.conf if root) with read and write access
only for the user, and must contain at least a line with:
MPD_SECRETWORD=
One way to safely create this file is to do the following:
  cd $HOME
  touch .mpd.conf
  chmod 600 .mpd.conf
and then use an editor to insert a line like
  MPD_SECRETWORD=mr45-j9z
into the file.  (Of course use some other secret word than mr45-j9z.)

[1]+  Exit 255                mpd

Simply follow the instructions to put a .mpd.conf file in your ~/ folder, start mpd with "mpd &", and run LAMMPS with the command below (which uses 2 cores (-np 2)).
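
In condensed form (a sketch of the steps spelled out in the mpd output above; substitute your own secret word):

cd $HOME
touch .mpd.conf
chmod 600 .mpd.conf
echo "MPD_SECRETWORD=your_secret_word" >> .mpd.conf
mpd &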

NOTE: My LAMMPS test (being brand new to it) was to run the first test available from icme.hpc.msstate.edu/mediawiki/index.php/LAMMPS_Help. Following the instructions therein (DOWNLOAD: Al99.eam.alloy (780 KB) and calc_fcc.in (1 KB)):

sudo mpirun -np 2 ./lmp_linux < calc_fcc.in

This should produce the following output:

LAMMPS (3 Apr 2013)
Lattice spacing in x,y,z = 4 4 4
Created orthogonal box = (0 0 0) to (4 4 4)
  1 by 1 by 2 MPI processor grid
Lattice spacing in x,y,z = 4 4 4
Created 4 atoms
Replicating atoms ...
  orthogonal box = (0 0 0) to (4 4 4)
  1 by 1 by 2 MPI processor grid
  4 atoms
WARNING: Resetting reneighboring criteria during minimization (../min.cpp:173)
Setting up minimization ...
Memory usage per processor = 2.40372 Mbytes
Step PotEng Lx Ly Lz Press Pxx Pyy Pzz eatoms 
       0   -13.417787            4            4            4     29590.11     29590.11     29590.11     29590.11   -13.417787 
      10   -13.439104         4.04         4.04         4.04    5853.9553    5853.9553    5853.9553    5853.9553   -13.439104 
      14       -13.44         4.05         4.05         4.05     2.726913     2.726913     2.726913     2.726913       -13.44 
Loop time of 0.0287241 on 2 procs for 14 steps with 4 atoms

Minimization stats:
  Stopping criterion = linesearch alpha is zero
  Energy initial, next-to-last, final = 
        -13.4177872966     -13.4399999525     -13.4399999525
  Force two-norm initial, final = 3.54599 0.000335006
  Force max component initial, final = 3.54599 0.000335006
  Final line search alpha, max atom move = 0.0625 2.09379e-05
  Iterations, force evaluations = 14 19

Pair  time (%) = 0.00206506 (7.18931)
Neigh time (%) = 0 (0)
Comm  time (%) = 0.00481999 (16.7803)
Outpt time (%) = 0.000162005 (0.564006)
Other time (%) = 0.021677 (75.4664)

Nlocal:    2 ave 2 max 2 min
Histogram: 2 0 0 0 0 0 0 0 0 0
Nghost:    603 ave 603 max 603 min
Histogram: 2 0 0 0 0 0 0 0 0 0
Neighs:    140 ave 162 max 118 min
Histogram: 1 0 0 0 0 0 0 0 0 1

Total # of neighbors = 280
Ave neighs/atom = 70
Neighbor list builds = 0
Dangerous builds = 0
Total energy (eV) = -13.439999952539944061;
Number of atoms = 4;
Lattice constant (Angstoms) = 4.049999999999998046;
Cohesive energy (eV) = -3.3599999881349860154;
All done!

Amber 11 And AmberTools 1.5 In Ubuntu 10.04 LTS (And Related, Including A How-To For EOL 8.10)

Saturday, July 16th, 2011

Having successfully navigated serial and parallel Amber10 installs under Ubuntu 8.10, I am pleased to report that the process for Amber11 with OpenMPI (from apt-get, one doesn’t have to build from scratch) under Ubuntu 10.10 is seemingly much easier (and have it here so I don’t forget). There is a bit of persnicketiness to the order of the serial and parallel installs that must be kept track of (and I’m building in serial-to-parallel order), but the process is otherwise straightforward.

For organizational purposes, I’m building amber11 in my $HOME directory. This removes some of the PATH issues with sudo-ing aspects of the install (and can be moved into another directory after the build is complete).

1. apt-get Installs

The search for dependent programs and libraries is a long and involved one given how many programs I have installed. Therefore, instead of trying to find the minimal set of Amber-dependent installs for a successful build, I'm simply providing the list of everything I have on the test machine. As hard drives are cheap and Ubuntu will warn of conflicts, I recommend simply installing the list below and accepting the ~100 MB hit rather than hunting down the smallest apt-get set (yes, some of these are obviously not needed).

sudo apt-get install build-essential cmake doxygen freeglut3-dev g++-multilib gcc-multilib gettext gnuplot ia32-libs lib32asound2 lib32gcc1 lib32gcc1-dbg lib32gfortran3 lib32gomp1 lib32mudflap0 lib32ncurses5 lib32nss-mdns lib32z1 libavdevice52 libc6-dev-i386 libc6-i386 libfreeimage-dev libglew1.5-dev libopenal1 libopenexr-dev libpng12-dev libqt4-dev libssl-dev libstdc++6-4.3-dbg libstdc++6-4.3-dev libstdc++6-4.3-doc libxi-dev libxml-simple-perl libxmu-dev mercurial nfs-common nfs-kernel-server portmap python2.6-dev rpm ssh

The above said, there is a core set of installs that has to be there (according to the "official" Ubuntu Amber 11 install summary at ambermd.org/ubuntu.html). You could try working with only these first if you're in a diagnostic mood today:

sudo apt-get install bison csh flex fort77 g++ gcc gfortran libbz2-dev libnetcdf-dev libopenmpi-dev libxext-dev libxt-dev openmpi-bin patch tcsh xorg-dev zlib1g-dev

With that, we move on to the AmberTools 1.5 install.

2. AmberTools 1.5 (Serial)

The AmberTools build process below sets the PATH and AMBERHOME variables used by both it and Amber, then walks through patching and the build itself.

user@machine:~$ tar xjf AmberTools-1.5.tar.bz2 
user@machine:~$ cd amber11/
user@machine:~/amber11$ echo "export AMBERHOME=$PWD" >> ~/.bashrc
user@machine:~/amber11$ echo "export PATH=$PATH:$AMBERHOME/bin" >> ~/.bashrc
user@machine:~/amber11$ source ~/.bashrc
user@machine:~/amber11$ wget http://ambermd.org/bugfixes/AmberTools/1.5/bugfix.all
user@machine:~/amber11$ patch -p0 < bugfix.all
user@machine:~/amber11$ rm bugfix.all
user@machine:~/amber11$ cd AmberTools/src/
user@machine:~/amber11/AmberTools/src$ ./configure gnu
user@machine:~/amber11/AmberTools/src$ make install
user@machine:~/amber11/AmberTools/src$ cd

3. Amber 11 (Serial Install)

For the Amber build, attempting the parallel build without building the serial version first will produce the following error (which you may or may not be searching against in Google presently):

Warning: Deleted feature: PAUSE statement at (1)
cpp -traditional -P  -DBINTRAJ -DMPI    svbksb.f > _svbksb.f
mpif90 -c -O3 -mtune=generic -ffree-form   -o svbksb.o _svbksb.f
cpp -traditional -P  -DBINTRAJ -DMPI    pythag.f > _pythag.f
mpif90 -c -O3 -mtune=generic -ffree-form   -o pythag.o _pythag.f
Error: a serial version of libFpbsa.a must be built before parallel build.
make[2]: *** [libFpbsa.parallel] Error 2
make[2]: Leaving directory `/home/genomebio/amber11/AmberTools/src/pbsa'
make[1]: *** [libpbsa] Error 2
make[1]: Leaving directory `/home/genomebio/amber11/src/sander'
make: *** [parallel] Error 2

The "gnu" argument is also important, as there appears to be some kind of Fortran-specific formatting issue with some files in the non-gnu build that produces the following error if you just blindly run ./configure:

Error: Unclassifiable statement at (1)
constants.f:39.1:

double precision, parameter :: two       = 2.0d0                        
 1
Error: Non-numeric character in statement label at (1)
constants.f:39.1:

double precision, parameter :: two       = 2.0d0                        
 1
Error: Unclassifiable statement at (1)
constants.f:40.1:

double precision, parameter :: three     = 3.0d0                        
 1
Error: Non-numeric character in statement label at (1)
Fatal Error: Error count reached limit of 25.
make[1]: *** [constants.o] Error 1
make[1]: Leaving directory `/home/user/amber11/src/sander'
make: *** [parallel] Error 2

With that, the serial build is below, including bug fixes.

user@machine:~$ tar xfj Amber11.tar.bz2
user@machine:~$ cd $AMBERHOME
user@machine:~/amber11$ wget http://ambermd.org/bugfixes/11.0/bugfix.all
user@machine:~/amber11$ wget http://ambermd.org/bugfixes/11.0/apply_bugfix.x
user@machine:~/amber11$ chmod +x ./apply_bugfix.x
user@machine:~/amber11$ ./apply_bugfix.x bugfix.all
user@machine:~/amber11$ cd AmberTools/src/
user@machine:~/amber11/AmberTools/src$ ./configure gnu
user@machine:~/amber11/AmberTools/src$ cd $AMBERHOME
user@machine:~/amber11$ ./AT15_Amber11.py 
user@machine:~/amber11$ cd src/
user@machine:~/amber11/src$ make serial

4. Amber 11 (Parallel)

Hopefully the serial build ran without problems. The parallel install works just as simply, provided you run the process in the order below. The key steps are the first "make clean", the new ./configure with -mpi, the re-run of ./AT15_Amber11.py, and the second "make clean".

user@machine:~/amber11/src$ cd $AMBERHOME
user@machine:~/amber11$ cd AmberTools/src/
user@machine:~/amber11/AmberTools/src$ make clean
user@machine:~/amber11/AmberTools/src$ ./configure -mpi gnu
user@machine:~/amber11/AmberTools/src$ cd $AMBERHOME
user@machine:~/amber11$ ./AT15_Amber11.py 
user@machine:~/amber11$ cd src/
user@machine:~/amber11/src$ make clean
user@machine:~/amber11/src$ make parallel

5. Amber 11 (Tests)

Finally, testing the install. Nothing specific needs to be done as far as the code is concerned; simply run the tests.

user@machine:~/amber11/src$ cd ..
user@machine:~/amber11$ cd test/
user@machine:~/amber11/test$ make -f Makefile
user@machine:~/amber11/test$ 

From the out-of-the-box installation above, my test results complete as follows:

365 file comparisons passed
15 file comparisons failed
0 tests experienced errors
Test log file saved as logs/test_amber_serial/2011-07-14_11-19-47.log
Test diffs file saved as logs/test_amber_serial/2011-07-14_11-19-47.diff

The failed tests include those already noted by the Amber developers as expected failures. That list is provided at the end of the AT15_Amber11.py output:

NOTE: Because PBSA has changed since Amber 11 was released, some
tests are known to fail and others are known to quit in error. These
can be safely ignored.

Tests that error: Tests in $AMBERHOME/test/sander_pbsa_frc
   Run.argasp.min    Run.dadt.min      Run.dgdc.min
   Run.lysasp.min    Run.polyALA.min   Run.polyAT.min
   Run.argasp.min    Run.dadt.min      Run.dgdc.min
   Run.lysasp.min    Run.polyALA.min   Run.polyAT.min
   Run.argasp.min    Run.dadt.min      Run.dgdc.min
   Run.lysasp.min    Run.polyALA.min   Run.polyAT.min

Tests that produce possible FAILUREs:
   cd sander_pbsa_ipb2   && ./Run.110D.min
   cd sander_pbsa_lpb    && ./Run.lsolver.min (only some of them fail here)
   cd sander_pbsa_tsr    && ./Run.tsrb.min
   cd sander_pbsa_decres && ./Run.pbsa_decres
   mm_pbsa.pl tests 02, 03, and 05

6. Quick Summary

For ease of copy-and-pasting, the command list is below:

apt-get

sudo apt-get install build-essential cmake doxygen freeglut3-dev g++-multilib gcc-multilib gettext gnuplot ia32-libs lib32asound2 lib32gcc1 lib32gcc1-dbg lib32gfortran3 lib32gomp1 lib32mudflap0 lib32ncurses5 lib32nss-mdns lib32z1 libavdevice52 libc6-dev-i386 libc6-i386 libfreeimage-dev libglew1.5-dev libopenal1 libopenexr-dev libpng12-dev libqt4-dev libssl-dev libstdc++6-4.3-dbg libstdc++6-4.3-dev libstdc++6-4.3-doc libxi-dev libxml-simple-perl libxmu-dev mercurial nfs-common nfs-kernel-server portmap python2.6-dev rpm ssh

sudo apt-get install bison csh flex fort77 g++ gcc gfortran libbz2-dev libnetcdf-dev libopenmpi-dev libxext-dev libxt-dev openmpi-bin patch tcsh xorg-dev zlib1g-dev

AmberTools

tar xjf AmberTools-1.5.tar.bz2 
cd amber11/
echo "export AMBERHOME=$PWD" >> ~/.bashrc
echo "export PATH=$PATH:$AMBERHOME/bin" >> ~/.bashrc
source ~/.bashrc
wget http://ambermd.org/bugfixes/AmberTools/1.5/bugfix.all
patch -p0 < bugfix.all
rm bugfix.all
cd AmberTools/src/
./configure gnu
make install
cd

Amber 11 (Serial)

tar xfj Amber11.tar.bz2
cd $AMBERHOME
wget http://ambermd.org/bugfixes/11.0/bugfix.all
wget http://ambermd.org/bugfixes/11.0/apply_bugfix.x
chmod +x ./apply_bugfix.x
./apply_bugfix.x bugfix.all
cd AmberTools/src/
./configure gnu
cd $AMBERHOME
./AT15_Amber11.py 
cd src/
make serial

Amber 11 (Parallel)

cd $AMBERHOME
cd AmberTools/src/
make clean
./configure -mpi gnu
cd $AMBERHOME
./AT15_Amber11.py 
cd src/
make clean
make parallel

Amber Tests

cd ..
cd test/
make -f Makefile

7. And Furthermore…

I tried the above on an old Linux box running Intrepid Ibex (8.10), which counts as an End-Of-Life (obsolete) release. All of the apt-get installs will work despite 8.10 no longer existing in the standard package locations, but you first have to make the following addition to /etc/apt/sources.list.

sudo pico /etc/apt/sources.list

Then copy-and-paste the following (all taken from help.ubuntu.com/community/EOLUpgrades/Intrepid):

## EOL upgrade sources.list
# Required
deb http://old-releases.ubuntu.com/ubuntu/ intrepid main restricted universe multiverse
deb http://old-releases.ubuntu.com/ubuntu/ intrepid-updates main restricted universe multiverse
deb http://old-releases.ubuntu.com/ubuntu/ intrepid-security main restricted universe multiverse
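
After saving the modified sources.list, refresh the package indexes so apt-get can see the old-releases repositories, then run the installs above:

sudo apt-get update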

Compiling Single-Precision And Double-Precision GROMACS 3.3.3 With OpenMPI 1.2.6 Under OSX 10.5 (Leopard)

Thursday, May 1st, 2008

The following is a cookbook for compiling single-precision and double-precision GROMACS 3.3.3 in OSX 10.5 (specifically on a MacBook Pro, but most likely general to all things running 10.5) with OpenMPI 1.2.6. My hope is that this saves someone several hours of unnecessary work trying to overcome an otherwise unknown incompatibility between MPICH2 and GROMACS 3.3.3 in Leopard (10.5).

The content of this page is the result of several unsuccessful hours of trying to build MPI-enabled GROMACS with MPICH2 1.0.6 and 1.0.7. MPICH2 compiles just fine with standard options, including successful testing of mpd and mpdtrace. GROMACS 3.3.3 compiles just fine with standard options for both single-precision and double-precision. Compiling GROMACS with MPI support, however, yields an mdrun that fails in the following manner:

[host:XXXX1] *** Process received signal ***
[host:XXXX1] Signal: Segmentation fault (11)
[host:XXXX1] Signal code: Address not mapped (1)
[host:XXXX1] Failing at address: 0xffffff94
[ 1] [0x00000000, 0xffffff94] (FP-)
[host:XXXX2] *** Process received signal ***
[host:XXXX2] Signal: Bus error (10)
[host:XXXX2] Signal code: (2)
[host:XXXX2] Failing at address: 0x2
[host:XXXX1] *** End of error message ***
[ 1] [0x00000000, 0x00000002] (FP-)
[marlin:XXXX2] *** End of error message ***

This problem, it turns out, is remedied by using OpenMPI 1.2.6. That is not a proper diagnosis of the MPICH2/GROMACS problem, but my primary concern is getting MPI working in order to use both cores on my MacBook Pro for my own calculations, not fixing someone else's software bugs (when there's an easier fix, anyway).

Installation Procedure

You’ll need the following three files (relevant versions and links are subject to change!):

fftw-3.1.2, http://www.fftw.org/

According to the website:

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions, of arbitrary input size, and of both real and complex data (as well as of even/odd data, i.e. the discrete cosine/sine transforms or DCT/DST).

FFTW compiles double-precision libraries by default. You’ll need to build single-precision libraries for the single-precision GROMACS install.

gromacs-3.3.3, http://www.gromacs.org/

The newest version. Compilation of version 3.3.2 leads to various problems under OSX 10.5 (undiagnosed here, but certainly reproducible).

openmpi-1.2.6, http://www.open-mpi.org/

This is the MPICH2 replacement (meaning I’ve used MPICH2 for other programs). Again, I’m not playing favorites on a version-to-version level. On my MacBook Pro running OSX 10.5, OpenMPI does what I need, MPICH2 doesn’t.

GROMACS uses MPI to distribute processes between processors. If you don't have GROMACS compiled with MPI, you don't have multi-processor calculations (simple enough). That is to say, Activity Monitor reads 100% for one core while the other core sits at a residual few percent (this was a point of confusion in a few discussions, in case someone wonders why I would bother bringing such a point up).

Associated Notes On Compilation

A few important notes to be aware of for the compilation.

XCode 3.0, http://developer.apple.com/tools/xcode/

If you've not installed XCode 3.0 yet, you should do so in order to get the necessary compilers and libraries onto your machine. XCode is the official programmer's toolbox under OSX and installs far more than you'll need (but it never hurts). Note that this is a BIG download (1.1 GB) but a far less taxing route to getting all of the necessary files installed than relying on (many, many) individual downloads.

Root Access

Running “make install” will install files into core directories that are inaccessible to typical OSX User accounts. Access to these directories (such as /usr/local and /usr/local/lib) is granted only to the root user in OSX, which is not activated by default (and a good thing, too). To turn on root access, go to:

Macintosh HD > Applications > Utilities > Directory Utility

In Directory Utility, go to

Edit > Enable Root User

At build time, you’ll want to be logged in as the root user (see below).

No Spaces In Folder Names

This goof also took some time to figure out. Regardless of where you start the program build processes from, you need to make sure your directory names have no spaces between words.

>& FILENAME.txt

>& FILENAME.txt is used to direct output to FILENAME.txt, providing a way to follow each step in the process for error checking and the like.

Installation Process

How to follow the text below:

1. A step in the process (one per program or other significant activity)

1a. Text following a numeric-alphabetical label like this is what needs to be typed at the Terminal prompt. Descriptive/explanatory text follows each step.

The installation is assumed to be set up as follows: the GROMACS, OpenMPI, and FFTW .tar files (I suspect downloading will automatically unzip them) exist in a directory on your Desktop (say, INSTALL_DIRECTORY_NAME). Here’s the compilation procedure:

1. Open a Terminal (Macintosh HD > Applications > Utilities > Terminal). If you don’t have this application on your Dock, I recommend putting it there. You’ll certainly be getting some use out of it.

2. The Terminal window defaults to your User directory. First we change our user to root (for full access), then cd into the install directory.

2a. su

(put in root password at prompt)

at prompt, type:

2b. csh
2c. cd Desktop/INSTALL_DIRECTORY_NAME

3. Untar the GROMACS, OpenMPI, and FFTW .tar downloads (again, I suspect they will have already been unzipped as part of the download). You'll end up with three new directories in this folder.

3a. tar -xvf fftw-3.1.2.tar
3b. tar -xvf gromacs-3.3.3.tar
3c. tar -xvf openmpi-1.2.6.tar

4. Build FFTW

We’ll compile fftw-3.1.2 first. For the single-precision build, the steps are:

4a. cd fftw-3.1.2
4b. ./configure --enable-float --enable-threads >& configure_fftw_single.txt
4c. make >& make_fftw_single.txt
4d. make install >& make_install_fftw_single.txt
4e. make distclean >& make_distclean_fftw_single.txt
4f. cd ..

This series of steps configures (configure), builds (make), installs (make install), and cleans up (make distclean). The ">&" directs output to the .txt files for diagnosis (all of which you can leave off). To follow a step's progress, open a new Terminal window, cd to the directory of interest, and type:

4g. tail -f FILENAME_OF_INTEREST.txt

To build the double-precision version (the FFTW default, and what you're better off building anyway), the steps are:

4h. cd fftw-3.1.2
4i. ./configure --enable-threads >& configure_fftw_double.txt
4j. make >& make_fftw_double.txt
4k. make install >& install_fftw_double.txt
4l. make distclean >& distclean_fftw_double.txt
4m. cd ..

5. Build OpenMPI

This build could not be easier. It will place the OpenMPI 1.2.6 executables into /usr/local/openmpi (in the interest of organization).

5a. cd openmpi-1.2.6
5b. ./configure --prefix=/usr/local/openmpi >& configure_openmpi.txt
5c. make >& make_openmpi.txt
5d. make install >& install_openmpi.txt
5e. make distclean >& distclean_openmpi.txt
5f. cd ..
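
Because OpenMPI was installed into the non-default /usr/local/openmpi prefix, its executables (mpicc, mpic++, mpirun) may not be found automatically by the MPI-enabled GROMACS configures below. A sketch of one fix, assuming the csh shell started in step 2b, is to put the OpenMPI bin directory at the front of your PATH before configuring:

setenv PATH /usr/local/openmpi/bin:$PATH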

6. Build GROMACS

The following four builds will yield all four possible combinations of GROMACS (single-precision and double-precision) and MPI (without and with). Do you need four builds of GROMACS? Definitely not. If you’re reading this page, you’re probably interested only in double-precision and MPI (the last of the four). The four builds listed below were performed for testing purposes only. I only ever use the double-precision MPI version when I’m doing my own work.

Note: Each build included the generation of links to executables that go into /usr/local/bin. Accordingly, all four versions are accessible from the Terminal window without having to specify directories (and each version has individually named files as defined by --program-suffix=).

GROMACS Single-Precision

This will install a single-precision GROMACS build into /usr/local/gromacs333_single with no specific file suffix.

6a. cd gromacs-3.3.3
6b. ./configure --prefix=/usr/local/gromacs333_single >& gromacs_single_configure.txt
6c. make >& gromacs_single_make.txt
6d. make install >& gromacs_single_install.txt
6e. make links >& gromacs_single_links.txt
6f. make distclean >& gromacs_single_distclean.txt
6g. cd ..

GROMACS Single-Precision With OpenMPI

This will install a single-precision MPI-GROMACS build into /usr/local/gromacs333_single_mpi with the file suffix _mpi.

6h. cd gromacs-3.3.3
6i. ./configure --enable-mpi --program-suffix=_mpi --prefix=/usr/local/gromacs333_single_mpi >& gromacs_single_mpi_configure.txt
6j. make >& gromacs_single_mpi_make.txt
6k. make install >& gromacs_single_mpi_install.txt
6l. make links >& gromacs_single_mpi_links.txt
6m. make distclean >& gromacs_single_mpi_distclean.txt
6n. cd ..

GROMACS Double-Precision

This will install a double-precision GROMACS build into /usr/local/gromacs333_double with the file suffix _d.

6o. cd gromacs-3.3.3
6p. ./configure --program-suffix=_d --prefix=/usr/local/gromacs333_double --enable-double >& gromacs_double_configure.txt
6q. make >& gromacs_double_make.txt
6r. make install >& gromacs_double_install.txt
6s. make links >& gromacs_double_links.txt
6t. make distclean >& gromacs_double_distclean.txt
6u. cd ..

GROMACS Double-Precision With OpenMPI

This will install a double-precision MPI-GROMACS build into /usr/local/gromacs333_double_mpi with the file suffix _mpi_d.

6v. cd gromacs-3.3.3
6w. ./configure --enable-mpi --program-suffix=_mpi_d --prefix=/usr/local/gromacs333_double_mpi --enable-double >& gromacs_double_mpi_configure.txt
6x. make >& gromacs_double_mpi_make.txt
6y. make install >& gromacs_double_mpi_install.txt
6z. make links >& gromacs_double_mpi_links.txt
6aa. make distclean >& gromacs_double_distclean_mpi.txt
6ab. cd ..

Running MPI-GROMACS Calculations (Non-specific)

Running MPI-GROMACS is straightforward, requiring only one additional step and a few changes to how you use grompp and mdrun.

a. mpd &
b. grompp_mpi_d -np 2

-np 2 is added to tell grompp that two processors are being used. mdrun expects the same number of processors as was specified in grompp.

c. mpirun -np 2 /usr/local/gromacs333_double_mpi/bin/mdrun_mpi_d -np 2

mpirun first takes the number of processors to use (-np 2) and then requires the full path to the mdrun executable (here, /usr/local/gromacs333_double_mpi/bin/mdrun_mpi_d). mdrun itself requires the number of processors as well (the trailing -np 2), matching the specification in grompp.

And Finally…

Like all first draft cookbooks, this might be missing some important spices. If, by any chance, you found this page and have recommendations to improve speed, stability, organization, etc., please drop a line and I’ll keep track of modifications.

en.wikipedia.org/wiki/Single_precision
en.wikipedia.org/wiki/Double_precision
www.gromacs.org
www.apple.com/macosx/
en.wikipedia.org/wiki/Mac_OS_X_v10.5
www.apple.com/macbookpro/
www.open-mpi.org/
www.mcs.anl.gov/research/projects/mpich2/
www.gromacs.org/documentation/reference/online/mdrun.html
www-unix.mcs.anl.gov/mpi/
www.fftw.org
en.wikipedia.org/wiki/Activity_monitor
developer.apple.com/tools/xcode/
en.wikipedia.org/wiki/NetInfo_Manager
en.wikipedia.org/wiki/Terminal_(application)
en.wikipedia.org/wiki/Dock_(computing)
en.wikipedia.org/wiki/Tar_file
www.gromacs.org/documentation/reference/online/grompp.html
