Abinit 6.8.1 In Parallel With OpenMPI 1.4.1 In Ubuntu 10.04.2 LTS (And Related)

Monday, July 18th, 2011

It has been a banner week for Ubuntu installations.

The installation of Abinit 5.6.5 with OpenMPI 1.3.1 (previously reported at www.somewhereville.com/?p=384) wasn’t bad, but several games had to be played at the time to make everything compile and run correctly. I’m pleased to report that Abinit 6.8.1 and OpenMPI 1.4.1 play together much more nicely: the procedure is simplified considerably over the previous installation guide by using the apt-get version of OpenMPI 1.4.1. A bit of option calling in the configure step is still needed (the errors you’ll see if you skip it are included below).

0. For the Antsy Copy+Paste Crowd

Commands below.

sudo apt-get install autoconf bison build-essential cmake csh doxygen flex fort77 freeglut3-dev g++ g++-multilib gcc gcc-multilib gettext gfortran gnuplot ia32-libs lib32asound2 lib32gcc1 lib32gcc1-dbg lib32gfortran3 lib32gomp1 lib32mudflap0 lib32ncurses5 lib32nss-mdns lib32z1 libavdevice52 libblas3gf libbz2-dev libc6-dev-i386 libc6-i386 libfreeimage-dev libglew1.5-dev liblapack3gf libnetcdf-dev libopenal1 libopenexr-dev libopenmpi-dev libpng12-dev libqt4-dev libssl-dev libstdc++6-4.3-dbg libstdc++6-4.3-dev libstdc++6-4.3-doc libxext-dev libxi-dev libxml-simple-perl libxmu-dev libxt-dev mercurial netcdf-bin nfs-common nfs-kernel-server openmpi-bin patch portmap python2.6-dev rpm ssh tcsh xorg-dev zlib1g-dev libblas-doc libblas-dev liblapack-doc liblapack-dev

sudo aptitude update

sudo aptitude upgrade

sudo shutdown -r now

pico .profile

INSERT: LD_LIBRARY_PATH="/usr/lib64/openmpi/lib/"

source .profile

gunzip abinit-6.8.1.tar.gz 

tar xvf abinit-6.8.1.tar 

cd abinit-6.8.1/

./configure --enable-mpi --with-mpi-level=2 FC=mpif90 --with-linalg-libs="-llapack -lblas"

make mj4

sudo make install

For the “why?” crowd, a bit more detail on each step is provided below.

1. apt-get Installs

As has been standard procedure in all of my posts like this, I blindly install a “core set” of packages I’ve needed for various other build processes and am largely uninterested in identifying exactly what each program I build requires (hard drives are cheap). The list below includes plenty you likely don’t need for an Abinit installation; feel free to send me a shortened list if you diagnose it.

The list below includes the OpenMPI installation and all of the build tools needed by Abinit. I finish with an aptitude update/upgrade (which you can skip). If you’re using a 32-bit version of Ubuntu, you’ll get plenty of errors about unavailable lib32 packages; simply delete the “lib32___” entries from the list.

user@machine:~/abinit-6.8.1$ sudo apt-get install autoconf bison build-essential cmake csh doxygen flex fort77 freeglut3-dev g++ g++-multilib gcc gcc-multilib gettext gfortran gnuplot ia32-libs lib32asound2 lib32gcc1 lib32gcc1-dbg lib32gfortran3 lib32gomp1 lib32mudflap0 lib32ncurses5 lib32nss-mdns lib32z1 libavdevice52 libblas3gf libbz2-dev libc6-dev-i386 libc6-i386 libfreeimage-dev libglew1.5-dev liblapack3gf libnetcdf-dev libopenal1 libopenexr-dev libopenmpi-dev libpng12-dev libqt4-dev libssl-dev libstdc++6-4.3-dbg libstdc++6-4.3-dev libstdc++6-4.3-doc libxext-dev libxi-dev libxml-simple-perl libxmu-dev libxt-dev mercurial netcdf-bin nfs-common nfs-kernel-server openmpi-bin patch portmap python2.6-dev rpm ssh tcsh xorg-dev zlib1g-dev libblas-doc libblas-dev liblapack-doc liblapack-dev
user@machine:~$ sudo aptitude update
user@machine:~$ sudo aptitude upgrade
user@machine:~$ sudo shutdown -r now
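On a 32-bit system, the offending packages can also be stripped from the list mechanically rather than by hand. A hedged sketch (the PKGS list is abbreviated here; the `^lib32`/`^ia32` patterns are my guess at what needs to go, based on the package names above):

```shell
# Sketch: filter lib32*/ia32* packages out of the install list on 32-bit Ubuntu.
# PKGS is abbreviated; paste the full list from above in practice.
PKGS="autoconf bison ia32-libs lib32asound2 lib32gcc1 gfortran openmpi-bin"
FILTERED=$(printf '%s\n' $PKGS | grep -v -e '^lib32' -e '^ia32' | tr '\n' ' ')
echo "$FILTERED"
# Then: sudo apt-get install $FILTERED
```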

2. LD_LIBRARY_PATH Addition To .profile

If you’re using a 32-bit Ubuntu install, change the “lib64” to simply “lib”.

user@machine:~$ pico .profile

Copy and paste the following into your .profile.

LD_LIBRARY_PATH="/usr/lib64/openmpi/lib/"

Ctrl+X, then Y to save.

user@machine:~$ source .profile
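Because .profile gets re-sourced on every login, the bare assignment above will happily stack duplicates if you edit and re-source repeatedly. A more defensive sketch of the same step (my own variant, not from the original guide) appends the path only when it isn’t already present:

```shell
# Sketch: append the OpenMPI lib directory to LD_LIBRARY_PATH only if it is
# not already there, so re-sourcing .profile stays harmless.
MPILIB="/usr/lib64/openmpi/lib"          # use /usr/lib/openmpi/lib on 32-bit
case ":${LD_LIBRARY_PATH:-}:" in
  *":${MPILIB}:"*) ;;                    # already present; do nothing
  *) export LD_LIBRARY_PATH="${MPILIB}${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}" ;;
esac
echo "$LD_LIBRARY_PATH"
```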

3. Installing Abinit 6.8.1

And the fun begins.

user@machine:~$ gunzip abinit-6.8.1.tar.gz 
user@machine:~$ tar xvf abinit-6.8.1.tar 
user@machine:~$ cd abinit-6.8.1/

NOTE: I went through several problematic trials before a successful parallel build. If you see the following after previous attempts…

configure: creating ./config.status
config.status: error: cannot find input file: `config.dump.in'

… don’t diagnose, just delete the current directory and re-extract a fresh copy of the Abinit source.

My first simple ./configure run produced the following output:

DO NOT USE: user@machine:~/abinit-6.8.1$ ./configure

...

Summary of important options:

  * C compiler      : gnu version 4.4
  * Fortran compiler: gnu version 4.4
  * architecture    : unknown unknown (64 bits)

  * debugging       : basic
  * optimizations   : standard

  * MPI    enabled  : no
  * MPI-IO enabled  : no
  * GPU    enabled  : no (none)

  * TRIO   flavor = netcdf+etsf_io-fallback
  * TIMER  flavor = abinit (libs: ignored)
  * LINALG flavor = netlib-fallback (libs: internal)
  * FFT    flavor = none (libs: ignored)
  * MATH   flavor = none (libs: ignored)
  * DFT    flavor = libxc-fallback+atompaw-fallback+bigdft-fallback+wannier90-fallback

Configuration complete.
You may now type "make" to build ABINIT.
(or, on a SMP machine, "make mj4", or "make multi multi_nprocs=")

Note that MPI is not enabled. Changing the configure options to explicitly enable MPI and specify the compiler location…

DO NOT USE: user@machine:~/abinit-6.8.1$ ./configure --enable-mpi --with-mpi-prefix="/usr/bin"
 
Summary of important options:

  * C compiler      : gnu version 4.4
  * Fortran compiler: gnu version 4.4
  * architecture    : unknown unknown (64 bits)

  * debugging       : basic
  * optimizations   : standard

  * MPI    enabled  : yes
  * MPI-IO enabled  : yes
  * GPU    enabled  : no (none)

  * TRIO   flavor = netcdf+etsf_io-fallback
  * TIMER  flavor = abinit (libs: ignored)
  * LINALG flavor = netlib-fallback (libs: internal)
  * FFT    flavor = none (libs: ignored)
  * MATH   flavor = none (libs: ignored)
  * DFT    flavor = libxc-fallback+atompaw-fallback+bigdft-fallback+wannier90-fallback

Configuration complete.
You may now type "make" to build ABINIT.
(or, on a SMP machine, "make mj4", or "make multi multi_nprocs=")

… did show MPI enabled.

The thrill of ./configure victory was short-lived. My first attempt to make ended as follows:

DO NOT USE: user@machine:~/abinit-6.8.1$ make mj4

...

Error: Symbol 'xmpi_offset_kind' at (1) has no IMPLICIT type
m_xmpi.F90:1875.25:

 integer(XMPI_OFFSET_KIND),intent(inout) :: offset
                         1
Error: Symbol 'xmpi_offset_kind' at (1) has no IMPLICIT type
m_xmpi.F90:2002.25:

 integer(XMPI_OFFSET_KIND),intent(out) :: my_offpad
                         1
Error: Symbol 'xmpi_offset_kind' at (1) has no IMPLICIT type
m_xmpi.F90:2244.25:

 integer(XMPI_OFFSET_KIND),intent(in) :: offset
                         1
Error: Symbol 'xmpi_offset_kind' at (1) has no IMPLICIT type
Fatal Error: Error count reached limit of 25.
make[5]: *** [m_xmpi.o] Error 1
make[5]: Leaving directory `/home/user/abinit-6.8.1/src/12_hide_mpi'
make[4]: *** [all-recursive] Error 1
make[4]: Leaving directory `/home/user/abinit-6.8.1/src'
make[3]: *** [all-recursive] Error 1
make[3]: Leaving directory `/home/user/abinit-6.8.1'
make[2]: *** [all] Error 2
make[2]: Leaving directory `/home/user/abinit-6.8.1'
make[1]: *** [multi] Error 2
make[1]: Leaving directory `/home/user/abinit-6.8.1'
make: *** [mj4] Error 2

As usual, I include the error here because you’re likely searching against the same errors in your own build attempts. Your first instinct might be to force-specify the Fortran compiler (especially if the integer(XMPI_OFFSET_KIND) errors look, at first blush, like something specific to the GNU Fortran compiler choice). Since the code is Fortran 90, calling f77 won’t do you much good…

DO NOT USE: user@machine:~/abinit-6.8.1$ ./configure --enable-mpi="yes" FC=f77

...

checking whether f77 accepts -g... yes
checking whether we are using the GNU Fortran 77 compiler... yes
checking whether f77 accepts -g... yes
configure: setting up Fortran 90
checking for Fortran flag to compile .f90 files... unknown
configure: error: Fortran could not compile .f90 files
make[3]: *** [configure-stamp] Error 1
make[3]: Leaving directory `/home/user/abinit-6.8.1/plugins/netcdf'
make[2]: *** [package-ready] Error 2
make[2]: Leaving directory `/home/user/abinit-6.8.1/plugins/netcdf'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/home/user/abinit-6.8.1/plugins'
make: *** [multi] Error 2

Specifying gfortran might be your next attempt,

DO NOT USE: user@machine:~/abinit-6.8.1$ ./configure --enable-mpi="yes" FC=gfortran --with-mpi-prefix=1

...

configure: Initializing MPI support
configure: looking for MPI in 1
configure: error: use --with-mpi-prefix or set FC, not both

And then you may reach your moment of clarity: why are you calling a non-MPI Fortran compiler at all? Right! FC should point to the MPI wrapper, mpif90. The BLAS and LAPACK flags are late additions (they may or may not improve run speed).

user@machine:~/abinit-6.8.1$ ./configure --enable-mpi --with-mpi-level=2 FC=mpif90 --with-linalg-libs="-llapack -lblas"

 ...

 ==============================================================================
 === Overall startup                                                        ===
 ==============================================================================

checking build system type... x86_64-unknown-linux-gnu
checking host system type... x86_64-unknown-linux-gnu
checking target system type... x86_64-unknown-linux-gnu
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether make sets $(MAKE)... (cached) yes
checking whether ln -s works... yes
checking for a sed that does not truncate output... /bin/sed
checking for gawk... (cached) gawk
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
configure: not loading options (no config file available)

 ==============================================================================
 === Build-system information                                               ===
 ==============================================================================

configure: ABINIT version 6.8.1
configure: M4 010416 - Autoconf 026800 - Automake 011100 - Libtool 020204
configure: reporting user interface changes:

 ==============================================================================
 === Option consistency checking                                            ===
 ==============================================================================

configure: checking consistency of library-related options
configure:  |---> all OK
configure: 
configure: checking consistency of plug-in options
configure:  |---> all OK
configure: 
configure: checking consistency of experimental options
configure:  |---> all OK
configure: 
configure:  |---> all OK
configure: 
configure: parsing command-line options

 ==============================================================================
 === Connector startup                                                      ===
 ==============================================================================

configure: Initializing MPI support
checking for mpirun... mpirun
configure: WARNING: MPI runner mpirun may be incompatible with MPI compilers
configure: compiler checks deferred
configure: GPU support disabled from command-line

 ==============================================================================
 === Utilities                                                              ===
 ==============================================================================

checking for sh... /bin/sh
checking for mv... /bin/mv
checking for perl... /usr/bin/perl
checking for rm... /bin/rm
checking for dvips... dvips
checking for dvipdf... dvipdf
checking for latex... no
checking for markdown... no
checking for patch... patch
checking for ps2pdf... ps2pdf
checking for tar... tar
checking for wget... wget
checking for curl... no
configure: using internal version of MarkDown

 ==============================================================================
 === C support                                                              ===
 ==============================================================================

checking for gcc... gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables... 
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking for style of include used by make... GNU
checking dependency style of gcc... gcc3
checking how to run the C preprocessor... gcc -E
checking which type of compiler we have... gnu 4.4
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking whether byte ordering is bigendian... no

 ==============================================================================
 === C++ support                                                            ===
 ==============================================================================

checking for g++... g++
checking whether we are using the GNU C++ compiler... yes
checking whether g++ accepts -g... yes
checking dependency style of g++... gcc3
checking which type of C++ compiler we have... gnu 4.4

 ==============================================================================
 === Fortran support                                                        ===
 ==============================================================================

checking for mpif90... /usr/bin/mpif90
checking whether we are using the GNU Fortran compiler... yes
checking whether mpif90 accepts -g... yes
checking which type of Fortran compiler we have... gnu 4.4
checking fortran 90 modules extension... mod
checking for Fortran flag to compile .F90 files... none
configure: determining Fortran module case
checking whether Fortran modules are upper-case... no
checking how to get verbose linking output from mpif90... -v
checking for Fortran libraries of mpif90...  -L/usr/lib/openmpi/lib -L/usr/lib/gcc/x86_64-linux-gnu/4.4.3 -L/usr/lib/gcc/x86_64-linux-gnu/4.4.3/../../../../lib -L/lib/../lib -L/usr/lib/../lib -L/usr/lib/gcc/x86_64-linux-gnu/4.4.3/../../.. -L/usr/lib/x86_64-linux-gnu -lmpi_f90 -lmpi_f77 -lmpi -lopen-rte -lopen-pal -ldl -lnsl -lutil -lgfortranbegin -lgfortran -lm -lpthread
checking for dummy main to link with Fortran libraries... none
checking for Fortran name-mangling scheme... lower case, underscore, no extra underscore

 ==============================================================================
 === Python support                                                         ===
 ==============================================================================

checking for python... python
checking for Python CPPFLAGS... -I/usr/include/python2.6
checking for bzr... no
checking for Python NumPy headers... not found
checking numarray/arrayobject.h usability... no
checking numarray/arrayobject.h presence... no
checking for numarray/arrayobject.h... no

 ==============================================================================
 === Libraries and linking                                                  ===
 ==============================================================================

checking for ar... ar
checking for ranlib... ranlib

 ==============================================================================
 === Hints                                                                  ===
 ==============================================================================

checking for cpp... cpp
checking for a true C preprocessor... cpp
checking which cpp hints to apply... default/default/default
checking which cc hints to apply... gnu/default/default
checking which xpp hints to apply... none/none/none
checking which cxx hints to apply... gnu/default/default
checking which fpp hints to apply... default/default/default
checking which fc hints to apply... gnu/default/default
checking which ar hints to apply... none/none/none
checking which Fortran preprocessor to use... 
checking which Fortran preprocessor flags to apply... 
checking whether to wrap Fortran compiler calls... no

 ==============================================================================
 === Debugging                                                              ===
 ==============================================================================

checking debugging status... enabled (profile mode: basic)
configure: setting C debug flags to '-g'
configure: setting C++ debug flags to '-g'
configure: setting Fortran debug flags to '-g'
checking whether to activate debug mode in source files... no
checking which cc debug flags to apply... gnu/default/default
checking which cxx debug flags to apply... none/none/none
checking which fc debug flags to apply... gnu/default/default
checking whether to activate design-by-contract debugging... no

 ==============================================================================
 === Optimizations                                                          ===
 ==============================================================================

checking optimization status... enabled (profile mode: standard)
checking which cc optimizations to apply... gnu/default/default
checking which cxx optimizations to apply... gnu/default/default
checking which fc optimizations to apply... gnu/default/default
checking whether to apply per-directory optimizations... yes

 ==============================================================================
 === 64-bit support                                                         ===
 ==============================================================================

checking for a 64-bit architecture... yes
checking whether to use 64-bit flags... no
checking for user-defined 64-bit flags... 

 ==============================================================================
 === Build flags                                                            ===
 ==============================================================================

configure: static builds may be performed

 ==============================================================================
 === Advanced compiler features                                             ===
 ==============================================================================

checking stddef.h usability... yes
checking stddef.h presence... yes
checking for stddef.h... yes
checking stdarg.h usability... yes
checking stdarg.h presence... yes
checking for stdarg.h... yes
checking stdio.h usability... yes
checking stdio.h presence... yes
checking for stdio.h... yes
checking malloc.h usability... yes
checking malloc.h presence... yes
checking for malloc.h... yes
checking math.h usability... yes
checking math.h presence... yes
checking for math.h... yes
checking termios.h usability... yes
checking termios.h presence... yes
checking for termios.h... yes
checking errno.h usability... yes
checking errno.h presence... yes
checking for errno.h... yes
checking mcheck.h usability... yes
checking mcheck.h presence... yes
checking for mcheck.h... yes
checking for abort... yes
checking size of char... 1
checking size of short... 2
checking size of int... 4
checking size of long... 8
checking size of long long... 8
checking size of unsigned int... 4
checking size of unsigned long... 8
checking size of unsigned long long... 8
checking size of float... 4
checking size of double... 8
checking size of long double... 16
checking size of size_t... 8
checking size of ptrdiff_t... 8
checking for an ANSI C-conforming const... yes
checking for size_t... yes
checking whether the Fortran compiler supports allocatable arrays in datatypes... yes
checking whether the Fortran compiler provides the iso_c_binding module... yes
checking whether the Fortran compiler accepts exit()... yes
checking whether the Fortran compiler accepts flush()... yes
checking whether the Fortran compiler accepts flush_()... no
checking whether the Fortran compiler accepts gamma()... yes
checking whether the Fortran compiler accepts getenv()... yes
checking whether the Fortran compiler accepts getpid()... no
checking whether the Fortran compiler accepts the null() intrinsic... yes
checking whether the Fortran compiler accepts quadruple integers... yes
checking whether the Fortran compiler accepts long lines... yes
checking whether the Fortran compiler supports stream IO... yes
checking whether the Fortran compiler accepts etime()... no
checking whether to use C clock for timings... no

 ==============================================================================
 === Connectors / Fallbacks                                                 ===
 ==============================================================================

checking whether the C compiler supports MPI... no
checking whether the C++ compiler supports MPI... no
checking whether the Fortran Compiler supports MPI... yes
checking whether MPI is usable... no
configure: WARNING: MPI support is broken!
configure: enabling MPI I/O support
checking whether to build MPI code... yes
checking whether to build MPI I/O code... yes
checking whether to build MPI time tracing code... no
checking which level of MPI is supported by the Fortran compiler... 2
configure: forcing MPI-2 standard support
checking whether the MPI library supports MPI_CREATE_TYPE_STRUCT... yes
checking whether to activate GPU support... no
checking for the requested transferable I/O support... netcdf+etsf_io
checking netcdf.h usability... yes
checking netcdf.h presence... yes
checking for netcdf.h... yes
checking for library containing nc_open... -lnetcdf
checking for library containing nf_open... -lnetcdff
checking for Fortran module includes... -I/usr/include
checking whether NetCDF Fortran wrappers work... yes
checking whether NetCDF supports MPI I/O... no
checking for ETSF_IO libraries to try... -letsf_io_utils -letsf_io
checking for Fortran module includes... -I/usr/include (cached)
checking whether ETSF_IO libraries work... no
configure: WARNING: falling back to internal etsf_io version
checking for the actual transferable I/O support... netcdf+etsf_io-fallback
checking for the requested timer support... abinit
checking for the actual timer support... abinit
checking for the requested linear algebra support... netlib
checking for BLAS support in specified libraries... no
checking for LAPACK support in specified libraries... no
checking for BLACS support in specified libraries... no
checking for ScaLAPACK support in specified libraries... no
checking whether we have a serial linear algebra support... no
configure: WARNING: falling back to internal linear algebra libraries
checking whether we have a parallel linear algebra support... no
checking for the actual linear algebra support... netlib-fallback
checking for the requested math support... none
checking for the actual math support... none
checking for the requested FFT support... none
checking for the actual FFT support... none
checking for the requested DFT support... atompaw+bigdft+libxc+wannier90
checking xc.h usability... no
checking xc.h presence... no
checking for xc.h... no
checking xc_funcs.h usability... no
checking xc_funcs.h presence... no
checking for xc_funcs.h... no
checking for library containing xc_func_init... no
checking for Fortran module includes... -I/usr/include (cached)
configure: WARNING: falling back to internal libxc version
configure: WARNING: AtomPAW recommends missing LibXC support
configure: WARNING: BigDFT requires missing linear algebra support
configure: WARNING: falling back to internal atompaw version
configure: WARNING: BigDFT requires missing LibXC support
configure: WARNING: BigDFT requires missing linear algebra support
configure: WARNING: falling back to internal bigdft version
configure: WARNING: wannier90 requires missing linear algebra support
configure: WARNING: falling back to internal wannier90 version
checking for the actual DFT support... libxc-fallback+atompaw-fallback+bigdft-fallback+wannier90-fallback
configure: using former plugins as a temporary workaround
configure: fallbacks to enable => atompaw bigdft etsf_io libxc linalg wannier90
checking whether to build atompaw... yes
checking whether to build bigdft... yes
checking whether to build etsf_io... yes
checking whether to build fox... no
checking whether to build libxc... yes
checking whether to build linalg... yes
checking whether to build netcdf... no
checking whether to build wannier90... yes
configure: using tarball repository /home/quantum/.abinit/tarballs
checking for a source tarball of LINALG... yes
checking for md5sum... md5sum
configure: tarball MD5 check succeeded
configure: applying LINALG tricks (vendor: gnu, version: 4.4)
checking whether to enable the LINALG fallback... yes
checking whether to build the LINALG fallback... yes
checking whether to enable the FOX fallback... no
checking whether to build the FOX fallback... no
checking whether to enable the NETCDF fallback... no
checking whether to build the NETCDF fallback... no
checking for a source tarball of ETSF_IO... yes
configure: tarball MD5 check succeeded
configure: applying ETSF_IO tricks (vendor: gnu, version: 4.4)
checking whether to enable the ETSF_IO fallback... yes
checking whether to build the ETSF_IO fallback... yes
checking for a source tarball of LIBXC... yes
configure: tarball MD5 check succeeded
configure: applying LIBXC tricks
checking whether to enable the LIBXC fallback... yes
checking whether to build the LIBXC fallback... yes
checking for a source tarball of ATOMPAW... yes
configure: tarball MD5 check succeeded
configure: applying AtomPAW tricks (vendor: gnu, version: 4.4)
checking whether to enable the ATOMPAW fallback... yes
checking whether to build the ATOMPAW fallback... yes
checking for a source tarball of BIGDFT... yes
configure: tarball MD5 check succeeded
configure: applying BigDFT tricks (vendor: gnu, version: 4.4)
checking whether to enable the BIGDFT fallback... yes
checking whether to build the BIGDFT fallback... yes
checking for a source tarball of WANNIER90... yes
configure: tarball MD5 check succeeded
configure: applying Wannier90 tricks (vendor: gnu, version: 4.4)
checking whether to enable the WANNIER90 fallback... yes
checking whether to build the WANNIER90 fallback... yes

 ==============================================================================
 === Nightly builds                                                         ===
 ==============================================================================

checking whether to build test timeout code... no
checking timeout for automatic tests... none

 ==============================================================================
 === Experimental developments                                              ===
 ==============================================================================

checking whether to enable bindings... no
checking whether to enable BSE unpacking... no
checking whether to enable CLib... no
checking whether to build exports... no
checking whether to accelerate 'make check'... no
checking whether to enable GW cut-off... no
checking whether to enable GW double-precision calculations... no
checking whether to enable optimal GW... no
checking whether to enable GW wrapper... no
checking whether to activate maintainer checks... no
checking whether to use macroave... yes
checking whether to reduce 'make check' for packaging... no
checking whether to read input from stdin... yes
checking whether to activate Symmetric Multi-Processing... no
checking whether to activate ZDOTC and ZDOTU workaround... no

 ==============================================================================
 === Subsystems                                                             ===
 ==============================================================================

configure: the Abinit GUI will never be built

 ==============================================================================
 === Output                                                                 ===
 ==============================================================================

configure: creating ./config.status
config.status: creating config.dump
config.status: creating config.mk
config.status: creating config.pc
config.status: creating config.sh
config.status: creating config/wrappers/wrap-fc
config.status: creating src/incs/Makefile
config.status: creating src/mods/Makefile
config.status: creating src/16_hideleave/m_build_info.F90
config.status: creating tests/tests.env
config.status: creating tests/tests-install.env
config.status: creating Makefile
config.status: creating src/Makefile
config.status: creating src/01_gsl_ext/Makefile
config.status: creating src/01_interfaces_ext/Makefile
config.status: creating src/01_macroavnew_ext/Makefile
config.status: creating src/01_qespresso_ext/Makefile
config.status: creating src/02_clib/Makefile
config.status: creating src/10_defs/Makefile
config.status: creating src/12_hide_mpi/Makefile
config.status: creating src/14_hidewrite/Makefile
config.status: creating src/15_gpu_toolbox/Makefile
config.status: creating src/16_hideleave/Makefile
config.status: creating src/18_memory_mpi/Makefile
config.status: creating src/18_timing/Makefile
config.status: creating src/27_toolbox_oop/Makefile
config.status: creating src/28_numeric_noabirule/Makefile
config.status: creating src/32_contract/Makefile
config.status: creating src/32_util/Makefile
config.status: creating src/42_geometry/Makefile
config.status: creating src/42_nlstrain/Makefile
config.status: creating src/42_parser/Makefile
config.status: creating src/43_ptgroups/Makefile
config.status: creating src/44_geomoptim/Makefile
config.status: creating src/45_psp_parser/Makefile
config.status: creating src/47_xml/Makefile
config.status: creating src/49_gw_toolbox_oop/Makefile
config.status: creating src/50_abitypes_defs/Makefile
config.status: creating src/51_manage_cuda/Makefile
config.status: creating src/51_manage_mpi/Makefile
config.status: creating src/52_fft_mpi_noabirule/Makefile
config.status: creating src/53_abiutil/Makefile
config.status: creating src/53_ffts/Makefile
config.status: creating src/53_spacepar/Makefile
config.status: creating src/56_mixing/Makefile
config.status: creating src/56_recipspace/Makefile
config.status: creating src/56_xc/Makefile
config.status: creating src/57_iovars/Makefile
config.status: creating src/59_io_mpi/Makefile
config.status: creating src/61_ionetcdf/Makefile
config.status: creating src/62_cg_noabirule/Makefile
config.status: creating src/62_iowfdenpot/Makefile
config.status: creating src/62_occeig/Makefile
config.status: creating src/62_poisson/Makefile
config.status: creating src/62_wvl_wfs/Makefile
config.status: creating src/63_bader/Makefile
config.status: creating src/64_atompaw/Makefile
config.status: creating src/65_nonlocal/Makefile
config.status: creating src/65_psp/Makefile
config.status: creating src/66_paw/Makefile
config.status: creating src/66_wfs/Makefile
config.status: creating src/67_common/Makefile
config.status: creating src/68_dmft/Makefile
config.status: creating src/68_recursion/Makefile
config.status: creating src/68_rsprc/Makefile
config.status: creating src/69_wfdesc/Makefile
config.status: creating src/70_gw/Makefile
config.status: creating src/71_bse/Makefile
config.status: creating src/72_response/Makefile
config.status: creating src/77_ddb/Makefile
config.status: creating src/77_lwf/Makefile
config.status: creating src/77_suscep/Makefile
config.status: creating src/79_seqpar_mpi/Makefile
config.status: creating src/83_cut3d/Makefile
config.status: creating src/93_rdm/Makefile
config.status: creating src/95_drive/Makefile
config.status: creating src/98_main/Makefile
config.status: creating src/libs/Makefile
config.status: creating tests/Nightly/Makefile
config.status: creating plugins/Makefile
config.status: creating plugins/atompaw/Makefile
config.status: creating plugins/bigdft/Makefile
config.status: creating plugins/etsf_io/Makefile
config.status: creating plugins/fox/Makefile
config.status: creating plugins/libxc/Makefile
config.status: creating plugins/linalg/Makefile
config.status: creating plugins/netcdf/Makefile
config.status: creating plugins/wannier90/Makefile
config.status: creating bindings/Makefile
config.status: creating bindings/parser/Makefile
config.status: creating doc/Makefile
config.status: creating tests/Makefile
config.status: creating config.h
config.status: executing depfiles commands
config.status: executing dump-optim commands
config.status: executing script-perms commands
config.status: executing long-lines commands

 ==============================================================================
 === Final remarks                                                          ===
 ==============================================================================


Summary of important options:

  * C compiler      : gnu version 4.4
  * Fortran compiler: gnu version 4.4
  * architecture    : unknown unknown (64 bits)

  * debugging       : basic
  * optimizations   : standard

  * MPI    enabled  : yes
  * MPI-IO enabled  : yes
  * GPU    enabled  : no (none)

  * TRIO   flavor = netcdf+etsf_io-fallback
  * TIMER  flavor = abinit (libs: ignored)
  * LINALG flavor = netlib-fallback (libs: internal)
  * FFT    flavor = none (libs: ignored)
  * MATH   flavor = none (libs: ignored)
  * DFT    flavor = libxc-fallback+atompaw-fallback+bigdft-fallback+wannier90-fallback

Configuration complete.
You may now type "make" to build ABINIT.
(or, on a SMP machine, "make mj4", or "make multi multi_nprocs=")

With that complete, we “make” our parallel Abinit install.

user@machine:~/abinit-6.8.1$ make mj4
user@machine:~/abinit-6.8.1$ sudo make install

The above runs successfully (for me) and places abinit in /usr/local/bin (which is already in your path).

To run tests, simply do the following:

user@machine:~/abinit-6.8.1$ cd tests/
user@machine:~/abinit-6.8.1/tests$ ls
user@machine:~/abinit-6.8.1/tests$ make tests_min

The above may not be an “optimized” installation, but works perfectly well (look for a possible tweak to the above that may afford some optimization).

Finally, running Abinit in parallel with OpenMPI is quite simple.

mpirun -np 4 /usr/local/bin/abinit < RUN.files >& RUN.log &

No -machinefile is needed, as we’re running in SMP mode, and no mpd daemon needs to be started.
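Should you later spread a run across several machines rather than staying SMP, OpenMPI takes a plain-text hostfile rather than an mpd daemon. A sketch only; the machine names and slot counts below are made up for illustration:

```
# hostfile: one line per machine, "slots" = cores to use on that machine
# (hypothetical names/counts; use your own)
node1 slots=4
node2 slots=4

# then point mpirun at it:
# mpirun -np 8 -hostfile hostfile /usr/local/bin/abinit < RUN.files >& RUN.log &
```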

Crystal06 (v.1.0.2) And MPICH-1.2.7p1 In Ubuntu Desktop 8.10 (and 9.04, 64- and 32-bit) Using The Intel Fortran Compiler, Version 1.0

Tuesday, May 5th, 2009

Update 19 May 2009 – This tutorial (and all subsequent modifications) are now on a separate page on this website and will not be modified further in this post.  This page is available HERE.  The forever-name PDF version of the tutorial is available here: crystal06_mpich_ubuntu_cluster.pdf

Pre-19 May 2009 – This document, the end of a very long and involved process, is available as a PDF download (for reading and printing ease) here: crystal06_mpich_ubuntu_cluster_V1.pdf

Introduction

According to the Crystal06 manual:

The CRYSTAL package performs ab initio calculations of the ground state energy, energy gradient, electronic wave function and properties of periodic systems. Hartree-Fock or Kohn-Sham Hamiltonians (that adopt an Exchange-Correlation potential following the postulates of Density-Functional theory) can be used. Systems periodic in 0 (molecules, 0D), 1 (polymers, 1D), 2 (slabs, 2D), and 3 dimensions (crystals, 3D) are treated on an equal footing. In each case the fundamental approximation made is the expansion of the single particle wave functions (‘Crystalline Orbital’, CO) as a linear combination of Bloch functions (BF) defined in terms of local functions (hereafter indicated as ‘Atomic Orbitals’, AOs).

To the experimental interpretation-based quantum chemist, Crystal06 is an atom-centered basis set density functional theory (DFT) program that employs standard Gaussian-type basis sets and generalized-gradient approximation and hybrid density functionals for the property prediction of molecular and solid-state systems. This makes the program similar in scope to the non-periodic atomic basis set-based quantum chemistry packages Gaussian03 (the solid-state framework is there but still in-process) and GAMESS-US (both of which I assume people performing calculations are familiar with) and the atomic basis set-based (but not Gaussian-type) solid-state DFT program DMol3.

This document describes in detail how to set up an MPI-based cluster, including the compiling of the MPICH code upon which (one of) the parallel version(s) of Crystal06 was built, the installation of the Intel Fortran Compiler (IFC) required for compiling the MPICH (version 1.2.7p1) program, and the setup of the hardware for a Network File System/SSH-based cluster.

This document is written for the non-technical Linux user and contains many additional details and, to the experienced user, obvious steps and clarifications. It is the author’s experience that the typical academic researcher knows far less about Linux, networking, and compiling than the developers of most quantum codes expect of those people downloading (or, in this case, buying) their software. Accordingly, this tutorial is a walk-through that, if there are no significant hardware issues, will account for every minor detail in the process taking you from booting the host machine for the first installation all the way to running an MPI-based Crystal06 calculation.

Those who have ever waited for single-molecule calculations to run to completion can appreciate the complication of now waiting for a calculation of an entire unit cell composed of multiple molecules to run to completion. Such calculations are (currently) only practical when performed in a multi-processor (SMP, Beowulf, etc.) environment, the case for which the documentation for Crystal06 parallel execution is useful only if you already know what you’re doing.

It is hoped that this document makes more practical to the less technical general user what the Crystal06 code makes possible.

About Version 1.0

This tutorial contains considerable description overkill in program installation and a procedure that just works but, perhaps, not as well as it might after another iteration. For instance, no attempt is made to speed up NFS or tweak compiler settings in the MPICH build. The Crystal06 code is provided pre-compiled, so there is nothing one can do to it that doesn’t require requests to the developers.

The important Ubuntu-specific step in this document is performed with the package installation program apt-get. This apt-get package installation step installs (1) everything the Intel Fortran Compiler requires, (2) SSH (for use with MPICH), and (3) all of the Network File System (NFS) software. This apt-get installation does not remove ALL warnings from the IFC installation script (as discussed in Step 5), but the warnings you’ll see upon installation are unimportant (unless you want Java installed for some other project). If you continue in your Fortran endeavors (beyond MPICH compilation, that is), you may have to install additional programs via apt-get. For the setup described here, no need.

Version 1 of this document will become Version 2 (or more) as I find ways to optimize the performance of the system or if people send me information relevant to the system setup (and please do).

It is worth noting that very little of this overall procedure is Crystal06-specific. This could just as soon be an MPICH/IFC/Ubuntu instruction set for any MPI-based program installation, with the procedure generating a cluster ready for many Quantum Chemistry and Molecular Dynamics Codes (the procedure has worked just fine for Abinit, GROMACS, and AMBER, one just has to be aware of setting IFC-specific flags in the ./configure process of each). If people find this procedure useful and wish to add their own builds of other programs (apt-get lists, compiler flags, etc.) it would be most beneficial to have a big document full of useful information for general computational chemistry (and not Linux-experienced) audiences.

Demonstration System Setup

To begin, I am assuming that you will be starting with a FRESH installation, meaning either a hard drive is being completely wiped clean of its previous operating system or a new hard drive is being installed and GRUB/LILO is being used to make your machine a multiple-boot machine. It is (currently) beyond the scope of the document to deal with compatibility, proprietary drivers, competing software, and all other things not related to installing MPICH, the IFC, and Crystal06. The procedure (post-Ubuntu install and update) may work just fine anyway, but I’ve not tried this procedure with other operating environments.

Computers

Three Dell Precision WorkStation T5400 8-core (2.8 GHz) boxes with 16 GB RAM each – The cluster will use the hard drive on one of the machines for all HD-related activities, so make sure this disk is as large as you need for temporary files (250 GB seems to be far beyond sufficient for Crystal06 calculations). Of course, you can use virtually any machine that will install Ubuntu (you do, however, want the machines to be similar in capabilities for best results).

Switch

NETGEAR GS116 10/100/1000Mbps ProSafe Gigabit Desktop Switch with Jumbo Frame Support 16 x RJ45 512 Kbytes per port Buffer Memory – Obviously, buy as big a switch as you need for the number of machines you intend to use, but plan on expansion (someday). You do not have to have a gigabit switch for the calculations to work, but you will come to regret not having spent the money in the long run. Jumbo Frame Support is preferred, but this will be part of Version 2 of this document.

Battery Backup

APC Smart-UPS SUA1500 1440VA 980W 8 Outlets UPS – Three 8-core boxes, one power plug for the switch, and one LCD monitor. Plugging anything else into this power backup will cause it to sing the song of overdrawn battery when a 24-CPU calculation is started.

NOTE 1: These machines (the impetus for this document) are being used for the calculation of low-frequency vibrational modes as part of Terahertz simulations. For structure optimizations and normal mode analyses NOT being performed with the INTENS keyword (for the calculation of vibrational mode intensities), RAM quantity is not a significant issue (above 1 GB per CPU, for instance). When intensity calculations are requested on sufficiently large systems (40 or more atoms in the unit cell), an 8-core box will use all 16 GB of onboard RAM (the use of larger SWAP is a more cost-effective, but absolutely not more time-effective, workaround that I’ve not worked on further).

NOTE 2: Related to the Jumbo Frame discussion, the proper Broadcom BCM5754 drivers for the integrated ethernet in the T5400 boxes are not installed by Ubuntu and must be installed in separate steps. The generic ethernet drivers being used cap the Maximum Transmission Unit (MTU) at 1500 bytes, whereas it should be possible to boost this number to 9000. It is a straightforward check to determine if a good driver is installed by Ubuntu and if you can boost the MTU to 9000 in the /etc/network/interfaces file (Steps 8 and 18). Again, not necessary for the calculations to work, but a potential source of speed-up.

Why Ubuntu?

The author has had great success with it. Provided you start with a new hard drive (or intend on wiping the machine clean as part of the installation), it is the easiest of the well-known Linux distributions to install (that I am aware of). As the Linux distribution that is (openly and advertising that it is) trying to be as user/community-friendly as possible, it is often very easy to get answers to your questions on Ubuntu forums, right from google, etc., which is important if you’re new to Linux.

Everything in this document beyond the installation, initial system update, and MPICH/IFC/Crystal06 download employs Terminal and pico, so fixes to any problems you may have will likely be general to all Linux distributions.

There is one major issue with Ubuntu that OpenSuse, Fedora, etc. users may or may not have to deal with. The Ubuntu installation CD installs everything (programs, drivers, and libraries) that the contents of the CD need. If you find yourself installing additional programs that have Ubuntu packages already configured for them (here, using apt-get), the Ubuntu installation process will install all of the additional programs, drivers, and libraries required without you having to know what those additional packages are (a very nice feature). If you’re installing a program (or three) that does not have an Ubuntu package configured for it, you are required to do some searching for those missing programs, drivers, and libraries.

This tutorial employs apt-get (run from Terminal) for program/driver/library package installation. The extensive apt-get list installed after the Ubuntu installation is predominantly for satisfying IFC library requirements, with NFS and SSH installed for MPICH and network support. The IFC-specific list was generated by successive error-and-install steps, with successive re-checks of IFC system dependencies required and additional apt-get operations performed before IFC would install without critical errors. I suspect some non-computational chemists have found this tutorial simply because of the presented list of required IFC-specific apt-get installs.

Why MPICH-1.2.7p1?

Why not MPICH2, OpenMPI, or some other version of MPICH as the Message Passing Interface (MPI) of choice? The specific use of MPICH-1.2.7p1 is because the pre-compiled Crystal06 parallel (Pcrystal) binary used for this document was, according to the Crystal06 website, built with the Intel Fortran Compiler and MPICH-1.2.7p1 and all efforts by the author to use other compiler/MPI combinations failed miserably. I attempted to compile MPICH2 for otherwise identical use (for reasons related to message size errors in very memory-intensive INTENS calculations) but after many, many trials, recompilations, and successful non-Crystal06 MPI test runs, I could never get Pcrystal to run 1 calculation on 16 nodes, only 16 separate serial calculations that would all write to stdout. I’ve, therefore, come to the conclusion that, despite MPICH2 being MPI-1 compatible, there is something to the way Pcrystal was compiled that requires it to be run with MPICH. If you’ve compiled OpenMPI or MPICH2 for use in Crystal06 calculations and had it work successfully (and not the Itanium version of Crystal06, which was compiled with MPICH2), please consider providing your configure parameters for addition to this document.

NOTE 1: As an IFC build of MPICH is required for Crystal06 to work, the MPICH version one can install via apt-get is NOT an option. Pcrystal is expecting IFC-built libraries, not GNU-based compiler-built libraries.

NOTE 2: If you’re building an SMP-only Crystal06 system (that is, using only the processors available on the motherboard), then a GNU build of MPICH works just fine.

Why Use The Intel Fortran Compiler?

Ignoring the many details of Fortran compilation, libraries, etc., the version of Crystal06 being used here was built with the IFC and the parallel version (Pcrystal) is expecting to see an MPICH compiled with the IFC. That’s as good a reason as any.

To the best of my knowledge (and after testing), you cannot build MPICH with GNU Fortran compilers and have MPI-Crystal06 calculations work properly. My choice of the IFC over the Portland Group Compiler is purely out of cost-effectiveness. The IFC is available for free to academics (non-commercial license agreement) and there’s a Crystal06 version compiled with it. The Portland Group Compiler, for which Crystal06 builds are also available, is available at an academic discount. If I were not an academic with access to the free Intel version, I would have thought harder about which to purchase/use.

And so it begins…

Step 0 HOST and GUESTs: Things To Be Aware Of In The Overall Procedure

Machine Names And IP Addresses

The cluster documented here will have all machines on a single switch (see picture). For setup purposes, each machine will have an IP address (set in /etc/network/interfaces) and a machine name (set in /etc/hostname). The IP addresses will be the olde reliable 10.1.1.NN, where NN is any number between 1 and 254 (you may see “broadcast” set to 10.1.1.255 in some google searches for /etc/network/interfaces). We will be defining IP addresses in the /etc/network/interfaces steps on HOST and GUESTs below. I’m explaining the 10.1.1.NN IP scheme here so I do not have to explain it later. I will be naming the machines crystalhost for the HOST machine and crystalguestN for the GUEST machines, with N being 1 to whatever (I assume no more than 254, of course). These names will be assigned/aliased in the hostnames file in a later procedure step. If you’re coming from a Windows or OSX file/directory naming mindset, make sure you avoid spaces and crazy characters in the file/folder names.
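For reference, the 10.1.1.NN scheme above ends up as a stanza like the following in /etc/network/interfaces. A sketch only: your interface may not be eth0, and the address shown is just the HOST example (each machine gets its own NN):

```
# /etc/network/interfaces (HOST example; adjust the address per machine)
auto eth0
iface eth0 inet static
    address 10.1.1.1
    netmask 255.255.255.0
    broadcast 10.1.1.255
```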

NFS-Mounted Directory Name

This is a Crystal06 installation, so I will be using the name /crystal06 for the folder being mounted on all of the GUEST machines via NFS from the HOST machine. Obviously, name this folder whatever you want (avoiding spaces and crazy characters in the name) and propagate the name changes in this document accordingly.
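As a preview of the NFS steps to come, the share amounts to one line in /etc/exports on the HOST and a matching mount command on each GUEST. A sketch, assuming the 10.1.1.NN scheme and the /crystal06 name; the export options shown are common defaults, not necessarily the ones used later in the procedure:

```
# HOST /etc/exports: share /crystal06 with the 10.1.1.0 subnet
/crystal06 10.1.1.0/24(rw,sync,no_subtree_check)

# on each GUEST (run with sudo), assuming crystalhost resolves:
# mount crystalhost:/crystal06 /crystal06
```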

pico

pico is the text editor being used to modify various text files in this document. It is installed as part of the Ubuntu CD installation, so it should be on your machine. I’m using pico because there’s no need to do all of the file modifications via the GUI and I did not grow up with vi. When file changes are performed in pico, you can save and exit with Ctrl-X, answering “Y” to save the modified buffer and “Enter” to confirm the file name.

Terminal

To a Windows user, a Terminal window is one that will not close. To a Linux and OSX user, Terminal windows are your ASCII interfaces to the contents of your hard drive. You should become familiar with moving around the Terminal window if you intend on doing any significant computational chemistry, as you will constantly be moving files, deleting files, renaming files, compressing files, and starting (and canceling) calculations.

64-bit And 32-bit

I have introduced a mild complication to this installation procedure by installing the 64-bit version of Ubuntu Desktop on the cluster machines. The complication is due to Crystal06 being compiled 32-bit and 32-bit programs will not run in a 64-bit OS unless several 32-bit libraries are also installed. We will install these files in the apt-get steps for both HOST and GUESTs below (and these libraries NEED to be on all machines for Crystal06 to work). If you installed/want to install the 32-bit version of Ubuntu Desktop, you can still use the same apt-get package lists to install all of the other missing programs (you will simply get messages saying “library already installed.”). There’s no point in pruning this list down for 32- and 64-bit users. You will, however, need to make sure to install the 32-bit IFC (not the 64-bit I’ll be installing here) if you install the 32-bit Ubuntu Desktop version.

sudo

The one aspect of Ubuntu that differs from most other Linux distributions is the differentiation between root, Administrator, and user right from the installation. Whereas you set up the root user in Suse and Fedora as part of the installation process (and set up the users separately), you set up an Administrator account during Ubuntu installation that is distinct from root (with many, but not all, root privileges). As a result, if you do not set up the root account to perform installations and system-level modifications, you are left in the Administrator account to use the sudo (super-user do…) command to allow you, the Administrator, to build and install programs outside of your home ($HOME) directory.

“Do I have to constantly sudo everything?” No. Accessing a pure “root” terminal for installations is straightforward after the root password is assigned, it is simply argued by many (including the Ubuntu wiki) that it is safer to use sudo. If you want to go the root route, check out help.ubuntu.com/community/RootSudo. You do not need to use sudo to make changes to files in your user directory, only to make modifications to system files (which we will be doing in this document).

A@B:

When staring at a Terminal window, the prompt will look like A@B:, with A being the username and B being the machine name. I use this A@B convention in the document and you do NOT type these as they show up (type only the text that follows as marked up in this document).

[TAB]

For the absolute Linux newbie, you can complete long file names automatically by hitting the TAB key. Of course, this only works if there is only one matching file in your directory (for l_cprof_p_11.0.083_intel64.tgz in a directory containing other files that start with “l”, you could simply type l_ and hit [TAB] to complete the whole file name. Works for directories, too).

Step 1 HOST: Ubuntu Installation

Steps in this document are based on my installing the 64-bit Desktop version of Ubuntu. I mention this because the Crystal06 version I’ll be using for calculations was compiled 32-bit. The program still runs just fine, but ONLY IF 32-bit libraries are installed after the CD installation. These libraries must reside on both the HOST and GUEST machines. They are also required/strongly recommended for the IFC installation on the HOST machine. The setup below will only install the IFC on the HOST machine for the MPICH build (and we’ll use the NFS mount and PATH specification to make MPICH accessible to all machines). When you go to run Pcrystal from/on a GUEST node (there are some webpages recommending that you never run an MPI executable from the HOST machine, instead logging into a GUEST machine to perform calculations. I’ve found that it makes no difference) and do not have the 32-bit libraries installed, you’ll get the common “compiled 32-bit but running on a 64-bit OS” error:

“Pcrystal: No such file or directory.”
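Incidentally, a quick way to diagnose that error is to interrogate the binary itself with file and ldd. A hedged sketch: point BIN at your actual Pcrystal; /bin/ls is only a stand-in here so the commands run on any Linux box.

```shell
# Substitute the real path to your Pcrystal binary for BIN;
# /bin/ls is a stand-in so this example runs anywhere.
BIN="${1:-/bin/ls}"
file "$BIN"    # reports "ELF 32-bit ..." or "ELF 64-bit ..."
ldd "$BIN"     # any "not found" line means a missing (32-bit) library
```

On a 64-bit install missing the ia32 packages, ldd on the 32-bit Pcrystal is exactly where the absent libraries show up by name.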

Ubuntu automates practically all of the installation. For simplicity, set up the same administrative/default username and password on all machines. Do not bother performing a custom partition and making a “crystal06” partition (the place where these runs will be performed on the host machine). We’ll be making a “work” directory after installation, so just run with the default settings.

INSTALL NOTE: I have had random trouble with onboard video drivers on a number of Dell models during installation. I find the quality of the graphical interface to be far less important than the speed of the computation, so consider specifying “Safe Graphics Mode” (F4 at the install screen after you specify your language) to avoid resolution issues during and after installation.

Step 2 HOST: Post-Installation System Update

Unless you plan on transferring Ubuntu packages, the Intel Fortran compiler, the MPICH source, and the Crystal06 binaries by flashdrive or CD/DVD, a working network interface is required for this step (and, of course, you will have to have a working ethernet port for the cluster anyway, so I assume that your network card works). You probably do not need to update the installation, but you need the network connection working for apt-get anyway, so you might as well do a full system update. If you installed 8.10, I wouldn’t bother upgrading to 9.04 at this time (just as an aside, Ubuntu 9.04 does work with the procedure).

If you’re installing at a campus, you (1) may have to register your machine in the BOOTP table (see your computing services website for more information), (2) “force” an IP address from those available on your subnet (your computing people will hate you for it… if they catch you doing it), (3) may have a router and can simply “plug in” to get online. After the update, apt-get, and software downloads, you will not need to be “online” again (we’ll be hard-coding IP addresses for this switch-friendly cluster). In theory, you could do all of this without ever being online (provided you have a machine online for downloading purposes).

To perform a System Update from the Desktop, go to System -> Administration -> System Update. This may/will take some time and a restart will be required (in theory, you can continue with the steps below and reboot after all changes are made). I recommend performing the apt-get procedure in the next step and THEN rebooting (so that all web-related installations are complete).

Step 3 HOST: apt-get Installations Required For SSH, NFS, and IFC

I assume the network still works from Step 2. Open a Terminal Window (Applications -> Accessories -> Terminal) and type (or copy + paste) the following:

A@B:~$ sudo apt-get install ia32-libs lib32asound2 lib32ncurses5 lib32nss-mdns lib32z1 lib32gfortran3 gcc-4.3-multilib gcc-multilib lib32gomp1 libc6-dev-i386 lib32mudflap0 lib32gcc1 lib32gcc1-dbg lib32stdc++6 lib32stdc++6-4.3-dbg libc6-i386 libstdc++5 g++ g++-4.3 libstdc++6-4.3-dev g++-multilib g++-4.3-multilib gcc-4.3-doc libstdc++6-4.3-dbg libstdc++6-4.3-doc nfs-common nfs-kernel-server portmap ssh

Note the many 32-bit libraries. These are required by both IFC and Pcrystal (making the 64-bit installation I used for this document back-compatible (in a way) with the 32-bit Pcrystal build). If you’re working with a 32-bit Ubuntu installation, you can still just copy + paste this apt-get command to install missing programs and libraries. apt-get will skip libraries already present and only install those things missing from the machine. Admittedly, I’ve not done this on a 32-bit installation so do not know if, in fact, some of these libraries are NOT installed by default as part of the 32-bit installation process. That is, you may get fewer warnings about already-present libraries than I expect you to.

Step 4 HOST: Software Downloads

We will be setting up the cluster in such a way that all of the programs below will only need to be installed on the HOST machine, with NFS used to mount a HOST machine directory on all of the GUEST machines (who will then see MPICH and PCrystal when we set the GUEST PATHs). If you’re on the Desktop of a fresh install, the Firefox icon should be obvious in the upper left (or, if your PDF viewer shows them, simply click on the links in this document to open the pages). For organizational purposes, simply download all of these files to the Desktop (if you’re using the Server Edition, I assume you know your way around your $HOME directory and know where to make changes below) and I will assume these files reside on the Desktop in all commands typed in following steps.

1. MPICH-1.2.7p1 - www.mcs.anl.gov/research/projects/mpi/mpich1/download.html

2. Intel Fortran Compiler - software.intel.com/en-us/articles/non-commercial-software-development/

This will take you to the Agreement Page. After you hit “accept,” look for (under “Compilers”):

Intel® Fortran Compiler Professional Edition for Linux

Fill out the Noncommercial Product Request information on the next screen, then simply follow the instructions in the email (you will need the license number in the email during the installation). Currently, the file to download is l_cprof_p_11.0.083_intel64.tar.gz (version number will change). If you’re using a 32-bit Ubuntu Desktop installation disk, download the 32-bit version.

3. Crystal06 - www.crystal.unito.it

You will need a username/password from the Crystal developers to download the Crystal binaries. I assume you’ve already taken care of this step. For this document, we’ll be using the version of Pcrystal found in crystal06_v1_0_2_Linux-ifort_pentium4_mpich_1.2.7.fedora5_2.6.tar.gz.

Step 5 HOST: Installing The Intel Fortran Compiler

The installation of all of the libraries and programs in the apt-get step above makes the Intel installation much easier (and, for that matter, possible). As this is a 64-bit OS I’ve installed, we’ll be installing the Intel-64 version of the Fortran compiler (currently l_cprof_p_11.0.083_intel64, the one I assume is now residing in .tar.gz format on your Desktop).

At a Terminal Window:

A@B:~$ cd ~/Desktop
A@B:~$ gunzip l_cprof_p_11.0.083_intel64.tgz
A@B:~$ tar xvf l_cprof_p_11.0.083_intel64.tar

[you should now see the folder "l_cprof_p_11.0.083_intel64" on your Desktop]

A@B:~$ cd l_cprof_p_11.0.083_intel64/
A@B:~/l_cprof_p_11.0.083_intel64$ sudo ./install.sh

You will now pass through a series of installation screens. The only thing you will need to have available is the serial number provided in the email sent to you (ABCD-12345678).

If you’ve not installed all of the proper libraries, you’ll see the following screen at Step 4.

Step no: 4 of 7 | Installation configuration – Missing Critical Pre-requisite
——————————————————————————–
There is one or more critical unresolved issue which prevents installation to
continue. You can fix it without exiting from the installation and re-check. Or
you can quit from the installation, fix it and run the installation again.
——————————————————————————–
Missing critical pre-requisite
— missing system commands
— missing system commands
— 32-bit libraries not found
——————————————————————————–
1. Show the detailed info about issue(s) [default]
2. Re-check the pre-requisites

h. Help
b. Back to the previous menu
q. Quit
——————————————————————————–
Please type a selection or press “Enter” to accept default choice [1]:

With a proper apt-get installation, you should see the following at Step 4.

Step no: 4 of 7 | Installation configuration – Missing Optional Pre-requisite
——————————————————————————–
There is one or more optional unresolved issues. It is highly recommended to fix
it all before you continue the installation. You can fix it without exiting from
the installation and re-check. Or you can quit from the installation, fix it and
run the installation again.
——————————————————————————–
Missing optional pre-requisite
— No compatible Java* Runtime Environment (JRE) found
— operating system type is not supported.
— system glibc or kernel version not supported or not detectable
— binutils version not supported or not detectable
——————————————————————————–
1. Skip missing optional pre-requisites [default]
2. Show the detailed info about issue(s)
3. Re-check the pre-requisites

h. Help
b. Back to the previous menu
q. Quit
——————————————————————————–
Please type a selection or press “Enter” to accept default choice [1]:

Note the word “optional.” As for additional installations to remove these errors, Java is installed easily enough via apt-get (although we will not be using it for Crystal06, so it is not necessary). If you check the “operating system type” error, you will see that the IFC installer currently recognizes the following OS’s:

Asianux* 3.0
Debian* 4.0
Fedora* 9
Red Hat Enterprise Linux* 3, 4, 5
SGI ProPack* 5 (IA-64 and Intel(R) 64 only)
SuSE Linux* Enterprise Server* 9, 10
Turbo Linux* 11
Ubuntu* 8.04 (IA-32 and Intel(R) 64 only)

I assume the developers wrote in simple system checks for directory structure or the like and it will be updated as time goes on. The warning is completely irrelevant for our purposes. I suspect the installation is looking for Kernel 2.4.XX (this version of Ubuntu will install 2.6), but that’s also not a problem. The binutils are installed with the original Ubuntu install, so I assume this is just another version conflict (and not important).

The IFC will be installed in /opt/intel/Compiler/11.0/083, which we will add to the PATH on the HOST machine in a later step for MPICH compilation.

Step 6 HOST: Make The Crystal06 Directory

This directory will be THE directory that all of the cluster work reads to and writes from. I’ve now had two people mention that NFS is probably not the fastest and most efficient way to go, but it is the most common way for MPI systems to be set up according to my google research. Hopefully, this means you’ll find websites addressing specific problems with your installation if you’re attempting this installation with other Linux distributions (or if you have a problem with this procedure).

We’ll be placing the work directory right at the base of the directory tree (/). Accordingly, you’ll need Administrative access to do this (hence the sudo). The three steps below make the directory, change the group permissions for the directory, and lets all users read + write + execute content in this directory.

A@B:~$ sudo mkdir /crystal06
A@B:~$ sudo chgrp -R users /crystal06
A@B:~$ sudo chmod a+wrx /crystal06
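
If you want to see what that chgrp/chmod pair actually does, here is the same sequence rehearsed against a throwaway directory (a sketch: it uses your own primary group via id -gn rather than users, since that group may not exist on every system):

```shell
# Rehearse the chgrp/chmod pair on a temporary directory and inspect the result
d=$(mktemp -d)                 # fresh directory, created mode 700
chgrp "$(id -gn)" "$d"         # stand-in for: sudo chgrp -R users /crystal06
chmod a+rwx "$d"               # everyone may read, write, and enter
stat -c '%a %G' "$d"           # prints 777 and the group name
rm -rf "$d"
```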

Step 7 HOST: PATH Statement

The PATH specification adds the IFC directory, the work directory (/crystal06), and the MPICH executable directory (/crystal06/mpich-1.2.7/bin) to the user path (where Ubuntu looks when you type the name of an executable into the Terminal) on the HOST machine. Note that one of these directories does not yet exist and the /crystal06 directory is currently empty. This does not affect the specification of the PATH statement (and this directory will be occupied soon enough).

A@B:~$ cd $HOME
A@B:~$ pico .profile

[At the bottom of the .profile file, add the following:]

PATH="/opt/intel/Compiler/11.0/083/bin/intel64:/crystal06/mpich-1.2.7/bin:/crystal06:$PATH"

[Ctrl-X to exit (and Enter twice to save)]

A@B:~$ source .profile

Step 8 HOST: Network Interface Setup And Machine Name

This step assigns the IP address to the HOST machine, names the HOST machine crystalhost, and adds the list of all machine names in the cluster. We begin with the IP address:

A@B:~$ sudo pico /etc/network/interfaces

[Ignoring explanations, make your interfaces file look like the following]

/etc/network/interfaces
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
address 10.1.1.1
netmask 255.255.255.0
broadcast 10.1.1.255
network 10.1.1.0

[Ctrl-X to exit (and Enter twice to save)]

Now, we change the name of this HOST machine by modifying /etc/hostname

A@B:~$ sudo pico /etc/hostname

[Change the name to crystalhost by simply replacing whatever is there with the following]

crystalhost

[Ctrl-X to exit (and Enter twice to save)]

Finally, we add the IP addresses and machine names for the rest of the machines in the cluster.

A@B:~$ sudo pico /etc/hosts

[Make your hosts file look like this (adding lines for each GUEST machine)]

127.0.0.1 localhost
127.0.1.1 crystalhost
10.1.1.1 crystalhost
10.1.1.2 crystalguest1
10.1.1.3 crystalguest2

10.1.1.N crystalguestN

[This may or may not be at the bottom of the hosts file and does not require modification]

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

[Ctrl-X to exit (and Enter twice to save)]

We will propagate all of these changes by a reboot after all of the HOST machine setup is complete.

Step 9 HOST: Network File System Setup

The NFS installation will open the HOST machine to directory mounting by all of the GUEST machines. We will be making the /crystal06 directory permanently accessible to other machines (even after reboot, crashes, etc.) by adding this directory and a few important NFS-specific flags to /etc/exports. The apt-get installation of the NFS server did the vast majority of the dirty work.

First, we configure portmap. When you run the dpkg-reconfigure command, you’ll be taken to a color configure screen. When it asks you if 127.0.0.1 should be assigned to LOOPBACK, say NO.

A@B:~$ sudo dpkg-reconfigure portmap

[Select NO when asked (navigate with the TAB key)]

A@B:~$ sudo /etc/init.d/portmap restart
A@B:~$ sudo pico /etc/exports

[This file should look something like the following]

# /etc/exports: the access control list for filesystems which may be exported
# to NFS clients. See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes hostname1(rw,sync) hostname2(ro,sync)
#
# Example for NFSv4:
# /srv/nfs4 gss/krb5i(rw,sync,fsid=0,crossmnt)
# /srv/nfs4/homes gss/krb5i(rw,sync)
#

[add the following line to the bottom of this file]

/crystal06 *(rw,sync,no_subtree_check)

[Ctrl-X to exit (and Enter twice to save)]

A@B:~$ sudo /etc/init.d/nfs-kernel-server restart
A@B:~$ sudo exportfs -a
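
For reference, here is the export line again with the flags spelled out (meanings are from the exports(5) man page):

```
# /etc/exports entry for the work directory:
#   rw                clients may both read and write
#   sync              reply to requests only after changes are committed to disk
#   no_subtree_check  skip subtree checking (faster; fine for a dedicated export)
/crystal06 *(rw,sync,no_subtree_check)
```

The * means any host may mount the directory; on a closed 10.1.1.x switch that is acceptable, but you could tighten it to 10.1.1.0/24 instead.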

Step 10 HOST: Building MPICH-1.2.7p1

This should be the most complicated-looking step in this document, but should run smoothly with everything above accounted for.

A@B:~$ cd ~/Desktop
A@B:~$ gunzip mpich-1.2.7p1.tar.gz
A@B:~$ tar xvf mpich-1.2.7p1.tar

[You will have the directory mpich-1.2.7p1 on your Desktop]

A@B:~$ cd mpich-1.2.7p1/
A@B:~/mpich-1.2.7p1$ ./configure --with-device=ch_p4 --with-arch=LINUX --prefix=/crystal06/mpich-1.2.7 --with-romio=--with-file-system=nfs --enable-f90modules -fc=ifort -f90=ifort

[Considerable output should appear on the screen. Note that the --prefix flag will install mpich-1.2.7 into /crystal06. The -fc and -f90 flags direct the ./configure script to use the IFC]

A@B:~/mpich-1.2.7p1$ make

[More considerable output should appear on the screen]

A@B:~/mpich-1.2.7p1$ sudo make install

[This step will generate /crystal06/mpich-1.2.7]

The output of the make install step should be as follows:

if [ "/crystal06/mpich-1.2.7" = "/crystal06/mpich-1.2.7" ] ; then \
./bin/mpiinstall ; \
else \
./bin/mpiinstall -prefix=/crystal06/mpich-1.2.7 ; \
fi
Installing documentation …
Done installing documentation
Installing manuals
Done installing manuals
Installing MPE
Copying MPE include files to /crystal06/mpich-1.2.7/include
Copying MPE libraries to /crystal06/mpich-1.2.7/lib
Copying MPE utility programs to /crystal06/mpich-1.2.7/bin
About to run installation test for C programs…

** Testing if C application can be linked with logging library
/crystal06/mpich-1.2.7/bin/mpicc -DMPI_LINUX -DUSE_STDARG -DHAVE_PROTOTYPES -I/crystal06/mpich-1.2.7/include -c cpi.c
/crystal06/mpich-1.2.7/bin/mpicc -DMPI_LINUX -DUSE_STDARG -DHAVE_PROTOTYPES -o cpi_log cpi.o -L/crystal06/mpich-1.2.7/lib -llmpe -lmpe -lm
** C application can be linked with logging library

** Testing if C application can be linked with tracing library
/crystal06/mpich-1.2.7/bin/mpicc -DMPI_LINUX -DUSE_STDARG -DHAVE_PROTOTYPES -o cpi_trace cpi.o -L/crystal06/mpich-1.2.7/lib -ltmpe -lm
** C application can be linked with tracing library

** Testing if C application can use both automatic and manual logging together
/crystal06/mpich-1.2.7/bin/mpicc -DMPI_LINUX -DUSE_STDARG -DHAVE_PROTOTYPES -I/crystal06/mpich-1.2.7/include -c cpilog.c
/crystal06/mpich-1.2.7/bin/mpicc -DMPI_LINUX -DUSE_STDARG -DHAVE_PROTOTYPES -o cpilog cpilog.o -L/crystal06/mpich-1.2.7/lib -llmpe -lmpe -lm
** C application can use both automatic and manual logging together

About to run installation test for Fortran programs…

** Testing if Fortran77 application can be linked with logging library
/crystal06/mpich-1.2.7/bin/mpif77 -I/crystal06/mpich-1.2.7/include -c fpi.f
eval: 1: ifort: not found
make[2]: *** [fpi.o] Error 127
** Fortran77 application CANNOT be linked with logging library

Copying SLOG2SDK’s lib
Copying SLOG2SDK’s doc
Copying SLOG2SDK’s logfiles
Creating SLOG2SDK’s bin
Installed SLOG2SDK in /crystal06/mpich-1.2.7
/crystal06/mpich-1.2.7/sbin/mpiuninstall may be used to remove the installation
creating Makefile
About to run installation test…
/crystal06/mpich-1.2.7/bin/mpicc -c cpi.c
/crystal06/mpich-1.2.7/bin/mpicc -o cpi cpi.o -lm
/crystal06/mpich-1.2.7/bin/mpicc -c simpleio.c
/crystal06/mpich-1.2.7/bin/mpicc -o simpleio simpleio.o
/crystal06/mpich-1.2.7/bin/mpif77 -c pi3.f
eval: 1: ifort: not found
make[2]: *** [pi3.o] Error 127
/crystal06/mpich-1.2.7/bin/mpif77 -c pi3p.f
eval: 1: ifort: not found
make[2]: *** [pi3p.o] Error 127
make[1]: *** [all] Error 2
rm -f *.o *~ PI* cpi pi3 simpleio hello++ pi3f90 cpilog
rm -rf SunWS_cache ii_files pi3f90.f pi3p cpip *.ti *.ii
installed MPICH in /crystal06/mpich-1.2.7
/crystal06/mpich-1.2.7/sbin/mpiuninstall may be used to remove the installation.

You will note several warnings when you do this; they are annoying but not important. These errors occur because the IFC directory (containing ifort) is in your user PATH but not in the root PATH (so, when you sudo, Ubuntu doesn't know where ifort is). You can remedy this by repeating Step 7 above, but in the /root directory.

A@B:~$ cd /root
A@B:~$ sudo pico .profile

[At the bottom of the .profile file, add the following:]

PATH="/opt/intel/Compiler/11.0/083/bin/intel64:/crystal06/mpich-1.2.7/bin:/crystal06:$PATH"

[Ctrl-X to exit (and Enter twice to save)]

[source is a shell builtin, so "sudo source .profile" will fail with "command not found." Open a root shell instead and source it there:]

A@B:~$ sudo -i
root@B:~# source .profile
root@B:~# exit

You can check that the PATH and the MPICH compilation are correct by typing “mpirun” at the prompt. You should receive the message:

Missing: program name

This is an mpirun-can’t-find-the-program-you-want-to-run error and means that the PATH is right and mpirun at least starts properly.

Step 11 HOST: machines.LINUX

This file provides the list of machines that MPICH has accessible to it when performing a calculation (effectively, an address book of machines to call upon as processors are requested). We will refer to these machines by the names we used in the hostname file. This file will reside in /crystal06, the same place as the Pcrystal executable. When running the calculation with mpirun, you call this file with the -machinefile flag.

Now, a potential problem. The format for the machines.LINUX file is recommended to be:

machinename:number of CPUs on the machine

So, for the 8-core HOST machine and the two 8-core GUEST machines, the lines would be:

crystalhost:8
crystalguest1:8
crystalguest2:8

When I do it this way, I occasionally see more than 8 Pcrystal processes running on a single machine, as if MPICH doesn't know that it is supposed to run one process per processor. This does not happen every time, but having 9 or 10 processes on one box is bad both for speed and for the processors themselves. My solution is to simply use one line per processor in the machines.LINUX file, so that each processor appears by machine name. MPICH seems never to send more than one process per processor this way. I've not diagnosed this problem further and likely will not unless someone else prompts another look.

To generate the machines.LINUX file…

A@B:~$ pico /crystal06/machines.LINUX

[In this new file, add the following (these are for 8-core boxes. Obviously, add one hostname for each processor on the box). You can mix-and-match machines if you like (single-core, dual, quad, etc.), just make sure you've one hostname line per processor.]

crystalhost
.. 6 more ..
crystalhost
crystalguest1
.. 6 more ..
crystalguest1
crystalguest2
.. 6 more ..
crystalguest2

crystalguestNN
.. 6 more ..
crystalguestNN
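
Typing eight identical lines per box invites typos, so here is a small sketch that generates the file instead (the host names and the eight-cores-per-box assumption match this guide's cluster; adjust both for yours):

```shell
# Emit one hostname line per core, then write the result to machines.LINUX
gen_machines() {
  for host in crystalhost crystalguest1 crystalguest2; do
    i=0
    while [ "$i" -lt 8 ]; do   # 8 = cores per box
      echo "$host"
      i=$((i + 1))
    done
  done
}
gen_machines > machines.LINUX
wc -l < machines.LINUX         # 24 lines: one per processor
```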

Step 12 HOST: Crystal06 Selection And Setup

One final setup step before we move to the GUEST machines and, when the cluster is all together, set up SSH. The Crystal06 download in question is Linux-ifort_pentium4_mpich_1.2.7.fedora5_2.6.tar.gz, which you’ll have downloaded from the Crystal06 website (and, I assume, saved to the Desktop).

A@B:~$ cd ~/Desktop
A@B:~$ gunzip crystal06_v1_0_2_Linux-ifort_pentium4_mpich_1.2.7.fedora5_2.6.tar.gz
A@B:~$ tar xvf crystal06_v1_0_2_Linux-ifort_pentium4_mpich_1.2.7.fedora5_2.6.tar

[You should now have the directory [bin] on your Desktop]

A@B:~$ cd bin/crystal06_v1_0_2_Linux-ifort_pentium4_mpich_1.2.7.fedora5_2.6/v1_0_2/
A@B:~/bin/Linux-ifort_pentium4_mpich_1.2.7.fedora5_2.6/v1_0_2$ ls

[You should see crystal, Pcrystal, and properties. We will copy these to the crystal06/ folder]

A@B:~/bin/Linux-ifort_pentium4_mpich_1.2.7.fedora5_2.6/v1_0_2$ cp * /crystal06/

That’s the last step for the HOST machine until the GUEST machines are ready. Now, reboot the HOST machine, plug the ethernet cable into the switch, and bring on the GUEST machines. You will follow the series below for ALL machines, with the only difference at each machine being the IP address and hostname specification (I'll point these out as they come up).

Step 13 GUEST: Installation And System Update

See Step 1 HOST and Step 2 HOST. Exactly the same.

Step 14 GUEST: apt-get Installations Required For SSH, NFS, And 32-bit Pcrystal

Open a Terminal Window (Applications -> Accessories -> Terminal) and type (or copy + paste) the following:

A@B:~$ sudo apt-get install ia32-libs lib32asound2 lib32ncurses5 lib32nss-mdns lib32z1 lib32gfortran3 gcc-4.3-multilib gcc-multilib lib32gomp1 libc6-dev-i386 lib32mudflap0 lib32gcc1 lib32gcc1-dbg lib32stdc++6 lib32stdc++6-4.3-dbg libc6-i386 libstdc++5 g++ g++-4.3 libstdc++6-4.3-dev g++-multilib g++-4.3-multilib gcc-4.3-doc libstdc++6-4.3-dbg libstdc++6-4.3-doc nfs-common portmap ssh

[REPEAT: Note the many 32-bit libraries. These are required by both IFC and Pcrystal (making the 64-bit installation I used for this document back-compatible (in a way) with the 32-bit Pcrystal build). If you're working with a 32-bit Ubuntu installation, you can still just copy + paste this apt-get command to install missing programs and libraries. apt-get will skip libraries already present and only install those things missing from the machine. Admittedly, I've not done this on a 32-bit installation so do not know if, in fact, some of these libraries are NOT installed by default as part of the 32-bit installation process. That is, you may get fewer warnings about already-present libraries than I expect you to.]

The only difference between the GUEST list and the HOST list is the absence of nfs-kernel-server in the GUEST list.

Step 15 GUEST: Make The Crystal06 Directory

We’ll be placing the work directory right at the base of the directory tree (/) for each GUEST machine. Accordingly, you’ll need Administrative access to do this (hence the sudo). The three steps below make the directory, change the group permissions for the directory, and lets all users read + write + execute content in this directory.

A@B:~$ sudo mkdir /crystal06
A@B:~$ sudo chgrp -R users /crystal06
A@B:~$ sudo chmod a+wrx /crystal06

We will be mounting the HOST machine's /crystal06 folder in each GUEST machine's /crystal06 folder.

Step 16 GUEST: PATH Statement

The PATH specification adds the work directory (/crystal06) and the MPICH executable directory (/crystal06/mpich-1.2.7/bin). These folders are now populated on the HOST machine (these directories do not need to be accessible for you to specify the PATH statement).

A@B:~$ cd $HOME
A@B:~$ pico .profile

[At the bottom of the .profile file, add the following:]

PATH="/crystal06/mpich-1.2.7/bin:/crystal06:$PATH"

[Ctrl-X to exit (and Enter twice to save)]

A@B:~$ source .profile

Step 17 GUEST: Network Interface Setup And Machine Name

This step assigns the IP address to each GUEST machine, names the GUEST machine crystalguestN, and adds the list of all machine names in the cluster. We begin with the IP address:

A@B:~$ sudo pico /etc/network/interfaces

[Ignoring explanations, make your interfaces file look like the following]

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
address 10.1.1.N
netmask 255.255.255.0
broadcast 10.1.1.255
network 10.1.1.0

[Ctrl-X to exit (and Enter twice to save)]

Now, we change the name of this GUEST machine by modifying /etc/hostname

A@B:~$ sudo pico /etc/hostname

[Change the name to crystalguestN by simply replacing whatever is there with the following]

crystalguestN

[Ctrl-X to exit (and Enter twice to save)]

Finally, we add the IP addresses and machine names for the rest of the machines in the cluster.

A@B:~$ sudo pico /etc/hosts

[saving the explanations for other websites, make your hosts file look like this (adding lines for each machine). And there is no problem with having the machine you're on in this list. Just make sure to keep the N values straight for 127.0.1.1.]

127.0.0.1 localhost
127.0.1.1 crystalguestN
10.1.1.1 crystalhost
10.1.1.2 crystalguest1
10.1.1.3 crystalguest2

10.1.1.N crystalguestN

[This may or may not be at the bottom of the hosts file and does not require modification]

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

[Ctrl-X to exit (and Enter twice to save)]

We will propagate all of these changes by a reboot after all of the GUEST machine setups are complete.

Step 18 GUEST: NFS Folder Mounting

We do not set up NFS here, as that was done on the HOST machine. With the nfs-common package installed, everything needed to mount the HOST directory (which we opened to the world in the HOST setup) is already in place.

There are two ways to do this. The first is the immediate-connect method, which we will use to make sure everything works right. The second method permanently adds the HOST machine's /crystal06 folder to the GUEST file system so that this directory is automatically mounted at reboot (provided the HOST machine is on and plugged into the switch, of course).

A) Immediate-connect

A@B:~$ sudo mount crystalhost:/crystal06 /crystal06
A@B:~$ cd /crystal06/
A@B:/crystal06$ ls

The ls command should list the contents of the folder: crystal, Pcrystal, properties, and the mpich-1.2.7 directory. If you see these, you’re all set!

B) Permanent-connect

This is the final step in the GUEST setup before we reboot the GUEST machines, plug them into the switch, and set up SSH. We will add the HOST /crystal06 folder to /etc/fstab so that this directory is always mounted upon reboot (provided the HOST machine is up, of course).

A@B:~$ sudo pico /etc/fstab

# /etc/fstab: static file system information.
#
#
proc /proc proc defaults 0 0
# /dev/sda1 UUID=1be7d1a8-1098-40be-8ac4-f6f1dfa46c32 / ext3 relatime,errors=remount-ro 0 1
# /dev/sda5 UUID=d5750659-faba-4a0b-b659-ab3c7e9cde9f none swap sw 0 0
/dev/scd0 /media/cdrom0 udf,iso9660 user,noauto,exec,utf8 0 0

[Add the following line to the bottom]

10.1.1.1:/crystal06 /crystal06 nfs rw,noac 0 0
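
The same line, annotated (option meanings are from the nfs(5) man page):

```
# /etc/fstab entry for the shared work directory:
#   10.1.1.1:/crystal06   what to mount (the HOST's exported directory)
#   /crystal06            where to mount it locally
#   nfs                   filesystem type
#   rw                    mount read-write
#   noac                  disable attribute caching, so every node sees file
#                         changes promptly (slower, but safer when several
#                         machines write to the same directory)
#   0 0                   skip dump and fsck for this mount
10.1.1.1:/crystal06 /crystal06 nfs rw,noac 0 0
```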

That completes the basic GUEST setup. To propagate all changes, reboot the machines and plug them into the switch. For the remaining steps, we will NOT need to log into the GUEST machines directly; instead, we will perform the final setup steps directly from the HOST machine.

Step 19 From HOST: Testing The Cluster

At this point, I assume that all machines are rebooted and plugged in. To check connectivity, open a Terminal window and type the following:

A@B:~$ ping 10.1.1.N (where N is the number of each GUEST machine)

With luck, you’ll see something like the following for each machine:

PING 10.1.1.N (10.1.1.N): 56 data bytes
64 bytes from 10.1.1.N: icmp_seq=0 ttl=50 time=0.102 ms
64 bytes from 10.1.1.N: icmp_seq=1 ttl=50 time=0.100 ms

Step 20 From HOST: Setting Up SSH

The Secure SHell network protocol is familiar to any quantum chemist with NCSA time and is the method of choice for cross-cluster communication by MPICH. What we are going to do below is make the login from HOST to each GUEST password-less by placing a generated HOST authentication key on each GUEST (so the GUEST will see the login attempt from HOST, confirm the HOST is the HOST, and use the stored, encrypted copy of the password on the GUEST machine to allow the HOST to log in).

A@B:~$ ssh-keygen -t dsa

[You will see something like the following. Don't bother entering a passphrase.]

Generating public/private dsa key pair.
Enter file in which to save the key (/home/terahertz/.ssh/id_dsa):
Created directory ‘/home/terahertz/.ssh’.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/terahertz/.ssh/id_dsa.
Your public key has been saved in /home/terahertz/.ssh/id_dsa.pub.
The key fingerprint is:
33:8a:1e:14:59:ba:11:b7:7d:75:68:4d:af:09:4e:65 terahertz@terahertz2
The key’s randomart image is:
+–[ DSA 1024]—-+
| . o .+E |
| * o .o+.. |
| = . . ..o .|
| + . o . o |
| o S . o |
| . . . o |
| o . |
| . . |
| . |
+—————–+

A@B:~$ cd ~/.ssh
A@B:~/.ssh$ mv id_dsa.pub authorized_keys2

[The authorized_keys2 file will be copied onto each GUEST machine]

A@B:~/.ssh$ chmod 644 authorized_keys2

[Sets the permissions on this file]

[This series of steps is performed on ALL GUEST machines.]

A@B:~/.ssh$ sftp crystalguestN

[As we've not logged into any GUEST machine yet from the HOST, this first login will add the HOST machine to the "list of known hosts" on each GUEST machine. This screen will look like the following:]

The authenticity of host ‘crystalhost (10.1.1.1)’ can’t be established.
RSA key fingerprint is 5a:ab:43:5c:7a:82:1c:c2:30:ab:17:1d:7b:dc:35:5d.
Are you sure you want to continue connecting ? yes
Warning: Permanently added ‘10.1.1.1’ (RSA) to the list of known hosts.
username@10.1.1.1’s password:

[This is the password for the Administrative account on the GUEST machines, and we did use the same username/password combination for all machines.]

And you should now be logged into the GUEST machine.

sftp> cd .ssh
sftp> put authorized_keys2
sftp> bye

Now, when you ssh into each guest machine (ssh crystalguestN), you should automatically be taken to a prompt instead of immediately being prompted to provide a password. Try it.

And, in theory, we are ready for a Crystal06 calculation.

Step 21: Executing A Parallel Pcrystal Run

Finally, we attempt a calculation to see if everything is working. For the parallel version, your input file must be named INPUT (and only INPUT). While we would normally direct stdout to a text file (FILENAME.log), for the moment we’ll simply run Pcrystal and let the output fly across the screen (use Ctrl-C to cancel the calculation once you see it working properly).

Quick Test: From a Terminal Window on the HOST machine:

A@B:~$ cd /crystal06
A@B:~/crystal06$ pico INPUT

[copy + paste the contents of an INPUT file. One is provided below. Ctrl-X and Enter twice to exit.]

A@B:~/crystal06$ mpirun -machinefile machines.LINUX -np NN /crystal06/Pcrystal

[where NN is the number of processors you wish to use (here, 24). With luck, you'll see output scrolling on your screen.]

Long Run: From a Terminal window on the HOST machine:

A@B:~$ cd /crystal06
A@B:~/crystal06$ pico INPUT

[copy + paste the contents of an INPUT file. One is provided below. Ctrl-X and Enter twice to exit.]

A@B:~/crystal06$ mpirun -machinefile machines.LINUX -np NN /crystal06/Pcrystal >& FILENAME.log &

[where NN is the number of processors you wish to use (here, 24). This time the output goes into FILENAME.log instead of across the screen; follow the run's progress with tail -f FILENAME.log.]
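
One note on the redirection: >& FILENAME.log is csh/bash shorthand for sending both stdout and stderr to the log, and a plain Bourne shell does not understand it. The portable spelling, sketched here with a stand-in command in place of mpirun, is:

```shell
# Portable equivalent of ">& FILENAME.log": redirect stdout to the log,
# then point stderr (fd 2) at wherever stdout (fd 1) now goes
noisy() { echo "normal output"; echo "an error message" >&2; }
noisy > FILENAME.log 2>&1
cat FILENAME.log               # both lines are in the log
```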

Step 22: Clean Up Your Work Area

Abbreviations: IFC = Intel Fortran Compiler; NFS = Network File System; SSH = Secure Shell; GUI = Graphical User Interface; GNU = GNU’s Not Unix; MPI = Message Passing Interface; MTU = Maximum Transmission Unit; ASCII = American Standard Code for Information Interchange.

Sample INPUT File

This file should be in the same directory as Pcrystal and must be named INPUT. Simply copy + paste this text into pico.

Test - HMX B3LYP/6-31G(d,p) Optimization
CRYSTAL
0 0 0
14
6.54 11.05 8.7 124.3
14
6     -1.916099596340E-01  6.676349517289E-02 -2.182885523722E-01
6     -2.503804839553E-01 -1.120680815923E-01 -5.474423890830E-02
1     -8.473363602905E-02  2.156436070832E-02 -2.624745229043E-01
1     -3.118388852293E-01  1.333438306584E-01 -3.228685284194E-01
1     -3.403156096494E-01 -1.982400318802E-01 -1.157824844356E-01
1     -2.869510613627E-01 -8.418976395712E-02  4.756872577177E-02
7      4.052599333247E-01  3.611547587878E-03 -2.964084420881E-01
7     -3.482877872090E-01 -2.004714602231E-02 -2.045486619658E-01
7     -1.370629884135E-02  1.239068455614E-01 -3.942995393365E-02
7     -9.996263652745E-02  2.044290396001E-01  3.204467273623E-02
8      3.112065051423E-01  7.742872382639E-02 -4.246789313176E-01
8      2.919005769488E-01 -5.519244772114E-02 -2.454349230033E-01
8      5.311693851955E-02  2.484511082277E-01  1.848582062126E-01
8     -3.250235105003E-01  2.228619176219E-01 -6.094957200050E-02
OPTGEOM
TOLDEG
0.000010
TOLDEX
0.000040
END
END
8 4
0 0 6 2.0 1.0
5484.671700         0.1831100000E-02
825.2349500         0.1395010000E-01
188.0469600         0.6844510000E-01
52.96450000         0.2327143000
16.89757000         0.4701930000
5.799635300         0.3585209000
0 1 3 6.0 1.0
15.53961600        -0.1107775000         0.7087430000E-01
3.599933600        -0.1480263000         0.3397528000
1.013761800          1.130767000         0.7271586000
0 1 1 0.0 1.0
0.2700058000          1.000000000          1.000000000
0 3 1 0.0 1.0
0.800000000          1.00000000
7 4
0 0 6 2.0 1.0
4173.51100         0.183480000E-02
627.457900         0.139950000E-01
142.902100         0.685870000E-01
40.2343300         0.232241000
12.8202100         0.469070000
4.39043700         0.360455000
0 1 3 5.0 1.0
11.6263580        -0.114961000         0.675800000E-01
2.71628000        -0.169118000         0.323907000
0.772218000          1.14585200         0.740895000
0 1 1 0.0 1.0
0.212031300          1.00000000          1.00000000
0 3 1 0.0 1.0
0.800000000          1.00000000
6 4
0 0 6 2.0 1.0
.3047524880D+04   .1834737130D-02
.4573695180D+03   .1403732280D-01
.1039486850D+03   .6884262220D-01
.2921015530D+02   .2321844430D+00
.9286662960D+01   .4679413480D+00
.3163926960D+01   .3623119850D+00
0 1 3 4.0 1.0
.7868272350D+01  -.1193324200D+00   .6899906660D-01
.1881288540D+01  -.1608541520D+00   .3164239610D+00
.5442492580D+00   .1143456440D+01   .7443082910D+00
0 1 1 0.0 1.0
.1687144782D+00   .1000000000D+01   .1000000000D+01
0 3 1 0.0 1.0
.8000000000D+00   .1000000000D+01
1 3
0 0 3 1.0 1.0
.1873113696D+02   .3349460434D-01
.2825394365D+01   .2347269535D+00
.6401216923D+00   .8137573262D+00
0 0 1 0.0 1.0
.1612777588D+00   .1000000000D+01
0 2 1 0.0 1.0
.1100000000D+01   .1000000000D+01
99 0
END
DFT
B3LYP
XLGRID
END
EXCHSIZE
4050207
BIPOSIZE
4050207
TOLINTEG
7 7 7 7 14
MAXCYCLE
100
SCFDIR
TOLDEE
8
SHRINK
4 8
LEVSHIFT
5 1
FMIXING
40
END
END

www.crystal.unito.it
en.wikipedia.org/wiki/Density_functional_theory
www.gaussian.com
www.msg.ameslab.gov/GAMESS
accelrys.com/products/materials-studio/modules/dmol3.html
en.wikipedia.org/wiki/Message_Passing_Interface
www.mcs.anl.gov/research/projects/mpi/mpich1
software.intel.com/en-us/intel-compilers
en.wikipedia.org/wiki/Network_file_system
en.wikipedia.org/wiki/Secure_Shell
en.wikipedia.org/wiki/Symmetric_multiprocessing
en.wikipedia.org/wiki/Beowulf_(computing)
www.ubuntu.com
www.debian.org/doc/manuals/apt-howto
www.java.com/en
www.abinit.org
www.gromacs.org
ambermd.org
www.dell.com/content/products/productdetails.aspx/precn_t5400…
www.netgear.com/Products/Switches/DesktopSwitches/GS116.aspx
en.wikipedia.org/wiki/Jumbo_frame
www.apcc.com/resource/include/techspec_index.cfm?base_sku=SUA1500
en.wikipedia.org/wiki/Maximum_transmission_unit
www.somewhereville.com
www.opensuse.org
fedoraproject.org
help.ubuntu.com/community/UsingTheTerminal
www.mcs.anl.gov/research/projects/mpich2
www.open-mpi.org
www.mcs.anl.gov/research/projects/mpi/mpich1
www.pgroup.com
en.wikipedia.org/wiki/Pico_(text_editor)
en.wikipedia.org/wiki/Vi
en.wikipedia.org/wiki/Sudo
help.ubuntu.com/community/RootSudo
en.wikipedia.org/wiki/Bootstrap_Protocol
www.mozilla.com/firefox
en.wikipedia.org/wiki/Pdf
www.mcs.anl.gov/research/projects/mpi/mpich1/download.html
software.intel.com/en-us/articles/non-commercial-software-development

Building Parallel Abinit 5.6.x With OpenMPI 1.2.x (And NOT OpenMPI 1.3.x) From Sources In Ubuntu 8.x – iofn1.F90 Problem Solved

Wednesday, March 25th, 2009

This post is an update to my previous post on building Abinit with OpenMPI in Ubuntu, with this post providing a workaround (solution?) to a run-benign but ultimately thoroughly aggravating issue with starting calculations in the abinip parallel build.

The description of the procedure, and the problem in the OpenMPI 1.3.x build, is as taken from the previous page (repeated so that the error makes its way and embeds itself a little deeper into the search engines).

To run parallel Abinit on a multi-processor box (that is, SMP; the actual multi-node cluster setup is in progress), the command is SUPPOSED to be as follows:

mpirun -np N /opt/etsf/abinit/5.6/bin/abinip < input.file >& output

Where N is the number of processors. For mpirun, you need to specify the full path to the executable (which, for the build above, is where Abinit installs abinip when the build occurs in the /opt directory). The input.file specification is as per the Abinit users manual, so I won’t go into it here. You will also be asked to supply your password because I’ve done nothing to the setup of ssh (you are, in effect, logging into your own machine to run the MPI calculation).

Now, when the above is run, this is the error that I get:

abinit : nproc,me=           4           0
ABINIT

Give name for formatted input file:
At line 127 of file iofn1.F90 (unit = 5, file = 'stdin')
Fortran runtime error: End of file
abinit : nproc,me=           4           1
abinit : nproc,me=           4           2
abinit : nproc,me=           4           3
————————————————————————–
mpirun has exited due to process rank 0 with PID 7131 on
node terahertz-desktop exiting without calling “finalize”. This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
————————————————————————–

What is supposed to happen is that the input.file file lists the files that Abinit requires to perform the run and provides these files by name one-at-a-time as Abinit requests them upon start-up.  For some reason, the input.file file is not being read properly or is not being read at all before the job crashes.  Oddities noted in the above order of the output include

(1) the abinit : nproc,me= values are not grouped above the “Give name for formatted input file:” <- Abinit does not appear to be trying to read the text from the nproc,me lines as actual input data, as you have to provide all of the files before Abinit will crash with a wrong file name.

(2) At line 127 of file iofn1.F90 <- this is an Abinit file that is responsible for reading the contents of input.files. So, is the problem with this file in Abinit? Well…

(3) The serial build of Abinit (abinis) runs just fine with input.file <- which leads me to conclude that the problem is mpirun-related.  I hope to resolve this (I’m sure it will be trivial) and post my error accordingly.

What’s the work-around? Simple. Copy the contents of the input.file file (literally Ctrl+C with the text selected) and paste it after running this command:

mpirun -np N /opt/etsf/abinit/5.6/bin/abinip

Abinit will ask for the files in order AND your copied text includes the carriage returns at the end of each line, so you are effectively feeding Abinit the same content it would read from the input.file file if, in fact, it were capable of reading that file.
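
If you would rather script that paste than perform it by hand, a here-document delivers the same lines (trailing newlines included) on stdin. The sketch below uses cat as a stand-in for the mpirun … abinip command, since whether mpirun forwards stdin cleanly is exactly the OpenMPI version issue this post is about; the five file names are illustrative:

```shell
# Feed a fixed list of answers to any command's stdin via a here-document
feed_files() {
  "$@" <<'EOF'
hmx.in
hmx.out
hmx_i
hmx_o
tmp
EOF
}
feed_files cat                 # prints the five lines back, in order
```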

After considerable searching for NOT the error I was having with Abinit, I discovered the following thread [Post 1, Post 2, Post 3, Post 4, Post 5, Post 6] in the OpenMPI Users Mailing List (see?  Once these lists get populated with enough content, you’re bound to find just about everything).  This doesn’t directly address the problem (the problem is related but different, the origin of the problem is the same, and googling “OpenMPI” and “stdin” was what brought it to my attention).

The solution to the problem above is to build Open-MPI 1.2.x instead of Open-MPI 1.3.x.

NOTE: If, after building Open-MPI 1.2.x, you receive the following error the first time you run mpirun:

mpirun: error while loading shared libraries: libopen-rte.so.0: cannot open shared object file: No such file or directory

Simply type the following:

sudo ldconfig

To make the proper links to libopen-rte and associated libraries.  From the man page:

ldconfig creates the necessary links and cache to the most recent shared libraries found in the directories specified on the command line, in the file /etc/ld.so.conf, and in the trusted directories (/lib and /usr/lib). The cache is used by the run-time linker, ld.so or ld-linux.so. ldconfig checks the header and filenames of the libraries it encounters when determining which versions should have their links updated.
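A quick way to check whether the links are now in place is ldd, which lists how each shared library a binary needs is resolved; any entry reading “not found” is what sudo ldconfig should clear up.  In the sketch below, /bin/ls stands in for mpirun so the check runs anywhere; substitute the path to your own mpirun.

```shell
# 'ldd' shows how each required shared library resolves; a "not found"
# entry means the run-time linker cache still needs updating.
# /bin/ls is a stand-in; use your mpirun path in practice.
if ldd /bin/ls | grep -q 'not found'; then
    echo "unresolved libraries: run sudo ldconfig"
else
    echo "all libraries resolved"
fi
```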

The stdin problem, I think, may remain throughout the entire 1.3.x build series: the fix is reported in the trunk for 1.4.1 (not a great piece of deduction on my part), although I have no idea how quickly things happen in OpenMPI development.  Not being big on alpha/beta testing, and wanting to get Abinit running more than I wanted a final answer to this problem, I did not try installing the trunk build and testing accordingly.

Long-story-short, Abinit 5.6.5 and OpenMPI 1.2.x work just fine at the READ *.files step and, most importantly, Abinit runs can now be properly scripted to run without my having to be by the machine to copy+paste the contents of the *.files file.

www.somewhereville.com/?p=384
www.abinit.org
www.open-mpi.org
www.ubuntu.com
www.open-mpi.org/software/ompi/v1.3
en.wikipedia.org/wiki/Symmetric_multiprocessing
www.open-mpi.org/community/lists/users/2008/12/7520.php
www.open-mpi.org/community/lists/users/2008/12/7522.php
www.open-mpi.org/community/lists/users/2008/12/7531.php
www.open-mpi.org/community/lists/users/2008/12/7536.php
www.open-mpi.org/community/lists/users/2008/12/7539.php
www.open-mpi.org/community/lists/users/2008/12/7537.php
www.open-mpi.org/community/lists/ompi.php
www.google.com
www.open-mpi.org/software/ompi/v1.2/
linux.die.net/man/8/ldconfig
www.open-mpi.org/nightly/trunk

Amber And Ubuntu Part 2. Amber10 (Parallel Execution) Installation In Ubuntu 8.10 (Intrepid Ibex) With OpenMPI 1.3… And Commentary

Sunday, March 15th, 2009

After considerable trial and building/testing errors, what follows is as simplified a complete installation and (non-X11/QM) testing of Amber10 and OpenMPI 1.3 as I think can be procedure’d in Ubuntu 8.10 (and likely previous and subsequent Ubuntu versions), dealing specifically with assorted issues with root permissions and variable definitions as per the standard procedure for Amber10 installation.

I’ll begin with the short procedure and bare minimum notes, then will address a multitude of specific problems that may (did) arise during all of the build procedures.  The purpose for listing everything, it is hoped, is to make these errors appear in google during searches so that, when you come/came across the errors, your search will have provided some amount of useful feedback (and, for a few of the problems I had with previous builds of other programs, this blog is the ONLY thing that comes up in google).

Some of the content below is an extension of the single-processor build of Amber10 I posted previously.  In the interest of keeping the reading to a minimum, the short procedure below is light on explanations that are provided in the long procedure that follows.  If you’re running on a multi-core computer (and who isn’t anymore), you likely want to take advantage of the MPI capability, so I would advise simply following the procedure below and IGNORING the previous post (as I also state at the top of the previous page).

Enough blabbering.

LEGEND

Text in black – my ramblings.

Text in bold preformatted red – things you will type in the Terminal.

Text in green – text you will either see or will type into files (using pico, my preference)

Amber10, OpenMPI 1.3, Ubuntu 8.10: The Easy, Teenage New York Version

0a. I assume you’re working from a fresh installation of Ubuntu 8.10.  Some of what is below is Ubuntu-specific because of the way that Ubuntu decides to deal with the Root/Administrator/User relationship.  That said, much is the same and I’ll try to remark accordingly.

0b. As with the previous post, I am writing this with an expected audience of a non-technical Linux user, which means more detail of the steps and not just “cd $HOME and ./config -> make -> make install.”  If the steps are too obvious to you, find someone who thinks a Terminal window is one that won’t close and let them do it.  Speaking of…

0c. Everything below is done from a Terminal window.  The last icon you will use is the Terminal icon you click on.  If you’ve never had the pleasure in Ubuntu Desktop, go to Applications -> Accessories -> Terminal.  You can save yourself some time dragging around menus by hovering over the Terminal icon, right-clicking, and “Add this launcher to panel.”

0d. One more thing for the absolute newbies.  I am assuming that you’ve used Firefox to download OpenMPI 1.3, the AmberTools 1.2 and Amber10 bz2 files, and the bugfix.all files for AmberTools 1.2 and Amber10 (more on that in 0e).  That’s all you’ll need for the installation below.  Note that I’m not dealing with X11 (library specifications and other testing to do first), MKL (the Intel Math Kernel Library), or the GOTO BLAS libraries.  The default download folder for Firefox in a fresh installation of Ubuntu is the Desktop.  I will assume your files are on the Desktop, which will then be moved around in the installation procedure below.

0e. bugfix.all (Amber10) and bugfix.all (AmberTools 1.2) – Yes, the Amber people provide both of these files for the two different programs using the same name.  As I assume you’re using Firefox to download these files from their respective pages, I recommend that you RENAME the bugfix.all file for AmberTools 1.2 to bugfix_at.all, which is the naming convention I use in the patching step of the installation process (just to keep the confusion down).  In case you’ve not had the pleasure yet, instead of clicking on the bugfix.all link on each page and saving the file that loads, simply right-click and “Save Link As…”, then save the AmberTools 1.2 bugfix.all file as bugfix_at.all.

0f. This OpenMPI 1.3 build and series of Amber10 tests assumes SMP (symmetric multi-processing) only, meaning you run only with the CPUs on your motherboard.  The setup of OpenMPI for cluster-based computing is a little more complicated and is not presented below (but is in process for write-up).

0g. And, further, I am not using the OpenMPI you can install by simply sudo apt-get install libopenmpi1 libopenmpi-dev openmpi-bin openmpi-doc for two reasons.  First, that installs OpenMPI 1.2.8 (I believe), which mangles one of the tests in such a way that I feared for subsequent stability in Amber10 work I might find myself doing.  Second, I want to eventually be able to build OpenMPI around other Fortran compilers (specifically g95 or the Intel Fortran Compiler) or INSERT OPTION X and, therefore, prefer to build from source.

So, from a Terminal window, the mostly comment-free/comment-minimal and almost flawless procedure is as follows.  Blindly assume that everything I am doing, especially in regards to WHERE this build is occurring and WHAT happens in the last few steps, is for a good reason (explained in detail later)…

0. cd $HOME (if you’re not in your $HOME directory already or don’t know where it is)

1. sudo apt-get update (requires administrative password)

2. sudo apt-get install ssh g++ g++-multilib g++-4.3-multilib gcc-4.3-doc libstdc++6-4.3-dbg libstdc++6-4.3-doc flex bison fort77 netcdf-bin gfortran gfortran-multilib gfortran-doc gfortran-4.3-multilib gfortran-4.3-doc libgfortran3-dbg autoconf autoconf2.13 autobook autoconf-archive gnu-standards autoconf-doc libtool gettext patch libblas3gf liblapack3gf libgfortran2 markdown csh (this is my ever-growing list of necessary programs and libraries that are not installed as part of a fresh Ubuntu installation)

3. pico .bashrc

Add the following lines to the bottom of this file.  This will make more sense shortly…

AMBERHOME=$HOME/Documents/amber10/
export AMBERHOME

MPI_HOME=/
export MPI_HOME

Ctrl-X and the Enter Key twice to Exit

4. source .bashrc

5. pico .profile

Add the following line to the bottom of this file.

PATH="$HOME/Documents/amber10/exe:$PATH"

Ctrl-X and the Enter Key twice to Exit

6. source .profile

7. mv $HOME/Desktop/Amber* $HOME/Documents/ (this assumes the downloaded files are on the desktop)

8. mv $HOME/Desktop/openmpi-1.3 $HOME/Documents/ (this assumes the downloaded files are on the desktop)

9. cd $HOME/Documents/

10. gunzip openmpi-1.3.tar.gz

11. tar xvf openmpi-1.3.tar

12. cd openmpi-1.3

13. ./configure --prefix=/ (installs the MPI binaries and libraries into /. Avoids library errors I ran across despite MPI_HOME, but may be fixable. Copious output to follow.  For my results, click HERE)

14. sudo make all install (this installs binaries and libraries into “/”. Copious output to follow.  For my results, click HERE)

15. cd $HOME/Documents/

16. tar xvjf Amber10.tar.bz2

17. tar xvjf AmberTools-1.2.tar.bz2

18. mv $HOME/Desktop/bugfix* $AMBERHOME (moves bugfix files to the Amber directory)

19. cd $AMBERHOME

20. patch -p0 -N -r patch-rejects < bugfix_at.all (patches AmberTools)

21. patch -p0 -N -r patch-rejects < bugfix.all (patches Amber10)

22. cd src/

23. ./configure_at -noX11

24. make -f Makefile_at (copious output to follow.  For my results, click HERE)

25. cd ../bin/

26. pico mopac.sh

At the top of the file, change sh to bash

Ctrl-X and the Enter Key twice to Exit

27. cd ../test/

28. make -f Makefile_at test (copious output to follow.  For my results, click HERE)

29. cd ../src/

30. ./configure_amber -openmpi gfortran

31. make parallel (copious output to follow.  For my results, click HERE)

32. cd $HOME/.ssh/ (at this step, we allow auto-login in ssh so that the multiple mpirun tests do not require that you supply your password constantly)

33. ssh-keygen -t dsa

34. cat id_dsa.pub >> authorized_keys2

35. chmod 644 authorized_keys2

36. cd $AMBERHOME/test/

37. csh

38. setenv DO_PARALLEL 'mpirun -np N' (here, N is the number of processors you wish to use on your mobo)

39. make test.parallel.MM (copious output to follow.  For my results, click HERE)

40. exit (exits the csh shell)

41. sudo cp -r $HOME/Documents/amber10 /opt/amber10

42. cd $HOME

43. rm -r $HOME/Documents/amber10 (this deletes the build directory.  Delete or keep as you like)

44. pico .bashrc

Make the following change to the bottom of this file.

AMBERHOME=/opt/amber10/

Ctrl-X and the Enter Key twice to Exit

45. source .bashrc

46. pico .profile

Make the following change to the bottom of this file.

PATH="/opt/amber10/exe:$PATH"

Ctrl-X and the Enter Key twice to Exit

47. source .profile

That is it!  In theory, you should now have a complete and tested Amber10 and AmberTools 1.2 build sitting in /opt/amber10.
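As a quick sanity check on the result, the sketch below confirms the variables and the PATH entry from the steps above (the paths are taken from this procedure; adjust them if you installed somewhere other than /opt/amber10 or $HOME/Documents/amber10).

```shell
# Post-install sanity check (paths assumed from the steps above).
export AMBERHOME="$HOME/Documents/amber10/"
export PATH="$HOME/Documents/amber10/exe:$PATH"
echo "AMBERHOME=$AMBERHOME"
# sander.MPI should resolve once the build/copy is actually done:
command -v sander.MPI >/dev/null && echo "sander.MPI on PATH" \
    || echo "sander.MPI not on PATH yet"
```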

Amber10, OpenMPI 1.3, Ubuntu 8.10: The Notes

What follows is the complete list of problems, questions, errors, issues, and general craziness from much trial and error for the installation procedure above.  As you can guess from my constant mentioning of Ubuntu in my statements-with-qualifications, some of these problems likely will not occur in other distros.  My primary reason for the extended discussion below is so that the errors and issues make their way into google so that people searching for fixes to these problems (if they come across them) will see actual content (if they choose to read it) and maybe find a reasonable fix.

I’ll be expanding in sections by grouping numbers above.

0a – 0g Installation Preparations

I’ve not much to add here except that OpenMPI is likely not the only way to install MPI Amber10 in Ubuntu, but I think it is easier than MPICH2 to set up cross-cluster calculations on a switch’ed network.  My stronger preference for OpenMPI stems from both past positive experience with OpenMPI and GROMACS (specifically on my Macbook Pro) and eventual success with OpenMPI 1.2.X and Abinit 5.6.5.  I had hoped to use the same version of OpenMPI for GROMACS, Amber10, and Abinit, but ran into a yet-to-be-resolved issue with OpenMPI 1.3.x in the building of Abinit (the problem is, apparently, resolved in the upcoming 1.4.1 build, but I’m not much for using release candidates.  I’ll be discussing this in an upcoming Abinit installation post based on my previous Abinit installation post).

1 – 2 apt-get

My listed apt-get installation set contains many, many programs and libraries that are not necessarily needed in the OpenMPI and Amber10 build but are required for other programs.  The apt-get approach is still much cleaner than installing the entire OpenSuse or Fedora DVD, but you do find yourself scrambling the first time you try to install anything to determine what programs are missing.  You don’t know you need csh installed until the test scripts fail.  You forget about ssh until you run mpirun for the first time.  I do not yet know if MKL, GOTO, or any of the X11-based AmberTools programs require additional libraries to be installed, so the above apt-get list may grow.  Check the comments section at bottom for updates.

The list below shows essential programs and libraries AND their suggested additional installs. As long as you’re online and apt-get‘ing anyway, might as well not risk missing something for your next compiling adventure.

g++ g++-multilib  g++-4.3-multilib  gcc-4.3-doc  libstdc++6-4.3-dbg  libstdc++6-4.3-doc

gfortran gfortran-multilib gfortran-doc gfortran-4.3-multilib gfortran-4.3-doc libgfortran3-dbg

autoconf autoconf2.13 autobook autoconf-archive gnu-standards autoconf-doc libtool gettext

flex bison

ssh csh patch markdown fort77 netcdf-bin libblas3gf liblapack3gf libgfortran2
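Since you often don’t know a tool is missing until a build or test script fails, a pre-flight check like this sketch can flag gaps before you start (the tool list here is just a sample; extend it as you like).

```shell
# Flag any build/test tools that are not yet installed.
# The list is a sample; add whatever your build needs.
for tool in csh ssh patch gfortran make; do
    command -v "$tool" >/dev/null || echo "missing: $tool (apt-get install it)"
done
```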

3 – 9 .bashrc and .profile Modifications, Building In Your Own Directory

For a number of programs, the procedure from source is ./configure, make, and make install, with make install often responsible only for moving files into the correct directories (specifically, directories only root can write into, such as /usr/local/bin, /lib, and /opt).  In many distributions, this final step is actually sudo make install.  This division of make and make install is not preserved in Amber, which complicates the Ubuntu build slightly.  Building in $HOME/Documents (as I’ve described the procedure above) saves you from having to constantly sudo the extraction and building process in directories you, the typical user, do not have access to write into.

Working in $HOME/Documents (or any of your $HOME folders) allows for the complete build and testing of AmberTools and Amber10.

The other benefit from doing as much in $HOME as possible is the lack of a need to define variables as the root user (specifically, AMBERHOME, MPI_HOME, and DO_PARALLEL) by setting variables in the root .bashrc and .profile files, adding lines to the Makefiles for AmberTools and Amber10, or setting variables at prompts.  This variable specification issue arises because when you run a program with sudo, you invoke the root privileges and the root variable definitions, so any specifications you make for your PATH or these Amber-specific variables are lost.

Once the build is complete in the $HOME/Documents folder, we move the entire directory into /opt, my default location for all programs I build from source (but $HOME/Documents is just fine as well once the PATH is set).
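The sudo-eats-your-variables behavior is easy to demonstrate without sudo at all: env -i starts a child process with a scrubbed environment, much as sudo swaps in root’s.  A quick sketch (not Amber-specific):

```shell
# User-level exports do not survive into a root-like environment.
export AMBERHOME="$HOME/Documents/amber10/"
echo "your shell sees:      $AMBERHOME"
# env -i scrubs the environment, as sudo effectively does:
env -i sh -c 'echo "root-like shell sees: [$AMBERHOME]"'
# prints: root-like shell sees: []
```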

10 – 14 OpenMPI 1.3

So, why not then use OpenMPI 1.2.x?  The build of Amber10 works just fine with OpenMPI 1.2.x in Ubuntu with all of the installation specifications described in Step 2 of the short procedure (the extensive apt-get).  The problem with 1.2.x occurs for a single test after the Amber10 build that I’ve not yet figured out a workaround for, but the error is sinister enough that I decided to not risk similar errors in my own Amber work and, instead, use OpenMPI 1.3.x, which does not suffer the same error.  The only (ONLY) test to fail with OpenMPI 1.2.x is the cnstph (constant pH simulation) test, which runs perfectly well but fails at the close of the calculation (you can test this yourself by changing the nstlim value in the mdin file to any arbitrarily large number).  The failure message, which kills the test series, is below.  This job also fails if you simply try to run it independently of the pre-defined test set (not the make test.parallel.MM, but cd’ing into the cnstph directory, setting variables, and running the script).

==============================================================
cd cnstph && ./Run.cnstph
[ubuntu-desktop:18263] *** Process received signal ***
[ubuntu-desktop:18263] Signal: Segmentation fault (11)
[ubuntu-desktop:18263] Signal code: Address not mapped (1)
[ubuntu-desktop:18263] Failing at address: 0x9069d2cb0
[ubuntu-desktop:18264] *** Process received signal ***
[ubuntu-desktop:18265] *** Process received signal ***
[ubuntu-desktop:18265] Signal: Segmentation fault (11)
[ubuntu-desktop:18265] Signal code: Address not mapped (1)
[ubuntu-desktop:18265] Failing at address: 0x907031c70
[ubuntu-desktop:18264] Signal: Segmentation fault (11)
[ubuntu-desktop:18264] Signal code: Address not mapped (1)
[ubuntu-desktop:18264] Failing at address: 0x906dbdc00
[ubuntu-desktop:18263] [ 0] /lib/libpthread.so.0 [0x2b17b5bef0f0]
[ubuntu-desktop:18263] [ 1] /usr/local/lib/libopen-pal.so.0(_int_free+0x57) [0x2b17b4c10937]
[ubuntu-desktop:18263] [ 2] /usr/local/lib/libopen-pal.so.0(free+0xeb) [0x2b17b4c122bb]
[ubuntu-desktop:18263] [ 3] /home/userid/Documents/amber10/exe/sander.MPI(sander_+0x73ce) [0x4c6ec2]
[ubuntu-desktop:18264] [ 0] /lib/libpthread.so.0 [0x2abd8f5810f0]
[ubuntu-desktop:18264] [ 1] /usr/local/lib/libopen-pal.so.0(_int_free+0x57) [0x2abd8e5a2937]
[ubuntu-desktop:18264] [ 2] /usr/local/lib/libopen-pal.so.0(free+0xeb) [0x2abd8e5a42bb]
[ubuntu-desktop:18264] [ 3] /home/userid/Documents/amber10/exe/sander.MPI(sander_+0x73ce) [0x4c6ec2]
[ubuntu-desktop:18264] [ 4] /home/userid/Documents/amber10/exe/sander.MPI(MAIN__+0xf0a) [0x4bfa66]
[ubuntu-desktop:18264] [ 5] /home/userid/Documents/amber10/exe/sander.MPI(main+0x2c) [0x88309c]
[ubuntu-desktop:18264] [ 6] /lib/libc.so.6(__libc_start_main+0xe6) [0x2abd8f7ad466]
[ubuntu-desktop:18264] [ 7] /home/userid/Documents/amber10/exe/sander.MPI [0x43a649]
[ubuntu-desktop:18264] *** End of error message ***
[ubuntu-desktop:18263] [ 4] /home/userid/Documents/amber10/exe/sander.MPI(MAIN__+0xf0a) [0x4bfa66]
[ubuntu-desktop:18263] [ 5] /home/userid/Documents/amber10/exe/sander.MPI(main+0x2c) [0x88309c]
[ubuntu-desktop:18263] [ 6] /lib/libc.so.6(__libc_start_main+0xe6) [0x2b17b5e1b466]
[ubuntu-desktop:18263] [ 7] /home/userid/Documents/amber10/exe/sander.MPI [0x43a649]
[ubuntu-desktop:18263] *** End of error message ***
[ubuntu-desktop:18265] [ 0] /lib/libpthread.so.0 [0x2ba0032600f0]
[ubuntu-desktop:18265] [ 1] /usr/local/lib/libopen-pal.so.0(_int_free+0x57) [0x2ba002281937]
[ubuntu-desktop:18265] [ 2] /usr/local/lib/libopen-pal.so.0(free+0xeb) [0x2ba0022832bb]
[ubuntu-desktop:18265] [ 3] /home/userid/Documents/amber10/exe/sander.MPI(sander_+0x73ce) [0x4c6ec2]
[ubuntu-desktop:18265] [ 4] /home/userid/Documents/amber10/exe/sander.MPI(MAIN__+0xf0a) [0x4bfa66]
[ubuntu-desktop:18265] [ 5] /home/userid/Documents/amber10/exe/sander.MPI(main+0x2c) [0x88309c]
[ubuntu-desktop:18265] [ 6] /lib/libc.so.6(__libc_start_main+0xe6) [0x2ba00348c466]
[ubuntu-desktop:18265] [ 7] /home/userid/Documents/amber10/exe/sander.MPI [0x43a649]
[ubuntu-desktop:18265] *** End of error message ***
mpirun noticed that job rank 0 with PID 18262 on node ubuntu-desktop exited on signal 11 (Segmentation fault).
3 additional processes aborted (not shown)
./Run.cnstph:  Program error
make[1]: *** [test.sander.GB] Error 1
make[1]: Leaving directory `/home/userid/Documents/amber10/test’
make: *** [test.sander.GB.MPI] Error 2

Errors like this one scare me to no end, especially when the error seems to be in the proper termination of a process (such as writing final positions or data files) and you risk such errors occurring after 2-week simulations with no way to get your data back.  If you decide (if it’s already installed, for instance) to use OpenMPI 1.2.x with Amber10 but still want to test the build, I’d suggest simply commenting out (#) the line

# cd cnstph && ./Run.cnstph

from the Makefile in the ../test directory.  I can’t imagine this happens in all other distributions, but I also don’t know what the problem could be given that all of the other tests work just fine.  That said, there’s a failed test for the OpenMPI 1.3.x build as well when you forget to run the tests from csh, but that has nothing to do with OpenMPI (see below).
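If you do go the comment-out route, sed can do it non-interactively; the sketch below works on a stand-in fragment so it runs anywhere, so point the same sed line at the real ../test/Makefile yourself (and keep the .bak backup it writes).

```shell
# Stand-in Makefile fragment; the real target is $AMBERHOME/test/Makefile.
printf '\tcd cnstph && ./Run.cnstph\n' > Makefile.demo

# '&' in the replacement re-inserts the matched text after the '#'.
sed -i.bak 's|cd cnstph && ./Run.cnstph|# &|' Makefile.demo
cat Makefile.demo
```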

16 – 19 Building Amber10 and AmberTools At $HOME

As described in the 3 – 9 section above, the problem with not building in your $HOME directory is the passing of variables in the build processes.  For instance, if you set MPI_HOME in your $HOME .bashrc file and then sudo make parallel, the error you’ll see is

Starting installation of Amber10 (parallel) at Mon Mar  9 22:30:36 EDT 2009.
cd sander; make parallel
make[1]: Entering directory `/opt/amber10/src/sander’
./checkparconf
cpp -traditional -I/usr/local/include -P -xassembler-with-cpp -Dsecond=ambsecond -DBINTRAJ -DMPI  constants.f > _constants.f
/usr/local/bin/mpif90 -c -O3 -fno-range-check -fno-second-underscore -ffree-form  -o constants.o _constants.f
Cannot open configuration file /usr/share/openmpi/mpif90-wrapper-data.txt
Error parsing data file mpif90: Not found
make[1]: *** [constants.o] Error 243
make[1]: Leaving directory `/opt/amber10/src/sander’
make: *** [parallel] Error 2

because the MPI_HOME variable is not specified for root.  Performing the compilation in $HOME/Documents avoids this issue.

If you want to build Amber and AmberTools in /opt as root for some reason and do not want to deal with modifying the .bashrc and .profile files in /root, you can modify the appropriate Amber files to define the variables needed for both building and testing. For AmberTools testing, you need to define AMBERHOME, which you do at the top of Makefile_at.

cd ../test/

sudo pico Makefile_at

include ../src/config.h

AMBERHOME=/opt/amber10/
export AMBERHOME

test: is_amberhome_defined \

sudo make -f Makefile_at test

For the Amber10 build process, you would need to modify configure_amber at the top of the file to specify the MPI_HOME variable.

sudo pico configure_amber

#!/bin/sh
#set -xv

MPI_HOME=/
export MPI_HOME

command="$0 $*"

For the Amber10 testing process, you would need to assign both AMBERHOME (so the tests know where to look for the executables) and DO_PARALLEL (so the tests know to use OpenMPI) at the top of the file.

cd ../test/

sudo pico Makefile

include ../src/config_amber.h

AMBERHOME=/opt/amber10
export AMBERHOME

DO_PARALLEL=mpirun -np 4
export DO_PARALLEL

SHELL=/bin/sh

It otherwise makes no difference at all how you choose to do things so long as the program gets built.  Much of the Ubuntu literature I’ve stumbled across attempts to make people avoid doing anything to change the root account settings, which was the approach I chose to use in the “Easy” procedure above.

20 – 21 Patching Amber10 and AmberTools 1.2

No surprises and not necessary for building.  Do it anyway.  And patch is included as one of the apt-get‘ed programs.  Simply be cognizant of the naming of the bugfix files (the directories are the same for both Amber10 and AmberTools and the patch is simply applied to the files it finds).
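For anyone unsure what the -p0/-N/-r flags are doing, here is a miniature patch round-trip on stand-in files (the real bugfix files touch many sources, but the mechanics are identical): -p0 keeps the full path from the diff header, -N skips already-applied hunks, and -r names the reject file.

```shell
# Tiny target file and a one-hunk unified diff for it:
printf 'old line\n' > demo.txt
printf -- '--- demo.txt\n+++ demo.txt\n@@ -1 +1 @@\n-old line\n+new line\n' > demo.patch

# Same flags as the Amber patching steps:
patch -p0 -N -r patch-rejects < demo.patch
cat demo.txt
```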

22 – 24 Building AmberTools

Your choices for the AmberTools build are fairly limited.  Via configure_at --help,

Usage: ./configure_at [flags] compiler

where compiler is one of:

gcc, icc, solaris_cc, irix_cc, osf1_cc

If not specified then gcc is used.

Option flags:

-mpi        use MPI for parallelization
-scalapack  use ScaLAPACK for linear algebra (utilizes MPI)
-openmp     Use OpenMP pragmas for parallelization (icc, solaris_cc,gcc(>4.2))
-opteron    options for solaris/opteron
-ultra2     options for solaris/ultra2
-ultra3     options for solaris/ultra3
-ultra4     options for solaris/ultra4
-bit64      64-bit compilation for solaris
-perflib    Use solaris performance library in lieu of LAPACK and BLAS
-cygwin     modifications for cygwin/windows
-p4         use optimizations specific for the Intel Pentium4 processor
-altix      use optimizations specific for the SGI Altix with icc
-static     create statically linked executables
-noX11      Do not build programs that require X11 libraries, e.g. xleap.
-nobintraj  Delete support for binary (netCDF) trajectory files
-nosleap    Do not build sleap, which requires unantiquated compilers.

Environment variables:
MKL_HOME    If present, will link in Intel’s MKL libraries (icc,gcc)
GOTO        If present, and MKL_HOME is not set, will use this location
for the Goto BLAS routines

We’re building gcc (default) with the above installs.  Running ./configure_at -noX11 with the apt-get-installed programs above should produce the following output.

Setting AMBERHOME to /home/userid/Documents/amber10

Testing the C compiler:
mpicc  -m64 -o testp testp.c
OK

Obtaining the C++ compiler version:
g++ -v
The version is ../src/configure
4.3.2
[: 520: 3: unexpected operator
OK

Testing the g77 compiler:
g77 -O2 -fno-automatic -finit-local-zero -o testp testp.f
./configure_at: 538: g77: not found
./configure_at: 539: ./testp: not found
Unable to compile a Fortran program using g77 -O2 -fno-automatic -finit-local-zero

Testing the gfortran compiler:
gfortran -O1 -fno-automatic -o testp testp.f
OK

Testing flex:
OK

Configuring netcdf; (may be time-consuming)

NETCDF configure succeeded.

The configuration file, config.h, was successfully created.

The next step is to type ‘make -f Makefile_at’

25 – 28 Testing AmberTools

As a quick head’s up, if you don’t have csh installed, your test will fail at the following step with the following error:

cd ptraj_rmsa && ./Run.rms
/bin/sh: ./Run.rms: not found
make: *** [test.ptraj] Error 127

Otherwise, you can see the output from my test set HERE.  Installing csh is much easier than modifying multiple Run scripts.

That said, the one fix I do perform is to modify the mopac.sh file only slightly in accordance with the post by Mark Williamson on the Vanderbilt University Amber Listserve (the first stumbling block I ran into during testing).

29 – 31 Building Parallel Amber10

Running ./configure_amber -openmpi gfortran should output the following:

Setting AMBERHOME to /home/userid/Documents/amber10

Setting up Amber configuration file for architecture: gfortran
Using parallel communications library: openmpi
The MKL_HOME environment variable is not defined.

Testing the C compiler:
gcc  -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -O2 -m64 -o testp testp.c
OK

Testing the Fortran compiler:
gfortran -O0 -fno-range-check -fno-second-underscore -o testp testp.f
OK

——   Configuring the netCDF libraries:   ——–

Configuring netcdf; (may be time-consuming)
NETCDF configure succeeded.
MPI_HOME is set to /

The configuration file, config_amber.h, was successfully created.

32 – 35 Automatic ssh Login

This step saves you from constantly having to input your password during the mpirun testing phase.  The password prompting comes from ssh (and, because I have machines both online and accessible, I prefer to set the ssh side up right instead of going without that layer of security).  The first time you run mpirun, ssh will throw the following back at you:

The authenticity of host ‘userid-desktop (127.0.1.1)’ can’t be established.
RSA key fingerprint is eb:86:24:66:67:0a:7a:7b:44:95:a6:83:d2:a8:68:01.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added ‘userid-desktop’ (RSA) to the list of known hosts.

Generating the automatic login file for ssh will look like the following:

Generating public/private dsa key pair.
Enter file in which to save the key (/home/userid/.ssh/id_dsa):
Created directory ‘/home/userid/.ssh’.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/userid/.ssh/id_dsa.
Your public key has been saved in /home/userid/.ssh/id_dsa.pub.
The key fingerprint is:
54:84:68:79:3b:ec:17:41:c9:96:98:10:df:4f:cc:42 userid@userid-desktop
The key’s randomart image is:
+–[ DSA 1024]—-+

36 – 40 Testing Amber10

If you’re not in csh, either this test fails…

cd rdc && ./Run.dip
if: Badly formed number.
make[1]: *** [test.sander.BASIC] Error 1
make[1]: Leaving directory `/home/userid/Documents/amber10/test’
make: *** [test.sander.BASIC.MPI] Error 2

or this one…

cd pheMTI && ./Run.lambda0
This test must be run in parallel
make: *** [test.sander.TI] Error 1

Again, re-writing scripts is far less fun than sudo apt-get install csh and forgetting about it.
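For the curious, the variable dance that causes those failures boils down to csh’s setenv versus sh’s export; the sketch below shows the sh side runnably, with the csh equivalent as a comment in case csh isn’t installed.

```shell
# sh/bash syntax for what the test instructions do in csh:
sh -c 'DO_PARALLEL="mpirun -np 4"; export DO_PARALLEL; echo "sh sets: $DO_PARALLEL"'

# the csh equivalent, once csh is installed:
#   setenv DO_PARALLEL 'mpirun -np 4'
```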

41 – 47 Moving The Built Amber10 and AmberTools

The final sequence of steps moves the $HOME/Documents-built amber10 into /opt (if you choose to), removes the build from your $HOME directory, and resets your PATH and AMBERHOME variables in .bashrc and .profile, thereby completing the build process.

And Finally…

If questions are raised, comments are thought of, speed-ups identified, etc., please either send me an email or post them here.  Our concern as computational chemists should be making predictions and interpreting data, not making compilation errors and interpreting error messages.

en.wikipedia.org/wiki/X_Window_System
en.wikipedia.org/wiki/Quantum_mechanics
ambermd.org
www.open-mpi.org
www.ubuntu.com
www.google.com
www.somewhereville.com
www.somewhereville.com/?p=345
en.wikipedia.org/wiki/Message_Passing_Interface
en.wikipedia.org/wiki/Zappa_in_New_York
help.ubuntu.com/community/RootSudo
www.linux.org
help.ubuntu.com/community/UsingTheTerminal
www.mozilla.com/en-US/firefox
ambermd.org/AmberTools-get.html
www.bzip.org
www.intel.com/cd/software/products/asmo-na/eng/307757.htm
www.tacc.utexas.edu/resources/software/gotoblasfaq.php
ambermd.org/bugfixes.html
en.wikipedia.org/wiki/Symmetric_multiprocessing
en.wikipedia.org/wiki/Fortran
www.g95.org
www.intel.com/cd/software/products/asmo-na/eng/282048.htm
www.mcs.anl.gov/index.php
www.gromacs.org
www.apple.com/macbookpro
www.abinit.org
www.abinit.org/package/?text=5_6_5
www.open-mpi.org/community/lists/users/2008/12/7522.php
www.open-mpi.org/community/lists/users/2008/12/7531.php
www.open-mpi.org/community/lists/users/2008/12/7536.php
www.open-mpi.org/community/lists/users/2008/12/7539.php
www.open-mpi.org/nightly/trunk
www.opensuse.org/en
fedoraproject.org
en.wikipedia.org/wiki/C_shell
en.wikipedia.org/wiki/Secure_Shell
www.open-mpi.org/software/ompi/v1.2

The Cryogenic Terahertz Spectrum Of (+)-Methamphetamine Hydrochloride And Assignment Using Solid-State Density Functional Theory

Sunday, March 8th, 2009

In press in the Journal of Physical Chemistry.  This paper on the low-frequency vibrational properties of methamphetamine marks a transitional point in the simulation of terahertz (THz) spectra by density functional theory (DFT), as both Crystal06 and Abinit provide the means of calculating infrared intensities in the solid state by a more rigorous method than the difference-dipole method that has been used in the many previous THz papers with DMol3 (performed externally from the DMol3 program proper).  The original manuscript came back with two important comments from Reviewer 3 (that crazy Reviewer 3.  Is there nothing they won’t critique?).

The best-fit spectral assignments by visual inspection (BOP/DNP level of theory) and by statistical analysis (BP/DNP level of theory) are shown below (the paper, of course, contains significantly more on this point).  With these two spectral simulations in mind, Reviewer 3 presented the following analysis, which I think is certainly worth considering by anyone new to the computational chemistry game and even by general practitioners who might risk becoming complacent in their favorite theoretical technique.  There’s a reason we refer to the collection of computational quantum chemical tools as the “approximate methods.”

I have difficulty with what appears to be a generalization of the applicability of using density functional for modeling THz spectra… It is disturbing that the different functionals will generate different numbers of modes within the spectral region, and it is hard to imagine how we should move forward with density functional for calculating spectra of this type.  In fact, it is true that one needs to include the “lattice” to get the spectra right in these regions, but it is not obvious that DFT will provide the level of rigor required to develop a predictive capability. Furthermore, given the “uncertainties regarding the number of modes”, is it possible that the mode assignments are invalid?

In my opinion, the authors point out the need for solid-state DFT, but should point out that in its current incarnation, that DFT is currently inadequate for quantitative comparison with experiment, and that more work needs to be done with the theory to make it quantitative.

The response to the reviewer about these points goes as such:

We agree completely with the reviewer's criticism on these points of spectral reproduction, but we also believe that there should be a sharp separation between the capabilities of the DFT formalism and the capabilities of the many empirically derived density functionals that currently make up the complement of "tools" within the DFT formalism.  Unlike the selection of basis set, which we often presume will improve agreement because of the improved description of the electronic wavefunction that comes from additional functions, it is the case among the currently available GGA density functionals (specifically in the survey studies of THz simulations performed by the authors in this and previous publications) that the reproduction of the physical property under consideration is determined by the functional.  We also know that the reproduction of the lowest-energy solid-state vibrational features in molecular solids was NOT part of the initial complement of metrics used in gauging the accuracy of density functionals, so it is clear that we are performing survey calculations with available tools to determine which may be most reliably employed for THz assignments, while not actively engaged in the development of new tools.  In the simulation of vibrational spectra, it is clear that we can never entirely trust the simulations until experimentalists know unambiguously exactly what motion is associated with each vibrational mode, which brings up the need for polarization experiments, Raman experiments to complement the mode assignments, etc.  Such rigorous detail for this region of the spectrum is very likely not known for a great many molecules of interest to the communities most interested in the benefits of THz spectroscopy.

In the meantime, and in the absence of "complete datasets," we agree with all of the reviewers (on points addressed either directly or indirectly through questions in the same vein) that the best a theoretical survey like the one presented here can do is aid in the generation of a functional consensus view, which is something that requires mode-by-mode analyses as mentioned by the reviewer.
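As a rough illustration of the kind of statistical best-fit comparison across density functionals described above, here is a minimal Python sketch. To be clear, this is not the analysis from the paper: all peak positions and predicted mode frequencies below are invented for illustration, and the ranking metric (RMSD between matched peak lists) is just one simple choice.

```python
# Hypothetical sketch: ranking density functionals by how closely their
# predicted low-frequency (THz) mode positions match experimental peaks.
# All numerical values are invented for illustration only.

import math

# Experimental peak centers (cm^-1) -- invented
experimental = [38.0, 51.5, 63.0, 78.5, 92.0]

# Predicted mode positions per functional (cm^-1) -- invented
predictions = {
    "BP":  [37.2, 52.0, 64.1, 77.9, 93.5],
    "BOP": [35.0, 49.0, 66.5, 80.2, 95.0],
    "PBE": [40.5, 55.0, 60.0, 82.0, 88.0],
}

def rmsd(calc, expt):
    """Root-mean-square deviation between matched peak lists (cm^-1)."""
    return math.sqrt(sum((c - e) ** 2 for c, e in zip(calc, expt)) / len(expt))

# Sort functionals from best (smallest RMSD) to worst
ranked = sorted(predictions, key=lambda f: rmsd(predictions[f], experimental))
for f in ranked:
    print(f"{f}: RMSD = {rmsd(predictions[f], experimental):.2f} cm^-1")
```

In a real assignment study the matching of calculated modes to experimental features is itself nontrivial (mode counts can differ between functionals, as Reviewer 3 notes), so a single RMSD number is only a starting point for the mode-by-mode analysis the response describes.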

Patrick M. Hakey, Damian G. Allis, Wayne Ouellette, and Timothy M. Korter

Department of Chemistry, Syracuse University, Syracuse, NY 13244-4100

Abstract: The cryogenic terahertz spectrum of (+)-methamphetamine hydrochloride from 10.0 to 100.0 cm-1 is presented, as is the complete structural analysis and vibrational assignment of the compound using solid-state density functional theory. This cryogenic investigation reveals multiple spectral features not previously reported in room-temperature terahertz studies of the title compound. Modeling of the compound employed eight density functionals utilizing both solid-state and isolated-molecule methods. The results clearly indicate the necessity of solid-state simulations for the accurate assignment of solid-state THz spectra. Assignment of the observed spectral features to specific atomic motions is based upon the BP density functional, which provided the best-fit solid-state simulation of the experimental spectrum. The seven experimental spectral features are the result of thirteen infrared-active vibrational modes predicted at the BP/DNP level of theory, with more than 90% of the total spectral intensity associated with external crystal vibrations.

pubs.acs.org/journal/jpcafh
en.wikipedia.org/wiki/Methamphetamine
en.wikipedia.org/wiki/Time_domain_terahertz_spectroscopy
en.wikipedia.org/wiki/Density_functional_theory
www.physics.ohio-state.edu/~aulbur/dft.html
www.crystal.unito.it
www.abinit.org
www.somewhereville.com/?cat=8
accelrys.com/products/materials-studio/modules/dmol3.html
link.aip.org/link/?JCPSA6/110/10664/1
link.aps.org/doi/10.1103/PhysRevA.38.3098
en.wikipedia.org/wiki/Computational_chemistry
chemistry.syr.edu
