NanoHive@Home’s Published Results (Finally): Analysis Of Diamondoid Mechanosynthesis Tooltip Pathologies Generated Via A Distributed Computing Approach

Published in the Journal Of Computational And Theoretical Nanoscience. This paper has been as delayed in posting as the accepted article was long in printing, which was less time than the wait for the completion of the manuscript, which itself was massive compared to the time of the experiments themselves, which was fractional compared to how long it would have been without the NanoHive@Home crew that donated so much compute time to the project so long ago. First off…


This work would not have been possible without the enormous contribution of the NanoHive@Home participants, composed of over 6,000 worldwide volunteers and their computers.

For those who were part of the NHAH community and want to see what the final work looks like, please drop me a line [] so we can properly settle up.

The Paper Itself

Analysis Of Diamondoid Mechanosynthesis Tooltip Pathologies Generated Via A Distributed Computing Approach

Damian G. Allis,a* Brian Helfrich,b Robert A. Freitas Jr.c, and Ralph C. Merklec

a. Department of Chemistry, Syracuse University, Syracuse, NY 13244, USA
b. Helcorp, Maplewood, NJ, 07040, USA
c. Institute for Molecular Manufacturing, Palo Alto, CA 94301, USA

The results of a combined molecular dynamics/quantum chemistry pathology study of previously reported organic (diamondoid) tooltips for diamondoid mechanosynthesis (DMS) are presented. This study, employing the NanoHive@Home (NH@H) distributed computing project, produced 80,000 tooltip geometries used in 200,000 calculations optimized at either the RHF/3-21G or RHF/STO-3G levels of theory based on geometries obtained from high-energy molecular dynamics simulations to produce highly deformed starting geometries. These 200,000 calculations have been catalogued, grouped according to energies and geometries, and analyzed to consider potentially accessible defect structures (pathologies) for tooltip geometries either binding a carbon dimer (C2) feedstock or not containing the transported dimer feedstock. The transport and deposition of feedstock and the stability of the tooltip between dimer “loading” cycles are important geometries that must be considered as part of a tooltip stability analysis. The NH@H framework is found to be a useful method both for the study of highly deforming covalent geometries and, using lower-temperature MD simulations, for generating and optimizing molecular conformations (demonstrated using biotin, n-heptane, and n-octane in this study). The results of the pathology survey are discussed and general considerations for the exploration of DMS tooltip usability are explored.

A Few Visuals From The Article

The purpose of the study was to explore the conformation space of potential tooltips for use in mechanosynthetic operations. Anyone in the Advanced Molecular Manufacturing (AMM) community will recognize something like…

The tooltips themselves used for the deposition of carbon dimers were hammered on by the Q-SMAKAS (Quantum Search for Minimum Alternatives in Kinetically-Accessible Space) methodology (no, that wasn’t easy to make up), producing possible defect structures to explore how these tips might fall apart in an experimental apparatus. These tips were taken from the original survey paper [R. A. Freitas Jr., D. G. Allis and R. C. Merkle, “Horizontal Ge-Substituted Polymantane-Based C2 Dimer Placement Tooltip Motifs for Diamond Mechanosynthesis,” J. Comput. Theor. Nanosci. 4, 433 (2007)] and the DC10c study [D. G. Allis and K. E. Drexler, “Design and Analysis of a Molecular Tool for Carbon Transfer in Mechanosynthesis,” J. Comput. Theor. Nanosci. 2, 45 (2005) – and this one’s available as a free download from HERE]. Much to my surprise, there’s a section of a book available for background on google: Tip-Based Nanofabrication: Fundamentals and Applications, by Ampere A. Tseng.

Tip conformation survey.

And for the non-AMM crowd, the same methodology of hammering on molecules at 3000 K to generate structural isomers can be employed at 300 K for the generation of conformational isomers, which was used to great success to explore the conformation-space of simple linear hydrocarbons and the far more interesting molecule biotin.

Biotin conformation survey.

And Some Pick Hits From The Discussion

4.1 Unloaded Tooltips And Ge-Ge Bond Formation

Any stabilizing interactions within the open tooltip may serve to increase the barriers to structural rearrangement, H migration, etc. between the time a C2 dimer is deposited on a workpiece and the time the tooltip is recharged with a new C2 dimer.

4.2 More Stable Pathologies And Transition State Barriers

The identification of a more stable structure does not provide any insight into the energy barrier over which an operational mechanosynthetic geometry must pass in order to convert into a non-operational geometry.

Identifying which of the failure modes within some energy range of the operational tooltip structures are accessible within a thermal regime in a working system is a time-consuming but very important subsequent step in any continued developmental survey of these tooltips.

4.3 Hydrogen-Inverted Tooltip Geometries And Larger Tooltip Frameworks

These largely-ignored tooltip pathologies are the results of hydrogen inversion, a type of defect that finds one or more H atoms inserting into the cage framework as a result of a methodological mismatch of atom momentum and classical time step in the MD simulations … Three common workarounds for large H atom displacements per time step are (1) the use of smaller time steps to recalculate the forces on the H atoms, (2) the artificial increase of the mass of the H atoms to reduce their net displacement over some set time step (such as re-massing H atoms to deuterium or higher), and (3) the subsuming of the H atoms into the associated “heavy” atom to remove the H motion entirely from the simulation.

4.4 The NH@H Network And “Best-Practices” Considerations

The NH@H tooltip pathology survey generated a considerable amount of data and, after the analysis of the resulting data, yielded a selection of tooltip pathologies to serve as the basis for subsequent studies of tooltip defect pathways… The speed of a calculation and, ultimately, the quality of the calculation for a particular analysis are dictated by the quality of the computers owned by the participants… The use of a survey of representative available computers at the start of a series of quantum chemistry studies can be of great benefit in identifying the constraints a researcher must place on their investigation.

4.5 Identified Minima vs. Accessible Minima

In the absence of transition state calculations or MD simulations on larger tooltip frameworks to remove degrees of freedom in some atoms, it is not known how many of these tooltips can be ignored simply for energetic reasons — either due to large rearrangement barrier energies or due to one-step inaccessibility because of the presence of multiple barriers to complete a rearrangement to a predicted minimum.

4.6 Hydrogen Migration

In most stable tooltips, H migration is assumed to be the most accessible route to the formation of inoperative structures (what previous studies [R. A. Freitas Jr., D. G. Allis and R. C. Merkle, J. Comput. Theor. Nanosci. 4, 433 (2007)] have called “hydrogen poisoning”).

4.7 Ge-C Framework Bond Breaking

All of the tooltips in this study are designed to facilitate dimer deposition and tooltip retraction with the only modification to the covalent framework of the tooltip being the loss of the C2 dimer… The defects identified in the NH@H study with broken Ge-Cframework bonds may not themselves be accessible in isolation, but the additional bonding modes and resulting strain in the entire system as part of a mechanosynthetic operation makes such defect modes important in the overall design analysis of a diamondoid structure that is to be fabricated.

4.8 Combined Ge-C(framework) Bond Breaking And C=C pi-Formation

The formation of broken Ge-C/C=C bond pathologies are noteworthy, both in the context of the operational issue described above and in the manner by which a symmetric bond breaking in these tooltips can produce very stable geometries that then require large transition state barriers to be present for the mechanosynthetic operability of the tooltip.

4.9 Ge-Ge Bond Formation

As noted in the Hydrogen Migration section, the formation of these strained bonds in the UT structures may not be detrimental to tooltip operation but may, by increasing the barrier to other defect modes, serve as a form of stabilization during the time between deposition and C2 dimer recharging.

4.10 C2 Positional Variation

Remarkably, despite the high temperatures and otherwise large deformations identified in many of the tooltip structures, specific structural deformations at the Ge-[C2 dimer]-Ge position were identified in only a few cases from the MD-based quantum chemistry optimizations… Here and generally, both the quality of the basis set and the inclusion of electron correlation (by way of the B3LYP density functional) are expected to make a considerable difference in the accuracy of the estimated relative energies.

4.11 Conformational Differences

In most rigid tooltip designs, the identification of conformational minima is interesting but otherwise not of significance, as conformational flexibility at the tooltip base is, like H inversion, removed from all structures as a result of embedding the structure within a larger and more constraining tooltip framework… This approach to tooltip design optimization via conformational control of the base has not been considered in previous studies and is one of the more interesting results to emerge from this initial NH@H study.

Additional Assorted

A few documents from the original website are reposted here in PDF format for historical purposes and a bit more context. If you were part of the original @Home crowd and participated in any of the forum discussions, all of the text above likely makes some amount of sense.

2011december10_NanoHive1andNanoHiveAtHome.pdf [download]
2011december10_QSMAKAS.pdf [download]

NanoHive-1 (NH1) is a modular simulator created by Brian Helfrich which is used for modeling the physical world at a nanometer scale. The intended purpose of the simulator is to act as a tool for the study, experimentation, and development of nanotech entities. NanoHive-1 is a GPL/LGPL licensed open-source development – you can download and use it for free. NanoHive-1 can be run stand-alone, or easily integrated to support other applications such as CAD tools.

NanoHive@Home (NHAH) is a distributed computing system also created by Brian Helfrich based on the BOINC platform that was used for large-scale nanotech systems simulation and analysis; drawing its computing power from otherwise idle computers sitting in people’s homes. The goal of NanoHive@Home was to perform large-scale nanosystems simulation and analysis that was otherwise too intensive to be calculated via normal means, and thereby enable further scientific study in the field of nanotechnology.

The Tooltip Failure Mode Search Project, conducted by Brian Helfrich and Dr. Damian Allis, ran on NHAH from February 2007 through May 2007. It utilized computing cycles donated from over 6,000 computers worldwide and reached a peak performance of nearly 3 teraFLOPS. Here are links to the explanations and results of the project:


Amber And Ubuntu Part 2. Amber10 (Parallel Execution) Installation In Ubuntu 8.10 (Intrepid Ibex) With OpenMPI 1.3… And Commentary

After considerable trial and building/testing errors, what follows is as simplified a complete installation and (non-X11/QM) testing of Amber10 and OpenMPI 1.3 as I think can be procedure’d in Ubuntu 8.10 (and likely previous and subsequent Ubuntu versions), dealing specifically with assorted issues with root permissions and variable definitions as per the standard procedure for Amber10 installation.

I’ll begin with the short procedure and bare minimum notes, then will address a multitude of specific problems that may (did) arise during all of the build procedures.  The purpose for listing everything, it is hoped, is to make these errors appear in google during searches so that, when you come/came across the errors, your search will have provided some amount of useful feedback (and, for a few of the problems I had with previous builds of other programs, this blog is the ONLY thing that comes up in google).

Some of the content below is an extension of the single-processor build of Amber10 I posted previously.  In the interest of keeping the reading to a minimum, the short procedure below is light on explanations that are provided in the long procedure that follows.  If you’re running on a multi-core computer (and who isn’t anymore), you likely want to take advantage of the MPI capability, so I would advise simply following the procedure below and IGNORING the previous post (as I also state at the top of the previous page).

Enough blabbering.


Text in black – my ramblings.

Text in bold preformatted red – things you will type in the Terminal

Text in green – text you will either see or will type into files (using pico, my preference)

Amber10, OpenMPI 1.3, Ubuntu 8.10: The Easy, Teenage New York Version

0a. I assume you’re working from a fresh installation of Ubuntu 8.10.  Some of what is below is Ubuntu-specific because of the way that Ubuntu decides to deal with the Root/Administrator/User relationship.  That said, much is the same and I’ll try to remark accordingly.

0b. As with the previous post, I am writing this with an expected audience of a non-technical Linux user, which means more detail of the steps and not just “cd $HOME and ./config -> make -> make install.”  If the steps are too obvious to you, find someone who thinks a Terminal window is one that won’t close and let them do it.  Speaking of…

0c. Everything below is done from a Terminal window.  The last icon you will use is the Terminal icon you click on.  If you’ve never had the pleasure in Ubuntu Desktop, go to Applications -> Accessories -> Terminal.  You can save yourself some time dragging around menus by hovering over the Terminal icon, right-clicking, and “Add this launcher to panel.”

0d. One more thing for the absolute newbies.  I am assuming that you’ve used Firefox to download OpenMPI 1.3, the AmberTools 1.2 and Amber10 bz2 files, and the bugfix.all files for AmberTools 1.2 and Amber10 (more on that in 0e).  That’s all you’ll need for the installation below.  Note that I’m not dealing with X11 (library specifications and other testing to do first), MKL (the Intel Math Kernel Library), or the GOTO BLAS libraries.  The default download folder for Firefox in a fresh installation of Ubuntu is the Desktop.  I will assume your files are on the Desktop, which will then be moved around in the installation procedure below.

0e. bugfix.all (Amber10) and bugfix.all (AmberTools 1.2) – Yes, the Amber people provide both of these files for the two different programs using the same name.  As I assume you’re using Firefox to download these files from their respective pages, I recommend that you RENAME the bugfix.all file for AmberTools 1.2 to bugfix_at.all, which is the naming convention I use in the patching step of the installation process (just to keep the confusion down).  In case you’ve not had the pleasure yet, instead of clicking on the bugfix.all link on each page and saving the file that loads, simply right-click and “Save Link As…”, then save the AmberTools 1.2 bugfix.all file as bugfix_at.all.

0f. This OpenMPI 1.3 build and series of Amber10 tests assumes SMP (symmetric multi-processing) only, meaning you run only with the CPUs on your motherboard.  The setup of OpenMPI for cluster-based computing is a little more complicated and is not presented below (but is in process for write-up).

0g. And, further, I am not using the OpenMPI you can install by simply sudo apt-get install libopenmpi1 libopenmpi-dev openmpi-bin openmpi-doc for two reasons.  First, that installs OpenMPI 1.2.8 (I believe), which mangles one of the tests in such a way that I feared for subsequent stability in Amber10 work I might find myself doing.  Second, I want to eventually be able to build OpenMPI around other Fortran compilers (specifically g95 or the Intel Fortran Compiler) or INSERT OPTION X and, therefore, prefer to build from source.

So, from a Terminal window, the mostly comment-free/comment-minimal and almost flawless procedure is as follows.  Blindly assume that everything I am doing, especially in regards to WHERE this build is occurring and WHAT happens in the last few steps, is for a good reason (explained in detail later)…

0. cd $HOME (if you’re not in your $HOME directory already or don’t know where it is)

1. sudo apt-get update (requires administrative password)

2. sudo apt-get install ssh g++ g++-multilib g++-4.3-multilib gcc-4.3-doc libstdc++6-4.3-dbg libstdc++6-4.3-doc flex bison fort77 netcdf-bin gfortran gfortran-multilib gfortran-doc gfortran-4.3-multilib gfortran-4.3-doc libgfortran3-dbg autoconf autoconf2.13 autobook autoconf-archive gnu-standards autoconf-doc libtool gettext patch libblas3gf liblapack3gf libgfortran2 markdown csh (this is my ever-growing list of necessary programs and libraries that are not installed as part of a fresh Ubuntu installation)

3. pico .bashrc

Add the following lines to the bottom of this file.  This will make more sense shortly…


export MPI_HOME

Ctrl-X and the Enter Key twice to Exit

4. source .bashrc
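
For reference, the definitions this step establishes amount to the block below. The values are inferred from the --prefix=/ install in step 13 and the build directory used throughout this guide; confirm them against your own setup.

```shell
# Assumed values: MPI_HOME matches the OpenMPI --prefix, AMBERHOME the build directory
MPI_HOME=/
AMBERHOME=$HOME/Documents/amber10
export MPI_HOME AMBERHOME
```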

5. pico .profile

Add the following line to the bottom of this file.


Ctrl-X and the Enter Key twice to Exit

6. source .profile

7. mv $HOME/Desktop/Amber* $HOME/Documents/ (this assumes the downloaded files are on the desktop)

8. mv $HOME/Desktop/openmpi-1.3.tar.gz $HOME/Documents/ (this assumes the downloaded files are on the desktop)

9. cd $HOME/Documents/

10. gunzip openmpi-1.3.tar.gz

11. tar xvf openmpi-1.3.tar

12. cd openmpi-1.3

13. ./configure --prefix=/ (installs the MPI binaries and libraries into /. Avoids library errors I ran across despite MPI_HOME, but may be fixable. Copious output to follow.  For my results, click HERE)

14. sudo make all install (this installs binaries and libraries into “/”. Copious output to follow.  For my results, click HERE)

15. cd $HOME/Documents/

16. tar xvjf Amber10.tar.bz2

17. tar xvjf AmberTools-1.2.tar.bz2

18. mv $HOME/Desktop/bugfix* $HOME/Documents/amber10/ (moves bugfix files to the Amber directory)

19. cd amber10/

20. patch -p0 -N -r patch-rejects < bugfix_at.all (patches AmberTools)

21. patch -p0 -N -r patch-rejects < bugfix.all (patches Amber10)

22. cd src/

23. ./configure_at -noX11

24. make -f Makefile_at (copious output to follow.  For my results, click HERE)

25. cd ../bin/

26. pico

At the top of the file, change sh to bash

Ctrl-X and the Enter Key twice to Exit

27. cd ../test/

28. make -f Makefile_at test (copious output to follow.  For my results, click HERE)

29. cd ../src/

30. ./configure_amber -openmpi gfortran

31. make parallel (copious output to follow.  For my results, click HERE)

32. cd $HOME/.ssh/ (at this step, we allow auto-login in ssh so that the multiple mpirun tests do not require that you supply your password constantly)

33. ssh-keygen -t dsa

34. cat >> authorized_keys2

35. chmod 644 authorized_keys2
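
Steps 32–35 condensed into one runnable block: the idea is to append your own freshly generated public key to the authorized list so mpirun can ssh into localhost without prompting. The key filename (ssh-keygen's default) and the final localhost check are my assumptions, not part of the steps above.

```shell
cd $HOME/.ssh
ssh-keygen -t dsa -N "" -f id_dsa   # empty passphrase; default DSA filename assumed
cat id_dsa.pub >> authorized_keys2  # authorize your own key for localhost logins
chmod 644 authorized_keys2
ssh localhost true                  # should return with no password prompt
```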

36. cd $AMBERHOME/test/

37. csh

38. setenv DO_PARALLEL 'mpirun -np N' (here, N is the number of processors you wish to use on your mobo)

39. make test.parallel.MM (copious output to follow.  For my results, click HERE)

40. exit (exits the csh shell)
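
If you'd rather not hop into csh for steps 37–40, the bash equivalent should work as well, since the test Makefile only reads DO_PARALLEL from the environment (-np 4 is an example value; this substitution is my own, not from the original procedure):

```shell
export DO_PARALLEL='mpirun -np 4'   # 4 = number of cores on your motherboard
make test.parallel.MM
```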

41. sudo cp -r $HOME/Documents/amber10 /opt/amber10

42. cd $HOME

43. rm -r $HOME/Documents/amber10 (this deletes the build directory.  Delete or keep as you like)

44. pico .bashrc

Make the following change to the bottom of this file.


Ctrl-X and the Enter Key twice to Exit

45. source .bashrc

46. pico .profile

Make the following change to the bottom of this file.


Ctrl-X and the Enter Key twice to Exit

47. source .profile

That is it!  In theory, you should now have a complete and tested Amber10 and AmberTools 1.2 build sitting in /opt/amber10.
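
To use the /opt/amber10 copy from any new shell, the .bashrc/.profile changes in steps 44–47 presumably end up along these lines (a sketch based on step 41's destination; confirm against your own files):

```shell
export AMBERHOME=/opt/amber10
export PATH=$PATH:$AMBERHOME/exe   # so sander, sander.MPI, etc. are found
```

A quick check that the PATH change took is `which sander` in a fresh Terminal.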

Amber10, OpenMPI 1.3, Ubuntu 8.10: The Notes

What follows is the complete list of problems, questions, errors, issues, and general craziness from much trial and error for the installation procedure above.  As you can guess from my constant mentioning of Ubuntu in my statements-with-qualifications, some of these problems likely will not occur in other distros.  My primary reason for the extended discussion below is so that the errors and issues make their way into google so that people searching for fixes to these problems (if they come across them) will see actual content (if they choose to read it) and maybe find a reasonable fix.

I’ll be expanding in sections by grouping numbers above.

0a – 0g Installation Preparations

I’ve not much to add here except that OpenMPI is likely not the only way to install MPI Amber10 in Ubuntu, but I think it is easier than MPICH2 to set up cross-cluster calculations on a switch’ed network.  My stronger preference for OpenMPI stems from both past positive experience with OpenMPI and GROMACS (specifically on my Macbook Pro) and eventual success with OpenMPI 1.2.X and Abinit 5.6.5.  I had hoped to use the same version of OpenMPI for GROMACS, Amber10, and Abinit, but ran into a yet-to-be-resolved issue with OpenMPI 1.3.x in the building of Abinit (the problem is, apparently, resolved in the upcoming 1.4.1 build, but I’m not much for using release candidates.  I’ll be discussing this in an upcoming Abinit installation post based on my previous Abinit installation post).

1 – 2 apt-get

My listed apt-get installation set contains many, many programs and libraries that are not necessarily needed in the OpenMPI and Amber10 build but are required for other programs.  The apt-get approach is still much cleaner than installing the entire OpenSuse or Fedora DVD, but you do find yourself scrambling the first time you try to install anything to determine what programs are missing.  You don’t know you need csh installed until the test scripts fail.  You forget about ssh until you run mpirun for the first time.  I do not yet know if MKL, GOTO, or any of the X11-based AmberTools programs require additional libraries to be installed, so the above apt-get list may grow.  Check the comments section at bottom for updates.

The list below shows essential programs and libraries AND their suggested additional installs. As long as you’re online and apt-get‘ing anyway, might as well not risk missing something for your next compiling adventure.

g++ g++-multilib  g++-4.3-multilib  gcc-4.3-doc  libstdc++6-4.3-dbg  libstdc++6-4.3-doc

gfortran gfortran-multilib gfortran-doc gfortran-4.3-multilib gfortran-4.3-doc libgfortran3-dbg

autoconf autoconf2.13 autobook autoconf-archive gnu-standards autoconf-doc libtool gettext

flex bison

ssh csh patch markdown fort77 netcdf-bin libblas3gf liblapack3gf libgfortran2

3 – 9 .bashrc and .profile Modifications, Building In Your Own Directory

For a number of programs, the procedure from source is ./configure, make, and make install, with make install often only responsible for moving folders into the correct directories (specifically, directories only root has access to, such as /usr/local/bin, /lib, and /opt).  In many distributions, this final step is actually sudo make install.  This division of make and make install is not preserved in Amber, which complicates the Ubuntu build slightly.  The building in $HOME/Documents (as I’ve described the procedure above) saves you from having to constantly sudo the extraction and building process in directories you, the typical user, do not have access to write into.

Working in $HOME/Documents (or any of your $HOME folders) allows for the complete build and testing of AmberTools and Amber10.

The other benefit from doing as much in $HOME as possible is the lack of a need to define variables as the root user (specifically, AMBERHOME, MPI_HOME, and DO_PARALLEL) by setting variables in the root .bashrc and .profile files, adding lines to the Makefiles for AmberTools and Amber10, or setting variables at prompts.  This variable specification issue arises because when you run a program with sudo, you invoke the root privileges and the root variable definitions, so any specifications you make for your PATH or these Amber-specific variables are lost.
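
The variable loss is easy to demonstrate: env -i runs a command with a scrubbed environment, much as sudo swaps in root's.

```shell
export AMBERHOME=$HOME/Documents/amber10
echo "your shell sees: [$AMBERHOME]"
env -i /bin/sh -c 'echo "clean environment sees: [$AMBERHOME]"'   # prints []
```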

Once the build is complete in the $HOME/Documents folder, we move the entire directory into /opt, my default location for all programs I build from source (but $HOME/Documents is just fine as well once the PATH is set).

10 – 14 OpenMPI 1.3

So, why not then use OpenMPI 1.2.x?  The build of Amber10 works just fine with OpenMPI 1.2.x in Ubuntu with all of the installation specifications described in Step 2 of the short procedure (the extensive apt-get).  The problem with 1.2.x occurs for a single test after the Amber10 build that I’ve not yet figured out a workaround for, but the error is sinister enough that I decided to not risk similar errors in my own Amber work and, instead, use OpenMPI 1.3.x, which does not suffer the same error.  The only (ONLY) test to fail with OpenMPI 1.2.x is the cnstph (constant pH simulation) test, which runs perfectly well but fails at the close of the calculation (you can test this yourself by changing the nstlim value in the mdin file to any arbitrarily large number).  The failure message, which kills the test series, is below.  This job also fails if you simply try to run it independently of the pre-defined test set (not the make test.parallel.MM, but cd’ing into the cnstph directory, setting variables, and running the script).

cd cnstph && ./Run.cnstph
[ubuntu-desktop:18263] *** Process received signal ***
[ubuntu-desktop:18263] Signal: Segmentation fault (11)
[ubuntu-desktop:18263] Signal code: Address not mapped (1)
[ubuntu-desktop:18263] Failing at address: 0x9069d2cb0
[ubuntu-desktop:18264] *** Process received signal ***
[ubuntu-desktop:18265] *** Process received signal ***
[ubuntu-desktop:18265] Signal: Segmentation fault (11)
[ubuntu-desktop:18265] Signal code: Address not mapped (1)
[ubuntu-desktop:18265] Failing at address: 0x907031c70
[ubuntu-desktop:18264] Signal: Segmentation fault (11)
[ubuntu-desktop:18264] Signal code: Address not mapped (1)
[ubuntu-desktop:18264] Failing at address: 0x906dbdc00
[ubuntu-desktop:18263] [ 0] /lib/ [0x2b17b5bef0f0]
[ubuntu-desktop:18263] [ 1] /usr/local/lib/ [0x2b17b4c10937]
[ubuntu-desktop:18263] [ 2] /usr/local/lib/ [0x2b17b4c122bb]
[ubuntu-desktop:18263] [ 3] /home/userid/Documents/amber10/exe/sander.MPI(sander_+0x73ce) [0x4c6ec2]
[ubuntu-desktop:18264] [ 0] /lib/ [0x2abd8f5810f0]
[ubuntu-desktop:18264] [ 1] /usr/local/lib/ [0x2abd8e5a2937]
[ubuntu-desktop:18264] [ 2] /usr/local/lib/ [0x2abd8e5a42bb]
[ubuntu-desktop:18264] [ 3] /home/userid/Documents/amber10/exe/sander.MPI(sander_+0x73ce) [0x4c6ec2]
[ubuntu-desktop:18264] [ 4] /home/userid/Documents/amber10/exe/sander.MPI(MAIN__+0xf0a) [0x4bfa66]
[ubuntu-desktop:18264] [ 5] /home/userid/Documents/amber10/exe/sander.MPI(main+0x2c) [0x88309c]
[ubuntu-desktop:18264] [ 6] /lib/ [0x2abd8f7ad466]
[ubuntu-desktop:18264] [ 7] /home/userid/Documents/amber10/exe/sander.MPI [0x43a649]
[ubuntu-desktop:18264] *** End of error message ***
[ubuntu-desktop:18263] [ 4] /home/userid/Documents/amber10/exe/sander.MPI(MAIN__+0xf0a) [0x4bfa66]
[ubuntu-desktop:18263] [ 5] /home/userid/Documents/amber10/exe/sander.MPI(main+0x2c) [0x88309c]
[ubuntu-desktop:18263] [ 6] /lib/ [0x2b17b5e1b466]
[ubuntu-desktop:18263] [ 7] /home/userid/Documents/amber10/exe/sander.MPI [0x43a649]
[ubuntu-desktop:18263] *** End of error message ***
[ubuntu-desktop:18265] [ 0] /lib/ [0x2ba0032600f0]
[ubuntu-desktop:18265] [ 1] /usr/local/lib/ [0x2ba002281937]
[ubuntu-desktop:18265] [ 2] /usr/local/lib/ [0x2ba0022832bb]
[ubuntu-desktop:18265] [ 3] /home/userid/Documents/amber10/exe/sander.MPI(sander_+0x73ce) [0x4c6ec2]
[ubuntu-desktop:18265] [ 4] /home/userid/Documents/amber10/exe/sander.MPI(MAIN__+0xf0a) [0x4bfa66]
[ubuntu-desktop:18265] [ 5] /home/userid/Documents/amber10/exe/sander.MPI(main+0x2c) [0x88309c]
[ubuntu-desktop:18265] [ 6] /lib/ [0x2ba00348c466]
[ubuntu-desktop:18265] [ 7] /home/userid/Documents/amber10/exe/sander.MPI [0x43a649]
[ubuntu-desktop:18265] *** End of error message ***
mpirun noticed that job rank 0 with PID 18262 on node ubuntu-desktop exited on signal 11 (Segmentation fault).
3 additional processes aborted (not shown)
./Run.cnstph:  Program error
make[1]: *** [test.sander.GB] Error 1
make[1]: Leaving directory `/home/userid/Documents/amber10/test’
make: *** [test.sander.GB.MPI] Error 2

Errors like this one scare me to no end, especially when the error seems to be in the proper termination of a process (such as writing final positions or data files) and you risk such errors occurring after 2 week simulations with no way to get your data back.  If you decide (if it’s already installed, for instance) to use OpenMPI 1.2.x with Amber10 but still want to test the build, I’d suggest simply commenting out (#) the line

# cd cnstph && ./Run.cnstph

from the Makefile in the ../test directory.  I can’t imagine this happens in all other distributions, but I also don’t know what the problem could be given that all of the other tests work just fine.  That said, there’s a failed test for the OpenMPI 1.3.x build as well when you forget to run the tests from csh, but that has nothing to do with OpenMPI (see below).
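
The comment-out can be scripted with sed; the Makefile path is this build's, and the pattern assumes the line appears in the Makefile as it does in the failure output above.

```shell
# Back up, then comment out the constant-pH test line (path and pattern assumed):
sed -i.bak 's|^\(.*cd cnstph && ./Run.cnstph\)|# \1|' $HOME/Documents/amber10/test/Makefile
```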

16 – 19 Building Amber10 and AmberTools At $HOME

As described in the 3 – 9 section above, the problem with not building in your $HOME directory is the passing of variables in the build processes.  For instance, if you set MPI_HOME in your $HOME .bashrc file and then sudo make parallel, the error you’ll see is

Starting installation of Amber10 (parallel) at Mon Mar  9 22:30:36 EDT 2009.
cd sander; make parallel
make[1]: Entering directory `/opt/amber10/src/sander’
cpp -traditional -I/usr/local/include -P -xassembler-with-cpp -Dsecond=ambsecond -DBINTRAJ -DMPI  constants.f > _constants.f
/usr/local/bin/mpif90 -c -O3 -fno-range-check -fno-second-underscore -ffree-form  -o constants.o _constants.f
Cannot open configuration file /usr/share/openmpi/mpif90-wrapper-data.txt
Error parsing data file mpif90: Not found
make[1]: *** [constants.o] Error 243
make[1]: Leaving directory `/opt/amber10/src/sander’
make: *** [parallel] Error 2

because the MPI_HOME variable is not specified for root.  Performing the compilation in $HOME/Documents avoids this issue.

If you want to build Amber and AmberTools in /opt as root for some reason and do not want to deal with modifying the .bashrc and .profile files in /root, you can modify the appropriate Amber files to define the variables needed for both building and testing. For AmberTools testing, you need to define AMBERHOME, which you do at the top of Makefile_at.

cd ../test/

sudo pico Makefile_at

include ../src/config.h


test: is_amberhome_defined \

sudo make -f Makefile_at test

For the Amber10 build process, you would need to modify configure_amber at the top of the file to specify the MPI_HOME variable.

sudo pico configure_amber

#set -xv

export MPI_HOME

command=”$0 $*”

For the Amber10 testing process, you would need to assign both AMBERHOME (so the tests know where to look for the executables) and DO_PARALLEL (so the tests know to use OpenMPI) at the top of the file.

cd ../test/

sudo pico Makefile

include ../src/config_amber.h


DO_PARALLEL=mpirun -np 4


It otherwise makes no difference at all how you choose to do things so long as the program gets built.  Much of the Ubuntu literature I’ve stumbled across attempts to make people avoid doing anything to change the root account settings, which was the approach I chose to use in the “Easy” procedure above.

20 – 21 Patching Amber10 and AmberTools 1.2

No surprises and not necessary for building.  Do it anyway.  And patch is included as one of the apt-get'ed programs.  Simply be cognizant of the naming of the bugfix files (the directories are the same for both Amber10 and AmberTools, and each patch is simply applied to the files it finds).
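For the record, a bugfix file is just a unified diff applied with patch from the top of the amber10 directory (Amber's own bugfix pages give the exact invocation; -N skips hunks that are already applied). The sketch below is self-contained, with made-up file names, purely to illustrate the mechanics:

```shell
# Build a tiny fake source tree and a bugfix-style unified diff, then
# apply it the way an Amber bugfix is applied: patch -p0 -N from the
# top-level directory. All names here are invented for the demo.
DEMO=$(mktemp -d)
mkdir -p "$DEMO/src"
printf 'line one\nline two\n' > "$DEMO/src/leap.in"
cat > "$DEMO/bugfix.1" <<'EOF'
--- src/leap.in
+++ src/leap.in
@@ -1,2 +1,2 @@
 line one
-line two
+line two, fixed
EOF
(cd "$DEMO" && patch -p0 -N < bugfix.1)   # -p0: use paths as written in the diff
cat "$DEMO/src/leap.in"
```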

22 – 24 Building AmberTools

Your choices for the AmberTools build are fairly limited.  Via ./configure_at --help,

Usage: ./configure_at [flags] compiler

where compiler is one of:

gcc, icc, solaris_cc, irix_cc, osf1_cc

If not specified then gcc is used.

Option flags:

-mpi        use MPI for parallelization
-scalapack  use ScaLAPACK for linear algebra (utilizes MPI)
-openmp     Use OpenMP pragmas for parallelization (icc, solaris_cc,gcc(>4.2))
-opteron    options for solaris/opteron
-ultra2     options for solaris/ultra2
-ultra3     options for solaris/ultra3
-ultra4     options for solaris/ultra4
-bit64      64-bit compilation for solaris
-perflib    Use solaris performance library in lieu of LAPACK and BLAS
-cygwin     modifications for cygwin/windows
-p4         use optimizations specific for the Intel Pentium4 processor
-altix      use optimizations specific for the SGI Altix with icc
-static     create statically linked executables
-noX11      Do not build programs that require X11 libraries, e.g. xleap.
-nobintraj  Delete support for binary (netCDF) trajectory files
-nosleap    Do not build sleap, which requires unantiquated compilers.

Environment variables:
MKL_HOME    If present, will link in Intel’s MKL libraries (icc,gcc)
GOTO        If present, and MKL_HOME is not set, will use this location
for the Goto BLAS routines

We’re building with gcc (the default) and the installs above.  Running ./configure_at -noX11 with the apt-get-installed programs above should produce the following output.

Setting AMBERHOME to /home/userid/Documents/amber10

Testing the C compiler:
mpicc  -m64 -o testp testp.c

Obtaining the C++ compiler version:
g++ -v
The version is ../src/configure
[: 520: 3: unexpected operator

Testing the g77 compiler:
g77 -O2 -fno-automatic -finit-local-zero -o testp testp.f
./configure_at: 538: g77: not found
./configure_at: 539: ./testp: not found
Unable to compile a Fortran program using g77 -O2 -fno-automatic -finit-local-zero

Testing the gfortran compiler:
gfortran -O1 -fno-automatic -o testp testp.f

Testing flex:

Configuring netcdf; (may be time-consuming)

NETCDF configure succeeded.

The configuration file, config.h, was successfully created.

The next step is to type 'make -f Makefile_at'

25 – 28 Testing AmberTools

As a quick heads-up: if you don’t have csh installed, your test will fail at the following step with the following error:

cd ptraj_rmsa && ./Run.rms
/bin/sh: ./Run.rms: not found
make: *** [test.ptraj] Error 127

Otherwise, you can see the output from my test set HERE.  Installing csh is much easier than modifying multiple Run scripts.

That said, the one fix I do perform is to modify the file slightly in accordance with the post by Mark Williamson on the Vanderbilt University Amber listserv (the first stumbling block I ran into during testing).

29 – 31 Building Parallel Amber10

Running ./configure_amber -openmpi gfortran should output the following:

Setting AMBERHOME to /home/userid/Documents/amber10

Setting up Amber configuration file for architecture: gfortran
Using parallel communications library: openmpi
The MKL_HOME environment variable is not defined.

Testing the C compiler:
gcc  -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -O2 -m64 -o testp testp.c

Testing the Fortran compiler:
gfortran -O0 -fno-range-check -fno-second-underscore -o testp testp.f

——   Configuring the netCDF libraries:   ——–

Configuring netcdf; (may be time-consuming)
NETCDF configure succeeded.
MPI_HOME is set to /

The configuration file, config_amber.h, was successfully created.

32 – 35 Automatic ssh Login

These steps save you from constantly having to input your password during the mpirun testing phase.  The password prompts come from ssh (and, because I have machines both online and accessible, I prefer to set up the ssh side properly rather than go without that layer of security).  The first time you run mpirun, ssh will throw the following back at you:

The authenticity of host 'userid-desktop (' can't be established.
RSA key fingerprint is eb:86:24:66:67:0a:7a:7b:44:95:a6:83:d2:a8:68:01.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'userid-desktop' (RSA) to the list of known hosts.

Generating the key pair for automatic ssh login will look like the following:

Generating public/private dsa key pair.
Enter file in which to save the key (/home/userid/.ssh/id_dsa):
Created directory ‘/home/userid/.ssh’.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/userid/.ssh/id_dsa.
Your public key has been saved in /home/userid/.ssh/
The key fingerprint is:
54:84:68:79:3b:ec:17:41:c9:96:98:10:df:4f:cc:42 userid@userid-desktop
The key’s randomart image is:
+--[ DSA 1024]----+
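The commands behind that output are ssh-keygen followed by appending the new public key to authorized_keys. The sketch below uses a scratch directory (and an RSA key, though the output above shows DSA, which works the same way) so it can be run without touching your real ~/.ssh:

```shell
# Passwordless-login setup sketch. In practice the files live in ~/.ssh;
# a scratch directory is used here so the commands are side-effect free.
KEYDIR=$(mktemp -d)
ssh-keygen -q -t rsa -N "" -f "$KEYDIR/id_rsa"          # -N "": empty passphrase
cat "$KEYDIR/id_rsa.pub" >> "$KEYDIR/authorized_keys"   # authorize the key
chmod 600 "$KEYDIR/authorized_keys"                     # sshd requires tight permissions
```

On a real machine, repeat the append step into ~/.ssh/authorized_keys on each host you want mpirun to reach without a password prompt.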

36 – 40 Testing Amber10

If you’re not in csh, either this test fails…

cd rdc && ./Run.dip
if: Badly formed number.
make[1]: *** [test.sander.BASIC] Error 1
make[1]: Leaving directory `/home/userid/Documents/amber10/test'
make: *** [test.sander.BASIC.MPI] Error 2

or this one…

cd pheMTI && ./Run.lambda0
This test must be run in parallel
make: *** [test.sander.TI] Error 1

Again, re-writing scripts is far less fun than sudo apt-get install csh and forgetting about it.

41 – 47 Moving The Built Amber10 and AmberTools

The final sequence of steps moves the $HOME/Documents-built amber10 into /opt (if you choose to), removes the build from your $HOME directory, and resets your PATH and AMBERHOME variables in .bashrc and .profile, thereby completing the build process.
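For reference, assuming /opt/amber10 as the final location (substitute your own path if you keep the build elsewhere), the lines appended to .bashrc and .profile would look like:

```shell
export AMBERHOME=/opt/amber10        # assumed final install location
export PATH=$PATH:$AMBERHOME/exe     # Amber10 executables live in exe/
```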

And Finally…

If you have questions, comments, identified speed-ups, etc., please either send me an email or post them here.  Our concern as computational chemists should be making predictions and interpreting data, not making compilation errors and interpreting error messages.

Metamodern – The Trajectory Of Technology

Eric Drexler is categorically the most knowledgeable and well-rounded scientist I have ever met. Period.

– From an interview with Sander Olsen for, November 2005.

Friend, mentor, and co-author K. Eric Drexler has begun a new blog and I am thrilled to have at the click of a mouse the thoughts of a founding father of my field who remains a strong voice for its progress by seeing beyond the boundaries that define scientific disciplines.

If all scientists had his mastery of insight and interdisciplinary mindset, I dare say we might be done by now.

And I subscribed to the feed, too.