Running (Only) A Single-Point Energy Calculation In Crystal06/09, Proper Input Format For Long-Range Dispersion Contributions In Crystal09, And Removing The MPICH2 Content From The Output File In Pcrystal

Now enjoying the benefits of dispersion-corrected solid-state density functional theory in Crystal09 (and a proper MPICH2 implementation for infrared intensity calculations, although this is now a problem for reasons to be addressed in an upcoming post), three issues in recent calculations had me thinking hard enough about keyword formats and job setup that I have opted to post briefly about what to do, in case google and bing are your preferred methods of manual searching.

1. How To Run Only A Single-Point Energy Calculation In Crystal06/Crystal09

This had never come up before and, by the time I needed to find an input file to see what to do, the first google search turned up Civalleri’s Total Energy Calculation page, which currently has broken links to its .zip files. There is quite a bit in the manual about the different geometry optimization approaches, but a search for “single-point” provides no information about what to do when only a single-point energy calculation is wanted.
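For the impatient: as best I can tell from the defaults described in the manual, a single-point energy calculation is simply what you get whenever the geometry block contains no OPTGEOM section (to optimize instead, you would insert an OPTGEOM/END pair just before the END that closes the geometry block). Below is a bare-bones sketch of a .d12 deck using rock-salt MgO as a hypothetical example; the lattice parameter, the basis set records (shown as bracketed placeholders), and the SCF settings are stand-ins you would replace with your own.

MgO bulk, single-point energy
CRYSTAL
0 0 0
225
4.21
2
12 0.0 0.0 0.0
8 0.5 0.5 0.5
END
12 4
[Mg shell records from your basis set library of choice]
8 4
[O shell records]
99 0
END
DFT
B3LYP
END
SHRINK
8 8
TOLDEE
7
END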


Installing And Mounting Network Drives Using NFS In Ubuntu (And Generally)

This is another piece in an Ubuntu puzzle that, when assembled, will describe how to set up an MPI (message passing interface) computer cluster for running parallel calculations (upcoming). As a brief explanation of what’s going on, many of the MPI (OpenMPI, MPICH, MPICH2) set-up procedures you may stumble across online describe how to use the network file system (NFS) protocol to share one directory on the host node (head node/server node/master node/whatever) of your cluster so that, by mounting that network-accessible directory on each guest node (client node/slave node/whatever), the head and guest nodes all see the same work directory and executables (both MPI and your program of choice). There are more clever ways to set the cluster up that will likely run slightly faster than NFS allows, but we’ll ignore those for the moment. The install procedure below is Ubuntu-specific only in the apt-get stage (NFS support is not part of the default installation); after all of the components are installed, the setup should be Linux-universal.

LEGEND

Text in black – my ramblings.

Text in bold red – things you will type in the Terminal.

Text in green – text you will either see or will type into files (using pico, my preference).

Below is everything I need to do in Ubuntu to get NFS set up and working. I’ll be dividing the installation procedure into HOST and GUEST sections for organizational purposes.
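To give the flavor of it first, a generic version of the recipe looks roughly like the following; the export directory /mirror, the host name host_node, and the subnet 192.168.1.0/24 are placeholders you would change to match your own cluster.

On the HOST, in the Terminal:

sudo apt-get install nfs-kernel-server nfs-common portmap
sudo mkdir /mirror

In /etc/exports on the HOST, add:

/mirror 192.168.1.0/24(rw,sync,no_subtree_check)

Back in the Terminal on the HOST:

sudo exportfs -ra
sudo /etc/init.d/nfs-kernel-server restart

On each GUEST, in the Terminal:

sudo apt-get install nfs-common portmap
sudo mkdir /mirror
sudo mount host_node:/mirror /mirror

And, to have the mount survive a reboot, add to /etc/fstab on each GUEST:

host_node:/mirror   /mirror   nfs   defaults   0   0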


Compiling Single-Precision And Double-Precision GROMACS 3.3.3 With OpenMPI 1.2.6 Under OSX 10.5 (Leopard)

The following is a cookbook for compiling single-precision and double-precision GROMACS 3.3.3 under OSX 10.5 (specifically on a MacBook Pro, but most likely general to all things running 10.5) with OpenMPI 1.2.6. My hope is that this saves someone several hours of unnecessary work trying to overcome an otherwise undocumented incompatibility between MPICH2 and GROMACS 3.3.3 in Leopard (10.5).

The content of this page is the result of several unsuccessful hours spent trying to build the GROMACS-MPI binaries with MPICH2 1.0.6 and 1.0.7. MPICH2 compiles just fine with standard options, including successful testing of mpd and mpdtrace. GROMACS 3.3.3 also compiles just fine with standard options for both single-precision and double-precision builds. Compiling GROMACS with MPI support, however, yields an mdrun executable that fails at runtime.
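Switching the MPI layer to OpenMPI 1.2.6 is what ultimately worked. The full cookbook is longer than this, but the general shape of the build is sketched below; the install prefixes are placeholders, the program suffixes are my own convention, and the exact configure flags for your machine may differ.

Build and install OpenMPI, then put its bin directory at the front of your PATH:

./configure --prefix=/usr/local/openmpi
make
sudo make install
export PATH=/usr/local/openmpi/bin:$PATH

Single-precision MPI build of GROMACS 3.3.3:

CC=mpicc ./configure --enable-mpi --program-suffix=_mpi --prefix=/usr/local/gromacs
make mdrun
sudo make install-mdrun

Double-precision MPI build (run make distclean in the GROMACS source directory first):

CC=mpicc ./configure --enable-mpi --disable-float --program-suffix=_mpi_d --prefix=/usr/local/gromacs
make mdrun
sudo make install-mdrun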
