OptE on Topsail

UNC's getting started on Topsail page: http://help.unc.edu/6214

* Load the gcc development suite module: module load hpc/mvapich-gcc
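To confirm the module actually loaded (a quick sanity check; your account's module list may contain other entries too):

module list    # hpc/mvapich-gcc should appear among the loaded modules
which mpiCC    # should report the MPI compiler wrapper the module put on your PATH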

* Get the mini source onto the scratch partition

On a local machine:

cd /tmp
svn checkout https://svn.rosettacommons.org/source/trunk/mini mini
cd mini
svn checkout https://svn.rosettacommons.org/source/trunk/minirosetta_database database
cd ..
tar -czf mini.tar.gz mini
scp mini.tar.gz yourOnyen@topsail.unc.edu:/ifs1/scr/yourOnyen/
ssh yourOnyen@topsail.unc.edu
cd /ifs1/scr/yourOnyen
mkdir mini_optE
tar -xzf mini.tar.gz -C mini_optE
cd mini_optE
mv mini/database minirosetta_database
The scratch directory auto-cleans files older than 21 days, so back up anything you want to keep to mass storage or AFS.
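One way to do that, sketched here with a placeholder ~/backup destination (substitute your own mass-storage or AFS path):

cd /ifs1/scr/yourOnyen
tar -czf mini_optE_backup.tar.gz mini_optE
cp mini_optE_backup.tar.gz ~/backup/    # placeholder destination; use your mass storage or AFS space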

Edit mini/tools/build/basic.settings: in the "gcc, mpi" section, change mpicc to /opt/lam/gnu/bin/mpicc and mpiCC to /opt/lam/gnu/bin/mpiCC.
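A minimal sketch of that edit using sed, assuming the compiler names appear in basic.settings as the quoted strings "mpicc" and "mpiCC" (inspect the file first, since the layout of the "gcc, mpi" section varies between revisions):

grep -n mpi mini/tools/build/basic.settings    # locate the "gcc, mpi" section before editing
sed -i -e 's|"mpicc"|"/opt/lam/gnu/bin/mpicc"|g' \
       -e 's|"mpiCC"|"/opt/lam/gnu/bin/mpiCC"|g' mini/tools/build/basic.settings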

cd mini_optE/mini/external/lib
wget http://www.zlib.net/zlib-1.2.3.tar.gz
tar -xzf zlib-1.2.3.tar.gz
cd ../..    # back to the mini root, where scons.py lives
python scons.py extras=mpi mode=release bin
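If the build finishes, the MPI-enabled executables are installed under bin/; a quick check against one of the apps used later on this page (the exact build-tree path may differ slightly on your setup):

ls bin/fixbb.linuxgccrelease
ls build/src/release/linux/*/64/x86/gcc/mpi/    # the mpi build tree the binaries were installed from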

Getting MPI working on Topsail

$cd /ifs1/scr/momeara/mini_optE/mini
$bsub -n 1 -q debug -o out_fixbb.%J -e err_fixbb.%J -a mvapich mpirun ./bin/fixbb.linuxgccrelease \
-s /ifs1/scr/momeara/mini_optE/mini/test/integration/tests/fixbb/1l2y.pdb
# bsub prints the job number for this submission
$bjobs <job number>
$cat err_fixbb.<job number>

# which returns this:
#-----------------------------------------------------------------------------
#
#It seems that there is no lamd running on the host cmp-51-9.local.
#
#This indicates that the LAM/MPI runtime environment is not operating.
#The LAM/MPI runtime environment is necessary for MPI programs to run
#(the MPI program tried to invoke the "MPI_Init" function).
#
#Please run the "lamboot" command to start the LAM/MPI runtime
#environment.  See the LAM/MPI documentation for how to invoke
#"lamboot" across multiple machines.
#-----------------------------------------------------------------------------

The problem is that the binary was linked against LAM/MPI (the /opt/lam compilers above), while Topsail jobs run under MVAPICH. When you run

  module load hpc/mvapich-gcc

several things happen:

  • sets environment variables: compiler locations, MPICH_HOME, PATH, MANPATH, LD_LIBRARY_PATH, and INCLUDE. In particular, "which mpiCC" now returns "/usr/local/ofed/mpi/gcc/bin/mpiCC"

[momeara@topsail-login1 ~]$ echo $LD_LIBRARY_PATH
/usr/local/ofed/lib64:/usr/local/ofed/lib:/usr/local/ofed/mpi/gcc/lib:/opt/lsfhpc/6.2/linux2.6-glibc2.3-x86_64/lib:/opt/intel/fce/9.1.041/lib:/opt/intel/cce/9.1.047/lib
[momeara@topsail-login1 ~]$ echo $INCLUDE
/usr/local/ofed/include:/usr/local/ofed/mpi/gcc/include
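Rather than reverse-engineering the environment by hand, the module system can also report what it changes (the output format depends on the Modules version installed on Topsail):

module show hpc/mvapich-gcc    # lists the PATH, LD_LIBRARY_PATH, INCLUDE, etc. settings the module applies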

Put all of those paths into the "prepends" section of mini/tools/build/site.settings so it looks like this:

settings = {
    "site" : {
        "prepends" : {
            # Location of standard and system binaries
            "program_path" : [
                "/usr/local/ofed/mpi/gcc/",
                # Path to GCC compiler if not in the os rule
                # Path to Intel C++ compiler if not in the os rule
            ],
            # Location of standard and system header files if not in the os rule
            "include_path" : [
                "/usr/local/ofed/include",
                "/usr/local/ofed/mpi/gcc/include",
            ],
            # Location of standard and system libraries if not in the os rule.
            "library_path" : [
                "/usr/local/ofed/lib64",
                "/usr/local/ofed/lib",
                "/usr/local/ofed/mpi/gcc/lib",
                "/opt/lsfhpc/6.2/linux2.6-glibc2.3-x86_64/lib",
                "/opt/intel/fce/9.1.041/lib",
                "/opt/intel/cce/9.1.047/lib",
            ],
        },
        "appends" : {
        },
        "overrides" : {
        },
        "removes" : {
        },
    }
}

Then run

./scons.py extras=mpi,static mode=release -j7 bin

I get this warning message:

/opt/mpich/gnu/lib/libmpich.a(p4_secure.o)(.text+0x278): In function `start_slave':
: warning: Using 'getpwuid' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
/opt/mpich/gnu/lib/libmpich.a(p4_sock_util.o)(.text+0xcd8): In function `gethostbyname_p4':
: warning: Using 'gethostbyname' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
Install file: "build/src/release/linux/2.6/64/x86/gcc/mpi-static/inv_kin_lig_loop_design.linuxgccrelease" as "bin/inv_kin_lig_loop_design.linuxgccrelease"

According to this bug report, this is a problem with an outdated version of glibc.
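To see which glibc the login node is linking against (for comparison with whatever version the bug report discusses), ask the loader directly:

ldd --version | head -1    # the first line reports the glibc version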

To check that it built OK, test it on an application that uses MPI:

$bsub -q debug -o output.%J -e err.%J -n 2 -a mvapich mpirun ./bin/optE_parallel.linuxgccrelease
$bjobs #to see how the job is running

$cat output.<job number>
...
The output (if any) follows:

core.init: command: ./optE_parallel.linuxgccrelease
core.init: 'RNG device' seed mode, using '/dev/urandom', seed=1556917344 seed_offset=0 real_seed=1556917344
core.init.random: RandomGenerator:init: Normal mode, seed=1556917344 RG_type=mt19937
core.options.util: Use either -s or -l to designate one or more start_files
core.init: command: ./optE_parallel.linuxgccrelease
core.init: 'RNG device' seed mode, using '/dev/urandom', seed=-537361992 seed_offset=0 real_seed=-537361992
core.init.random: RandomGenerator:init: Normal mode, seed=-537361992 RG_type=mt19937
core.options.util: Use either -s or -l to designate one or more start_files
...
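Seeing the core.init banner twice, once per rank, means MPI itself initialized correctly on both processes; the start_files complaint is just the application asking for input. A hedged sketch of the next step, pointing it at the database unpacked earlier and the same test PDB used above (a real optE run needs its own weight-fitting flags beyond these two):

$bsub -q debug -o output.%J -e err.%J -n 2 -a mvapich mpirun ./bin/optE_parallel.linuxgccrelease \
    -database /ifs1/scr/yourOnyen/mini_optE/minirosetta_database \
    -s test/integration/tests/fixbb/1l2y.pdb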

-- MatthewOmeara - 04 Feb 2009