[Contents] [Prev] [Next] [End]

Chapter 12. Customizing Batch Jobs for LSF

This chapter describes how to customize your batch jobs to take advantage of LSF and LSF Batch features.

Environment Variables

When LSF Batch runs a batch job, it sets several environment variables. Batch jobs can use the values of these variables to control how they execute. The environment variables set by LSF Batch are:

LSB_JOBID
    The LSF Batch job ID number.

LSB_JOBFILENAME
    The full path name of the batch job file. This is a /bin/sh script on UNIX systems or a .BAT command script on Windows NT systems that invokes the batch job.

LSB_HOSTS
    The list of hosts selected by LSF Batch to run the batch job. If the job is run on a single processor, the value of LSB_HOSTS is the name of the execution host. For parallel jobs, the names of all execution hosts are listed, separated by spaces. The batch job file is run on the first host in the list.

LSB_QUEUE
    The name of the batch queue from which the job was dispatched.

LSB_JOBNAME
    The name of the batch job as specified by the -J job_name argument to bsub. If this argument was not given, the job name is the actual batch command as specified on the bsub command line.

LSB_RESTART
    If this batch job was submitted with the -r option to bsub, has run previously, and has been restarted because of a host failure, LSB_RESTART is set to the value Y. If this is not a restarted job, LSB_RESTART is not set.

LSB_EXIT_PRE_ABORT
    A queue-level or job-level pre-execution command can exit with this value if the job should be aborted instead of being requeued or executed.

LSB_EXIT_REQUEUE
    The list of exit values defined in the queue's REQUEUE_EXIT_VALUE parameter. If this variable is defined, a job is requeued if it exits with one of these values. This variable is not set if the queue does not have REQUEUE_EXIT_VALUE defined.

LSB_JOB_STARTER
    This variable is defined if a Job Starter command is defined for the queue. See 'Job Starter'.

LSB_INTERACTIVE
    This variable is set to 'Y' if the job is an interactive job, that is, one submitted using the -I option to bsub. It is not defined if the job is not interactive. See 'Interactive Batch Job Support'.

LS_JOBPID
    The process ID of the job. This is always a shell script process that runs the actual job.

LS_SUBCWD
    The directory on the submission host where the job was submitted. By default, LSF Batch assumes that a uniform user name and user ID space exists among all the hosts in the cluster; that is, a job submitted by a given user runs under the same user's account on the execution host. Where a non-uniform user ID/user name space exists, account mapping must be used to determine the account used to run a job. See 'User Controlled Account Mapping'.
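A job script can read these variables like any other environment variables. The following is a minimal sketch of a job script that reports what LSF Batch set for it; the default values and the per-host count are illustrative, not part of LSF:

```shell
#!/bin/sh
# Hypothetical sketch: a batch job script that inspects the variables
# LSF Batch sets. The ":-unset" defaults are only so the script also
# runs outside LSF Batch for testing.
echo "Job ID: ${LSB_JOBID:-unset}"
echo "Queue:  ${LSB_QUEUE:-unset}"
echo "Name:   ${LSB_JOBNAME:-unset}"

# Count the hosts allocated to the job
nhosts=0
for host in $LSB_HOSTS ; do
    nhosts=`expr $nhosts + 1`
done
echo "Allocated $nhosts host(s): $LSB_HOSTS"

# A restarted job may want to recover intermediate results instead of
# starting over from scratch
if [ "$LSB_RESTART" = "Y" ] ; then
    echo "Restarted after a host failure"
fi
```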

Parallel Jobs

Each parallel programming package has different requirements for specifying and communicating with all the hosts used by a parallel job. LSF is not tailored to work with a specific parallel programming package. Instead, LSF provides a generic interface so that any parallel package can be supported by writing shell scripts or wrapper programs. Example shell scripts are provided for running PVM, P4, MPI, and POE programs as parallel batch jobs.

Getting the Host List

The hosts allocated for the parallel job are passed to the batch job in the LSB_HOSTS environment variable. Some applications can take this list of hosts directly as a command line parameter. For other applications you may need to process the host list. The following example shows a /bin/sh script that processes all the hosts in the host list, including identifying the host where the job script is executing.

#!/bin/sh
# Process the list of host names in LSB_HOSTS
for host in $LSB_HOSTS ; do
    handle_host $host
done
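Some parallel packages read the host list from a file, one name per line, rather than from the command line. A minimal sketch follows; the fallback host list exists only so the script can be tested outside LSF Batch:

```shell
#!/bin/sh
# Hypothetical sketch: write the hosts in LSB_HOSTS to a machine file,
# one host name per line. The fallback value is only for running the
# script outside LSF Batch, where LSB_HOSTS is not set.
LSB_HOSTS=${LSB_HOSTS:-"hostA hostB"}
MACHFILE=/tmp/machines.$$
: > $MACHFILE
for host in $LSB_HOSTS ; do
    echo $host >> $MACHFILE
done
```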

LSF comes with a few scripts for running parallel jobs under LSF Batch, such as pvmjob, poejob, mpijob, and p4job. These scripts are installed in the directory defined by the LSF_BINDIR parameter in the lsf.conf file. You can modify these scripts to support additional parallel packages.

Starting Parallel Tasks With lstools

For simple parallel jobs you can use the lstools commands to start parts of the job on other hosts. Because the lstools commands handle signals transparently, LSF Batch can suspend and resume all components of your job without additional programming.

The simplest parallel job runs an identical copy of the executable on every host. The lsgrun command takes a list of host names and runs the specified task on each host. The lsgrun -p option specifies that the task should be run in parallel on each host. The example below submits a job that uses lsgrun to run myjob on all the selected batch hosts in parallel:

% bsub -n 10 'lsgrun -p -m "$LSB_HOSTS" myjob'
Job <3856> is submitted to default queue <normal>.

For more complicated jobs, you can write a shell script that runs lsrun in the background to start each component.
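Such a script can be sketched as follows; mycomponent is a placeholder for your own program, not an LSF command:

```shell
#!/bin/sh
# Hypothetical sketch: start one component of the job on each allocated
# host with lsrun, in the background, then wait for every component to
# finish. "mycomponent" is a placeholder for your own program.
for host in $LSB_HOSTS ; do
    lsrun -m $host mycomponent &
done
# Wait for all background components before the job script exits
wait
```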

Using lsmake to Run Parallel Batch Jobs

For parallel jobs that have a variety of different components to run, you can use lsmake. Create a makefile that lists all the components of your batch job and then submit the lsmake command to LSF Batch. The following example shows a bsub command and Makefile for a simple parallel job.

% bsub -n 4 lsmake -f Parjob.makefile
Job <3858> is submitted to default queue <normal>.
% cat Parjob.makefile
# Makefile to run example parallel job using lsbatch and lsmake

all:    part1 part2 part3 part4

part1 part2 part3:
        myjob data.$@

part4:
        myjob2 data.part1 data.part2 data.part3

The batch job has four components. The first three components run the myjob command on the data.part1, data.part2 and data.part3 files. The fourth component runs the myjob2 command on all three data files. There are no dependencies between the components, so lsmake runs them in parallel.

Submitting PVM Jobs to LSF Batch

PVM is a parallel programming system distributed by Oak Ridge National Laboratories. PVM programs are controlled by a file, the PVM hosts file, that contains host names and other information. The pvmjob shell script supplied with LSF can be used to run PVM programs as parallel LSF Batch jobs. The pvmjob script reads the LSF Batch environment variables, sets up the PVM hosts file and then runs the PVM job. If your PVM job needs special options in the hosts file, you can modify the pvmjob script.

For example, if the command line to run your PVM job is

% myjob data1 -o out1

the following command submits this job to LSF Batch to run on 10 hosts:

% bsub -n 10 pvmjob myjob data1 -o out1

Other parallel programming packages can be supported in the same way. The p4job shell script runs jobs that use the P4 parallel programming library. Other packages can be handled by creating similar scripts.

Submitting MPI Jobs to LSF Batch

The Message Passing Interface (MPI) is a portable library that supports parallel programming. LSF supports MPICH, a joint implementation of MPI by Argonne National Laboratory and Mississippi State University. This version supports both TCP/IP and IBM's Message Passing Library (MPL) communication protocols.

LSF provides an mpijob shell script that you can use to submit MPI jobs to LSF Batch. The mpijob script writes the hosts allocated to the job by the LSF Batch system to a file and supplies the file as an option to MPICH's mpirun command. The syntax of the mpijob command is

mpijob option mpirun program [arguments]

where option is one of the following:

-tcp
    Write the LSF Batch hosts to a PROCGROUP file, supply the -p4pg procgroup_file option to the mpirun command, and use the TCP/IP protocol. This is the default.

-mpl
    Write the LSF Batch hosts to a MACHINE file, supply the -machinefile machine_file option to the mpirun command, and use MPL on an SP-2 system.

The following examples show how to use mpijob to submit MPI jobs to LSF Batch.

To submit a job requesting four hosts and using the default TCP/IP protocol, use

% bsub -n 4 mpijob mpirun myjob

Before you can submit a job to a particular pool of IBM SP-2 nodes, an LSF administrator must install the SP-2 ELIM. The SP-2 ELIM provides the pool number and lock status of each node.

To submit the same job to run on four nodes in pool 1 on an IBM SP-2 system using MPL, use

% bsub -n 4 -R "pool == 1" mpijob -mpl mpirun myjob

To submit the same job to run on four nodes in pool 1 that are not locked (dedicated to using the High Performance Switch) on an SP-2 system using MPL, use

% bsub -n 4 -q mpiq -R "pool == 1 && lock == 0" mpijob -mpl mpirun myjob

Before you can submit a job using the IBM SP-2 High Performance Switch in dedicated mode, an LSF administrator must set up a queue for automatic requeue on job failure. The job queue will automatically requeue a job that failed because an SP-2 node was locked after LSF Batch selected the node but before the job was dispatched.
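Such a queue can be sketched as a fragment of the lsb.queues configuration file. This is a hypothetical example, not a configuration from this guide: the exit value shown is arbitrary, and you should use the value your jobs actually return when a node turns out to be locked.

```
Begin Queue
QUEUE_NAME          = mpiq
# Hypothetical: requeue jobs that exit with value 133, the value
# (for this example) returned when an SP-2 node was locked after
# the node was selected but before the job was dispatched.
REQUEUE_EXIT_VALUE  = 133
DESCRIPTION         = MPL jobs using the High Performance Switch
End Queue
```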

Submitting POE Jobs to LSF Batch

The Parallel Operating Environment (POE) is an execution environment provided by IBM on SP-2 systems to hide the differences between serial and parallel execution.

LSF provides a poejob shell script that you can use to submit POE jobs to LSF Batch. The poejob script translates the hosts allocated to the job by the LSF Batch system into an appropriate POE host list and sets up environment variables necessary to run the job.

The poejob script does not set the MP_EUILIB and MP_EUIDEVICE environment variables, so you must set them yourself before submitting the job. For example, to use the High Performance Switch in user space:

% setenv MP_EUILIB us

In this case MP_EUIDEVICE defaults to css0. Alternatively, to use IP over Ethernet:

% setenv MP_EUILIB ip
% setenv MP_EUIDEVICE en0

The following are examples of how to submit POE jobs.

To submit a job requesting four SP-2 nodes configured for the poeq queue, use

% bsub -n 4 -q poeq poejob myjob

By using LSF resource requirements, you can select appropriate nodes for your job.

To submit the same job requesting four SP-2 nodes from pool 2 configured for the poeq queue, use

% bsub -n 4 -R "pool == 2" -q poeq poejob myjob

To submit the same job requesting four SP-2 nodes from pool 2 with at least 20 megabytes of swap space, use

% bsub -n 4 -R "(pool == 2) && (swap > 20)" -q poeq poejob myjob

To submit the same job requesting four SP-2 nodes from pool 2 that are not locked (dedicated to using the High Performance Switch), use

% bsub -n 4 -R "(pool == 2) && (lock == 0)" -q poeq poejob myjob

Using a Job Starter for Parallel Jobs

The above examples use scripts to run parallel jobs under LSF Batch. Alternatively, your LSF administrator can configure such a script into your queue as a job starter. With a job starter configured for the queue, you can submit the above parallel jobs without typing the script name. See 'Using A Job Starter' in the LSF Administrator's Guide for more information about job starters.

To see if your queue already has a job starter defined, run the bqueues -l command.



Copyright © 1994-1997 Platform Computing Corporation.
All rights reserved.