# Sample script to run a PBS job (referrer: pbs.html)

# This script shows several issues that need to be addressed when your
# job is more complicated.  You can use the "save as" feature of your
# browser to make a copy of this file, and then edit it as you need
# (removing most of the comments).

# Specify the shell for PBS (you could also use /bin/csh with different
# syntax).
#PBS -S /bin/sh

# Specify the queue explicitly.  The default isn't working yet.
#PBS -q sixpac@bud

# Specify the CPU time limit in seconds (required).
#PBS -l cput=3600

# It's a good idea to announce when and on which machine your job is
# running.
echo Job started on `uname -n` at `date`

# This script is going to work with two directories.  We'll make
# symbolic names for them.

# This is the name of the directory containing permanent data and
# programs.  (Fill in the actual name here.  Use $HOME for your Mathnet
# home directory if that's where the permanent files are.)
h=$HOME/myproject

# This directory will contain large files to which high-speed I/O is
# done.  You want to put it on the local machine; local I/O is much
# faster than I/O via NFS to another machine.  Use $USER for your
# login ID to keep your files separate from those of other users.  You
# may make subdirectories if convenient.
s=/scr/$USER/flow_regime

# If some accident caused two jobs to run at the same time on the same
# machine, or if the previous job crashed and its results have not been
# saved properly, you want to exit.  The script creates a lock file;
# if it's still there, something went wrong.
if [ -f $s/itsrunning ] ; then
    echo Exiting because a previous job is using the directory
    cat $s/itsrunning
    exit 4
fi

# Since we have to rebuild the scratch directory anyway, let's just
# remove the whole thing first.  No problem if the directory never
# existed before.  See the end of the script for suggestions on how to
# save important result files.
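# (A sketch of a possible refinement, not used below: when the lock
# file is created, you could record the PBS job ID along with the date,
# which makes a stale lock easier to trace.  PBS sets $PBS_JOBID in the
# job's environment.  For example:
#
#   echo Job $PBS_JOBID running, `date` > $s/itsrunning
# )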
rm -rf $s

# Make the scratch directory (and containing directories), change to
# it, and populate it with the needed files, especially the lock file.
mkdir -p $s
cd $s
echo The job is running, `date` > $s/itsrunning

# If the program needs to read a short file with a fixed name for its
# input parameters, an easy way to set that up is with a symbolic link
# to a pre-existing file in the permanent directory.  You could also
# just copy the file.  The most elegant solution is for the program to
# read its parameters from standard input, but let's assume it's not
# elegant.
ln -s $h/flow_regime_3 paramfile

# Here's an alternative way that keeps the parameters in this script.
cat > miscparams <<EOF
xstart 1.000
xend 1.650
xdelta 5.0e-3
EOF

# Now the program runs.  PBS captures standard output and standard
# error.  The program can rapidly write its large scratch files in the
# current directory, which is on the local machine.  It's generally
# reasonable to keep the program file in the permanent directory.  If
# you run the same project on different architectures (i386-SunOS,
# sparc-SunOS, i386-Linux), you need a separate executable file for
# each, with a distinguishing name.
$h/program.i386-SunOS

# Different programs produce output in different ways, and this section
# will have to be adapted to your particular needs.  Here's how to
# append a short summary file to a logfile in the permanent directory.
cat summaryout >> $h/logfile

# Files in your permanent directory have to have different names.
# Here's a way to differentiate them using the hostname, date, and
# time.  Remember not to exceed your disc quota.
mv flowmap $h/flowmap.`uname -n`.`date +%Y-%m-%d.%H:%M:%S`

# Be nice: get rid of your giant files that aren't needed any more.
# (Fill in the names of the scratch files that the program writes.)
rm -f state_cache flow_phase_A flow_phase_B

# All done; remove the lock file, allowing this directory to be used by
# the next instance of your job.
rm itsrunning
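# (A sketch of one more possible refinement, not part of the script
# above: instead of hard-coding the executable name for one machine,
# you could build the architecture suffix from uname.  This assumes
# your executables are named program.<processor>-<OS>, matching the
# i386-SunOS / sparc-SunOS / i386-Linux convention mentioned earlier:
#
#   arch=`uname -p`-`uname -s`
#   $h/program.$arch
# )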