Job Submission File Examples

  • minimal submission file that runs the test.sh script
executable = test.sh
queue
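
A minimal test.sh could look as follows (a hypothetical example; any executable script works)

#!/bin/bash
# print where and when the job runs
echo "running on $(hostname) at $(date)"

Assuming the submit description above is saved as job.submit, submit it and check the queue with

condor_submit job.submit
condor_q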
  • keep job stdout/stderr and logging data
executable = test.sh
#input = test.in
output = test.out.$(ClusterId).$(Process)
error = test.err.$(ClusterId).$(Process)
log = test.log.$(ClusterId)
queue 10
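
With queue 10 all jobs share one cluster id; $(ClusterId) and $(Process) are expanded in the file names, so a hypothetical cluster 1234 produces test.out.1234.0 … test.out.1234.9 and a single log test.log.1234. You can wait for all jobs in the cluster to finish with

condor_wait test.log.1234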
  • pass job arguments to the executed script
executable = test.sh
arguments = $(ClusterId)
getenv = True
queue
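
Inside the script the submit file arguments arrive as ordinary positional parameters; a hypothetical test.sh printing the cluster id passed above

#!/bin/bash
# $1 holds the first argument from the submit file, here $(ClusterId)
echo "submitted as part of cluster $1"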
  • propagate current environment variables and add new env variables
executable = test.sh
getenv = True
environment = "NAME1=VAR1"
environment = "NAME2=VAR2"
queue
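
To verify that the variables reached the job, the script can simply print them (a hypothetical check)

#!/bin/bash
# print the explicitly set variables from the job environment
printenv | grep -E '^NAME[12]='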
  • be aware that the user HOME directory is not set for jobs by default (this differs from the PBS batch system); use getenv or set the environment explicitly
executable = test.sh
environment = "HOME=$ENV(HOME)"
queue

Without the HOME environment variable some commands can fail, e.g.

cd
bash: cd: HOME not set

but you can still use

cd ~
  • transfer job input/output using the HTCondor file transfer mechanism. We do not recommend using this method; simply use the cp command for input/output files in your execution script (see the sketch after this example).
executable = test.sh
should_transfer_files = YES
when_to_transfer_output = ON_EXIT_OR_EVICT
transfer_input_files = file1.in,file2.in
transfer_output_files = file1.out,file2.out
queue
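
The recommended alternative is to copy files inside the job script itself; a sketch with hypothetical paths on the shared filesystem

#!/bin/bash
# copy inputs from the shared filesystem into the job working directory
cp /path/on/shared/fs/file1.in /path/on/shared/fs/file2.in .
# ... run the actual computation here ...
# copy results back to the shared filesystem
cp file1.out file2.out /path/on/shared/fs/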
  • don’t transfer the executable/script, use an existing file accessible from the worker node
executable = /bin/cat
arguments = /etc/redhat-release
transfer_executable = False
queue
  • use a specific accounting group for submitted batch jobs and specify the user job priority (range [-20, 20])
executable = test.sh
accounting_group = group_auger.user
accounting_group_user = auger
priority = 0
queue
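
Current fair-share usage and priorities of accounting groups and users can be inspected with

condor_userprio -all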
  • common resource requirements (NOTE: the current batch configuration doesn’t support jobs that require more than 8 cores; please contact the site administrators if that’s not enough for your jobs)
executable = test.sh
request_cpus = 4
request_memory = 6GB
request_disk = 100GB
queue
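
If a job with large resource requests stays idle, condor_q can show how many machines match its requirements, e.g. for a hypothetical job id

condor_q -better-analyze 1234.0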
  • specify walltime in seconds (default: 1200)
executable = test.sh
+MaxRuntime = 60*60
queue

It is possible to change this value later, but updates are applied only to idle jobs, e.g.

condor_qedit jobid MaxRuntime=86400
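
The current value can be checked for a hypothetical job id with

condor_q 1234.0 -af MaxRuntime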
  • submit a job that needs a grid/VOMS certificate proxy
executable = test.sh
use_x509userproxy = True
# unfortunately HTCondor needs to be forced to transfer files,
# because the X509 proxy lives in /tmp by default, which is not on the shared filesystem
# (be aware that all transfer_input_files and transfer_output_files will go
# via the condor.farm.particle.cz submission node => don't specify big files)
should_transfer_files = YES
queue

Submit job

voms-proxy-init -voms atlas
condor_submit -spool job.submit

Download finished files

condor_transfer_data
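
The remaining lifetime of the VOMS proxy can be checked at any time with

voms-proxy-info -all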
  • submit a job that needs a grid/VOMS certificate proxy using a MyProxy server
executable = test.sh
#use_x509userproxy = True
#MyProxyHost = myproxy.cern.ch
#MyProxyPassword = secret
queue

Submit job

myproxy-init -s myproxy.cern.ch -l username -m atlas
condor_submit job.submit

In the job script, get the proxy from the MyProxy server

echo secret | myproxy-logon -s myproxy.cern.ch -l username -S
  • run batch jobs only on specific WNs
executable = test.sh
requirements = regexp("^db[0-9]+\.farm\.particle\.cz$", Machine, "i")
#requirements = stringListMember(Machine, "db1.farm.particle.cz,db2.farm.particle.cz")
#requirements = (Machine == "db1.farm.particle.cz")||(Machine == "db2.farm.particle.cz")
queue
  • use only WNs with specific CPU features
executable = test.sh
requirements = has_avx2
queue

Use condor_status to see all available machine attributes, e.g.

condor_status -startd -long -constraint 'SlotType =!= "Dynamic"' mikan01.farm.particle.cz
condor_status -startd -long -constraint 'SlotType =!= "Dynamic"' mikan01.farm.particle.cz | grep -i ^has
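
To list only the worker nodes that advertise a given feature (here the has_avx2 attribute used above)

condor_status -constraint 'has_avx2'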
  • use different directories for submitted jobs
executable = test.sh
initialdir = job1
queue
initialdir = job2
queue
initialdir = jobn/$(Process)
queue 5
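
The initialdir directories are expected to exist before submission; a hypothetical layout matching the example above can be created with

mkdir -p job1 job2 jobn/{0..4}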
  • submit a job for each data file in the current directory
executable = test.sh
arguments = $(infile)
queue infile matching *.dat
#queue infile in (wi.dat ca.dat ia.dat)
#queue infile from state_list.txt
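
With the queue infile from state_list.txt variant, each non-empty line of the file provides one value for $(infile); a hypothetical state_list.txt matching the commented example

wi.dat
ca.dat
ia.dat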
  • use Singularity containers to provide an independent OS environment (e.g. if you want to use the legacy CentOS 7 or SLC6 FZU environment)
executable = test.sh
+SingularityImage = "/cvmfs/farm.particle.cz/singularity/fzu_wn-centos7"
queue
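
The same image can be inspected interactively on a login node (assuming the singularity command is available there)

singularity shell /cvmfs/farm.particle.cz/singularity/fzu_wn-centos7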

List of Singularity images that can be useful for HEP users

  • legacy FZU Centos7 or SLC6 installation
    • /cvmfs/farm.particle.cz/singularity/fzu_wn-centos7
    • /cvmfs/farm.particle.cz/singularity/fzu_wn-slc6
  • CERN minimal OS images
    • /cvmfs/unpacked.cern.ch/registry.hub.docker.com/library/centos:centos6
    • /cvmfs/unpacked.cern.ch/registry.hub.docker.com/library/centos:centos7
    • /cvmfs/unpacked.cern.ch/registry.hub.docker.com/library/debian:stable
    • /cvmfs/unpacked.cern.ch/registry.hub.docker.com/library/fedora:latest
  • ATLAS images
    • /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-slc5
    • /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos6
    • /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7
  • OSG images
    • /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osg-3.3-wn-el6:latest
    • /cvmfs/singularity.opensciencegrid.org/opensciencegrid/osg-3.3-wn-el7:latest