Advance Preparation

  1. Before using DiCOS, you should have the following:

  2. Log in to the DiCOS UI via SSH with your DiCOS account:

  • Set up

    1. Initialize the voms proxy

      voms-proxy-init -voms <VO>

      $ voms-proxy-init -voms twgrid
      • note:
        • <VO> is one of the Virtual Organizations you have joined, for example: twgrid, atlas.
  • Submit job

    Submit job to DiCOS.

    dsub --computingSite <Computing Site> --jobCommands <"Commands"> --runList <Input File List> [--noDM] --taskname <Task Name> --nfiles <Number> --userlib <User Libraries> --transformation

    $ dsub --computingSite ANALY_TAIWAN_TWGRID_PHYS_POSIX --jobCommands " 64"  --runList myinputs.list --noDM --taskname Asize64 --nfiles 2 --userlib --transformation
    • --computingSite
      • Used to specify the computing site.
      • Each VO has its corresponding computing sites, and each site contains different resources.
      • For twgrid or IOP users, please always use sites with the prefix ANALY_TAIWAN_TWGRID_*
      • Reference : Available Computing Sites
    • --jobCommands
      • <"Commands"> is the command line that runs your applications or scripts, just like the one you would run on your PC.
      • It can be any UNIX command, shell script, or other executable application.
      • For example :
        --jobCommands "sh arg1 arg2"
        --jobCommands "echo hostname; cat myInput.txt"
        --jobCommands "python"
    • --runList
      • <Input File List> is a local text file that specifies the input files.
      • Each line will be treated as one input file.
      • There are three kinds of formats:
        1. Using our Distributed Data Management (DDM) system, the format is: [scope]:[filename]
          • You have to upload the input files to our DDM first, via the web UI or the command line.
          • The pilot will automatically download the input files from DDM to the working directory.
          • For example:
    • --noDM
      • This option matters if your inputs do not come from our Distributed Data Management (DDM) system: with --noDM, the pilot will not contact the DDM to fetch your inputs.
    • --taskname
      • Used to specify the task name; it is also used as the output dataset name.
    • --nfiles
      • Used to specify how many input files per job.
      • dsub will divide the files in the <Input File List> in sequence.
      • For example, if there are 50 files listed in the <Input File List> and --nfiles 5, dsub will submit 10 jobs, each with 5 inputs.
    • --userlib
      • Used to specify user libraries.
      • These user libraries must be registered in your user scope in our distributed data management system (Rucio).
      • If a library is a tarball, the pilot will untar it for you in the working directory.
      • Use "," to separate multiple files.
      • Just give the file name(s), for example:
        --userlib ISS_Proton_Dec12_V1.tar.gz,
    • --transformation
    • --help
      • Get more information about dsub with the command dsub --help
        $ dsub --help
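As a local sketch of the --jobCommands behaviour above (the working directory layout and the contents of myInput.txt are made up here), the quoted string is executed as an ordinary shell command line in the job's working directory:

```shell
# Simulate the job's working directory and a staged-in input file.
mkdir -p workdir
echo "some input data" > workdir/myInput.txt

# Effectively what runs for --jobCommands "echo hostname; cat myInput.txt":
( cd workdir && bash -c "echo hostname; cat myInput.txt" ) > job.out
cat job.out
```

Note that the whole command string is quoted once, so `;`-separated commands run in order inside a single shell.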
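For --runList, the DDM format above can be distinguished from a plain file path by its [scope]:[filename] pattern. A minimal sketch, with hypothetical scope and file names:

```shell
# Hypothetical input list mixing a DDM entry (scope:filename) and a plain path.
cat > myinputs.list <<'EOF'
user.alice:events_001.root
/nfs/data/events_002.root
EOF

# Classify each line: DDM entries contain a colon, local paths do not.
while IFS= read -r entry; do
  case "$entry" in
    *:*) echo "DDM $entry" ;;
    *)   echo "LOCAL $entry" ;;
  esac
done < myinputs.list > classified.txt
cat classified.txt
```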
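How --nfiles divides the input list in sequence can be sketched locally with the standard split utility (the list name and file names below are hypothetical):

```shell
# 10 hypothetical input files, 5 per job -> dsub would submit 2 jobs.
seq -f "file_%g.root" 1 10 > inputs10.list

# Split in sequence, 5 lines per chunk: job_aa, job_ab.
split -l 5 inputs10.list job_
ls job_*
```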
  • Get job status

    Get the job status and other related information.

    <panda ID 1> <panda ID 2> ... <panda ID N>

    $ 1473799114 1473799115
    • note:
      • You could also go to web UI to monitor the jobs.
  • Cancel job

    Cancel jobs in the "running" or "activated" status.

    <panda ID 1> <panda ID 2> ... <panda ID N>

    $ 1473799114 1473799115
    • note:
      • "activated" jobs will be cancelled entirely.
      • It is not guaranteed that all "running" jobs will be killed; some of them will keep running until they finish.
  • Rerun job

    In case a job failed, rerun it without changing anything.

    <panda ID 1> <panda ID 2> ... <panda ID N>

    $ 1473799114 1473799115
    • note:
      • This will submit the jobs again with the same arguments as the original submission, and return new panda IDs.


  • Preparation

    1. Assume you have an application and a job script like the following:

      • myprog.exe

        myprog.exe takes the following arguments:

        ./myprog.exe [array size] [input file 1] [input file 2] ...[input file(N)]


        Here is the job script:

        # The "noDmFile0.txt" comes with --noDM, in this case, we don't use DDM system to get inputs
        ./myprog.exe $1 $( cat noDmFile0.txt | awk '{printf $0" "}' )
    2. Then pack them into a tarball, say "myuserlib.tar.gz".

      tar -czf myuserlib.tar.gz myprog.exe
    3. Upload myuserlib.tar.gz to our data management system; please refer to the web UI or command-line instructions.

    4. Once your myuserlib.tar.gz is available in our distributed DM system, you can pick up your inputs either from the distributed DM system or from anywhere the cluster worker nodes can access.

    5. Here we assume the input files are stored on local NFS, so we create a list called "myinputs.list" with the following contents:
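The tarball handling in the steps above can be checked locally: the pilot's treatment of --userlib amounts to extracting the tarball into the job's working directory. A sketch using a stub myprog.exe (its contents are made up here):

```shell
# Build the user library tarball as in step 2 (myprog.exe is a stub here).
printf '#!/bin/sh\necho "array size: $1"\n' > myprog.exe
chmod +x myprog.exe
tar -czf myuserlib.tar.gz myprog.exe

# What the pilot does with --userlib myuserlib.tar.gz: untar in the job workdir.
mkdir -p jobdir
tar -xzf myuserlib.tar.gz -C jobdir
ls jobdir
```

tar preserves the executable bit, so the extracted myprog.exe is runnable in the working directory.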
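The awk pipeline in the job script above joins the lines of noDmFile0.txt into a single space-separated argument string. A self-contained sketch with hypothetical input paths:

```shell
# noDmFile0.txt is written by the pilot when --noDM is used; fake it here.
printf '/nfs/data/run1.dat\n/nfs/data/run2.dat\n' > noDmFile0.txt

# Same pipeline as in the job script: one space-separated argument string,
# so myprog.exe receives each input file as a separate argument.
args=$( cat noDmFile0.txt | awk '{printf $0" "}' )
echo "$args" > args.txt
cat args.txt
```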

  • Execute

    1. Initialize the voms proxy, assuming you are in the twgrid VO.

      $ voms-proxy-init -voms twgrid
    2. Submit job

      • We want to process two inputs per job, run a small case with array size 64, and run the jobs at ANALY_TAIWAN_TWGRID_PHYS.
      • Because the inputs are not from the DDM, use --noDM
      • The following command will do:
        $ dsub --computingSite ANALY_TAIWAN_TWGRID_PHYS --jobCommands " 64"  --runList myinputs.list --noDM --taskname Asize64 --nfiles 2 --userlib --transformation
    3. If the job is submitted successfully, it will return panda IDs like this:

      PandaID: 1473799114
      PandaID: 1473799115
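If you submit from a script, the panda IDs can be captured from the `PandaID: <number>` lines shown above (the capture file name is made up here, and any other dsub output is ignored by the pattern):

```shell
# Example submission output as shown above, captured to a file for the sketch.
cat > submit.out <<'EOF'
PandaID: 1473799114
PandaID: 1473799115
EOF

# Extract just the numeric IDs, one per line, for later status checks.
awk '/^PandaID:/ {print $2}' submit.out > ids.txt
cat ids.txt
```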
  • Result

    1. Check the job status, or visit the web UI to monitor the jobs.

      $ 1473799114 1473799115
    2. If a job failed, rerun it.

      $ 1473799114 1473799115
    3. Once the jobs have finished, download the output to check the results.

      • The task name "Asize64" is also the name of the output dataset; download this dataset to get the outputs.
