8.6. Container Example - Montage Workflow

8.6.1. Montage Using Containers

This section contains an example of a real workflow running inside Singularity containers. The application is Montage, using the montage-workflow-v2 workflow. Be aware that this workflow can be fairly data intensive, and when running with containers in the condorio or nonsharedfs data management modes, staging the application data and the container image to each job can result in a non-trivial amount of network traffic.
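As a sketch, the data management mode referred to above is selected via the Pegasus properties. The snippet below is illustrative (the property name is standard, but the right value depends on your site setup):

# pegasus.properties (illustrative): with condorio, HTCondor file
# transfers stage both the application data and the container image
# to each job, which is what generates the network traffic noted above.
pegasus.data.configuration = condorio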

The software dependencies consist of the Montage software stack and AstroPy. These are installed into the image (see the Singularity file in the GitHub repository). The image has been made available on Singularity Hub.
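If you want to inspect the image or avoid repeated downloads, it can be pulled down once ahead of time. This is a sketch; the resulting local filename is whatever your Singularity version chooses by default:

singularity pull shub://singularity-hub.org/pegasus-isi/montage-workflow-v2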

Now that we have an image, the next step is to check out the workflow from GitHub and use it to create an abstract workflow description, a transformation catalog, and a replica catalog. The montage-workflow.py command creates all of this for us, but the command itself requires Montage in order to look up input data for the specified location in the sky. To provide that environment, run the command inside the same Singularity image. For example:

singularity exec \
            --bind $PWD:/srv --workdir /srv \
            shub://singularity-hub.org/pegasus-isi/montage-workflow-v2 \
            /srv/montage-workflow.py \
                --tc-target container \
                --center "56.7 24.00" \
                --degrees 2.0 \
                --band dss:DSS2B:blue \
                --band dss:DSS2R:green \
                --band dss:DSS2IR:red

The command executes a data find for the 3 specified bands, 2.0 degrees around the location 56.7 24.00, and generates a workflow to combine the images into a single image. One extra flag is provided to let the command know we want to execute the workflow inside containers: --tc-target container. The result is a transformation catalog in data/tc.txt, which starts with:
cont montage {
   type "singularity"
   image "shub://singularity-hub.org/pegasus-isi/montage-workflow-v2"
   profile env "MONTAGE_HOME" "/opt/Montage"
}

tr mDiffFit {
  site condor_pool {
    type "INSTALLED"
    container "montage"
    pfn "file:///opt/Montage/bin/mDiffFit"
    profile pegasus "clusters.size" "5"
  }
}

The first entry describes the container: where the image can be found (Singularity Hub in this example), and a special environment variable we want set for the jobs.

The second entry, of which there are many more similar ones in the file, describes the application. Note how it refers back to the "montage" container, specifying that we want the job to be wrapped in the container.

In the data/ directory, we can also find the abstract workflow (montage.dax) and the replica catalog (rc.txt). Note that these are the same as if the workflow were running in a non-container environment. To plan the workflow:

pegasus-plan \
        --dir work \
        --relative-dir `date +'%s'` \
        --dax data/montage.dax \
        --sites condor_pool \
        --output-site local \
        --cluster horizontal
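
Once planning succeeds, pegasus-plan prints the command to submit the workflow. As a sketch (the submit directory below is illustrative; use the path from your pegasus-plan output, which here is work/ followed by the timestamp from the --relative-dir option):

pegasus-run work/1234567890
pegasus-status -l work/1234567890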