A generic grid environment is shown in the figure above. We will describe it from the submit host on the left, to the head node at the top right, and then to the worker nodes at the bottom right.
On the left side is the machine where Pegasus runs, HTCondor schedules jobs, and workflows are executed. We call it the submit host (SH), though its functionality may also be provided by a virtual machine image. To communicate properly over secured channels, the submit host must have an accurate notion of time, i.e. it should run an NTP daemon. It also requires a public IP address so that it can both connect to remote clusters and receive connections back from them.
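A quick sanity check of these two requirements on the submit host might look like the following; the exact commands vary by distribution, and this transcript is only a sketch:

```
# Check that the clock is synchronized (one of these, depending on the distro)
ntpstat
chronyc tracking

# Confirm that the submit host's name resolves to a public IP address
host $(hostname -f)
```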
In order to send a job request to the remote cluster, HTCondor wraps the job into Globus calls via HTCondor-G. Globus uses GRAM to manage jobs on remote sites. In terms of the software stack, Pegasus wraps the job for HTCondor, HTCondor wraps it for Globus, and Globus transports it to the remote site, where the Globus layer is unwrapped and the job is handed to the remote site's resource manager (RM).
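Concretely, an HTCondor-G job is described by a submit file that names the remote GRAM endpoint. The sketch below assumes a hypothetical gatekeeper `hn.example.edu` fronting a PBS resource manager:

```
# Hypothetical HTCondor-G submit file for a GRAM5 site
universe      = grid
grid_resource = gt5 hn.example.edu/jobmanager-pbs
executable    = /bin/hostname
output        = job.out
error         = job.err
log           = job.log
queue
```

In a Pegasus workflow, such submit files are generated automatically from the site catalog; you would rarely write one by hand.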
To be able to communicate using the Grid Security Infrastructure (GSI), the submit machine needs to have the certificate authority (CA) certificates configured, requires a host certificate in certain circumstances, and the user needs a user certificate that is enabled on the remote site. On the remote end, the gatekeeper node requires a host certificate, all signing CA certificate chains and policy files, and a good time source.
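A typical pre-flight check from the submit host might look like the following transcript; the gatekeeper name is hypothetical, and the output depends on your certificate setup:

```
# Inspect the user certificate and create a short-lived proxy
grid-cert-info -subject          # shows the certificate's distinguished name (DN)
grid-proxy-init                  # prompts for the private key passphrase
grid-proxy-info -timeleft        # remaining proxy lifetime in seconds

# Test GSI authentication against a (hypothetical) remote gatekeeper
globusrun -a -r hn.example.edu/jobmanager-fork
```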
In a grid environment, there are one or more clusters accessible via grid middleware like the Globus Toolkit. In the case of Globus, the Globus gatekeeper listens on TCP port 2119 of the remote cluster. The port is opened on a single machine called the head node (HN). The head node is typically located in a de-militarized zone (DMZ) of the firewall setup, as it requires limited outside connectivity and a public IP address so that it can be contacted. Additionally, once the gatekeeper has accepted a job, it passes it on to a jobmanager. These jobmanagers often require a limited port range, in this example TCP ports 40000-41000, to call back to the submit machine.
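The port range is commonly pinned down via the `GLOBUS_TCP_PORT_RANGE` environment variable, with matching firewall rules; the firewalls on both ends must permit traffic in that range. A sketch, assuming the example range above:

```
# Restrict Globus to a known TCP port range (set in the Globus environment)
export GLOBUS_TCP_PORT_RANGE=40000,41000

# Firewall sketch (iptables) for the head node
iptables -A INPUT -p tcp --dport 2119        -j ACCEPT   # GRAM gatekeeper
iptables -A INPUT -p tcp --dport 40000:41000 -j ACCEPT   # restricted port range
```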
For the user to be able to run jobs on the remote site, the user must have some form of account on the remote site. The user's grid identity is passed along from the submit host. A file called the grid mapfile on the gatekeeper maps the user's grid identity to a remote account. While most sites do not permit account sharing, it is possible to map multiple user certificates to the same account.
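The grid mapfile is a plain text file, by default `/etc/grid-security/grid-mapfile`, with one quoted distinguished name (DN) and a local account per line. The DNs and account name below are hypothetical:

```
"/DC=org/DC=example/OU=People/CN=Jane Doe"  jdoe
"/DC=org/DC=example/OU=People/CN=John Roe"  jdoe
```

Here both certificates map to the same local account `jdoe`, illustrating the multiple-certificate mapping mentioned above.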
The gatekeeper is the interface through which jobs are submitted to the remote cluster's resource manager. A resource manager is a scheduling system like PBS, Maui, LSF, FBSNG or HTCondor that queues tasks and allocates worker nodes. The worker nodes (WN) in the remote cluster may lack outside connectivity and often use private IP addresses exclusively. The Globus Toolkit requires a shared filesystem to properly stage files between the head node and the worker nodes.
The shared filesystem requirement is imposed by Globus. Pegasus is capable of supporting advanced site layouts that do not require a shared filesystem. Please contact us for details, should you require such a setup.
To stage data between external sites for the job, it is recommended to enable a GridFTP server. If a shared networked filesystem is involved, the GridFTP server should be located as close to the file server as possible. The GridFTP server requires TCP port 2811 for the control channel, and a limited port range for data channels, in this example the TCP ports from 40000 to 41000. The GridFTP server requires a host certificate, the signing CA chain and policy files, a stable time source, and a grid mapfile that maps between a user's grid identity and the user's account on the remote site.
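Transfers against such a server are typically driven with `globus-url-copy`. The hostnames and paths below are hypothetical:

```
# Stage a local file to the remote site (control channel on port 2811)
globus-url-copy file:///local/data/input.dat \
    gsiftp://hn.example.edu/scratch/jdoe/input.dat

# Third-party transfer directly between two GridFTP servers
globus-url-copy gsiftp://siteA.example.edu/data/a.dat \
    gsiftp://siteB.example.edu/data/a.dat
```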
The GridFTP server is often installed on the head node alongside the gatekeeper, so that they can share the grid mapfile, CA certificate chains and other setup. For performance reasons, however, it is recommended to give the GridFTP server its own machine.
An example site catalog entry for a GRAM-enabled site looks as follows:
<sitecatalog xmlns="http://pegasus.isi.edu/schema/sitecatalog"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://pegasus.isi.edu/schema/sitecatalog http://pegasus.isi.edu/schema/sc-4.0.xsd"
             version="4.0">
    <site handle="Trestles" arch="x86_64" os="LINUX">
        <grid type="gt5" contact="trestles.sdsc.edu/jobmanager-fork" scheduler="Fork" jobtype="auxillary"/>
        <grid type="gt5" contact="trestles.sdsc.edu/jobmanager-pbs" scheduler="unknown" jobtype="compute"/>
        <directory type="shared-scratch" path="/oasis/projects/nsf/USERNAME">
            <file-server operation="all" url="gsiftp://trestles-dm1.sdsc.edu/oasis/projects/nsf/USERNAME"/>
        </directory>
        <!-- specify the path to a PEGASUS WORKER INSTALL on the site -->
        <profile namespace="env" key="PEGASUS_HOME" >/path/to/PEGASUS/INSTALL</profile>
    </site>
</sitecatalog>