NEW FEATURES
- pegasus-lite local wrapper is now used for local universe jobs in sharedfs mode also, if Condor file IO is detected.
Note that remote_initialdir is not implemented consistently across universes in Condor; for the vanilla universe, Condor file IO does not transfer files to the remote_initialdir.
- task summary queries were reimplemented
The task summary queries ( that list the number of successful and failed tasks ) in the Stampede Statistics API were reimplemented for better performance.
- pegasus-monitord sets PEGASUS_BIN_DIR while calling out to notification scripts.
PM-598 #716
- the default notification script can send out emails to multiple recipients.
- Support for new condor keys
Pegasus allows users to specify the following Condor keys as profiles in the condor namespace. The new keys were introduced in Condor 7.8.0:
request_cpus, request_memory, request_disk
PM-600 #718
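The new keys can be associated with a job as profiles in the condor namespace. A minimal DAX fragment might look like the following sketch (the job name and the values shown are illustrative, not from the release):

```xml
<job id="ID000001" namespace="example" name="analyze" version="1.0">
  <!-- Condor 7.8.0 resource request keys, passed through as condor profiles -->
  <profile namespace="condor" key="request_cpus">4</profile>
  <profile namespace="condor" key="request_memory">2048</profile>
  <profile namespace="condor" key="request_disk">1048576</profile>
</job>
```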
BUGS FIXED
- pegasus-kickstart does not collect procs and tasks statistics on kernels >= 3.0
When kickstart is executed on a Linux kernel >= 3.0, logic in the machine extensions prevented proc statistics gathering, on the assumption that the API might have changed (as it did between 2.4 and 2.6). This is now fixed, as the interface is supported for kernels 3.0 through 3.2.
PM-571 #689
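The relaxed version check can be sketched as follows. This is a hypothetical illustration (kickstart itself is not written in Python, and the function name is invented), showing the idea of accepting the stable /proc interface from 2.6 through 3.2 instead of rejecting everything >= 3.0:

```python
import re

def supports_proc_stats(release):
    # Hypothetical sketch: parse a kernel release string like
    # "3.2.0-23-generic" and accept the range where the /proc
    # statistics interface is known to be unchanged.
    m = re.match(r"(\d+)\.(\d+)", release)
    if not m:
        return False
    version = (int(m.group(1)), int(m.group(2)))
    return (2, 6) <= version <= (3, 2)
```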
- scp transfer mode did not create remote directories
When transferring to a scp endpoint, pegasus-transfer failed unless the remote directory already existed. This broke deep LFNs and staging to output sites. This is now fixed.
PM-579 #697
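The fix amounts to creating the destination directory on the scp endpoint before copying. A minimal sketch of that pattern (function names and the use of plain ssh/scp here are illustrative assumptions, not the actual pegasus-transfer code):

```python
import subprocess

def ensure_remote_dir(host, remote_dir):
    # Create the destination directory (and parents) on the scp
    # endpoint so deep LFNs and new output directories do not fail.
    subprocess.check_call(["ssh", host, "mkdir", "-p", remote_dir])

def scp_put(local_path, host, remote_path):
    # Make sure the parent directory exists, then copy the file.
    ensure_remote_dir(host, remote_path.rsplit("/", 1)[0])
    subprocess.check_call(["scp", local_path, "%s:%s" % (host, remote_path)])
```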
- Incorrect resolution of PEGASUS_HOME path in the site catalog for remote sites in some cases
If a user specified a path to PEGASUS_HOME for remote sites in the site catalog and the directory also existed on the submit machine, the path was resolved locally. Hence if the local directory was a symlink, the symlink was resolved and that path was used for the remote site’s PEGASUS_HOME.
PM-577 #695
- pegasus-analyzer did not work correctly against the MySQL Stampede DB
pegasus-analyzer had problems querying the MySQL stampede database because of a query aliasing error in the underlying API. This is now fixed.
PM-580 #698
- Wrong timezone offsets for ISO timestamps
The Pegasus Python library was generating the wrong time zone offset for ISO 8601 timestamps. This was because of an underlying bug in Python, where %z does not work correctly across all platforms.
PM-576 #694
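A portable way to avoid %z is to compute the offset from the difference between local time and UTC. The function below is a sketch of that approach (the function name is illustrative, not the library's actual API):

```python
import time
from datetime import datetime

def iso8601_local(ts=None):
    # Compute the UTC offset from localtime vs UTC instead of relying
    # on strftime("%z"), which behaves inconsistently across platforms.
    ts = time.time() if ts is None else ts
    local = datetime.fromtimestamp(ts)
    utc = datetime.utcfromtimestamp(ts)
    total_minutes = int(round((local - utc).total_seconds() / 60))
    sign = "+" if total_minutes >= 0 else "-"
    hh, mm = divmod(abs(total_minutes), 60)
    return local.strftime("%Y-%m-%dT%H:%M:%S") + "%s%02d:%02d" % (sign, hh, mm)
```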
- pegasus-analyzer warns about “exitcode not an integer!”
pegasus-analyzer threw a warning if a long value for an exitcode was detected.
PM-584 #702
- Perl DAX generator uses ‘out’ instead of ‘output’ for stderr and stdout linkage
The Perl DAX generator API generated the wrong link attribute for stdout and stderr files: instead of link = output it generated link = out.
PM-585 #703
- Updated Stampede Queries to handle both GRID_SUBMIT and GLOBUS_SUBMIT events.
Two of the queries ( get_job_statistics and get_job_state ) were broken for CondorG workflows when operating against a MySQL database backend. For such workflows, both GRID_SUBMIT and GLOBUS_SUBMIT can be logged for the jobs, and some of the subqueries were breaking against MySQL, as MySQL has stricter checks on subqueries returning a single value.
- Support for DAGMAN_COPY_TO_SPOOL Condor configuration parameter
Condor has a setting DAGMAN_COPY_TO_SPOOL that, if set to true, results in Condor copying the DAGMan binary to the spool directory before launching the workflow. In the case of Pegasus, condor_dagman is launched by a wrapper called pegasus-dagman. Because of this, pegasus-dagman was copied to the Condor spool directory in lieu of the condor_dagman binary.
This is now fixed: pegasus-dagman copies the condor_dagman binary to the submit directory for the workflow before launching the workflow.
More details at PM-595 #713
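The copy step described above can be sketched as follows. This is a hypothetical illustration of the behavior (function name invented, not the actual pegasus-dagman code):

```python
import os
import shutil

def stage_dagman(submit_dir, condor_dagman_path):
    # Sketch: copy the real condor_dagman binary into the workflow
    # submit directory, so that DAGMAN_COPY_TO_SPOOL=True spools the
    # right binary instead of the pegasus-dagman wrapper.
    dest = os.path.join(submit_dir, os.path.basename(condor_dagman_path))
    shutil.copy2(condor_dagman_path, dest)
    return dest
```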