This is a minor release that fixes some bugs and adds minor enhancements.
Improvements:
- pegasus lite local wrapper is now used for local universe jobs in shared fs mode when Condor IO is detected
Condor does not implement remote_initialdir consistently across universes. For the vanilla universe, Condor File IO does not transfer the file to the remote_initialdir. The pegasus lite local wrapper now takes care of this.
- task summary queries were reimplemented
The task summary queries (that list the number of successful and failed tasks) in the Stampede Statistics API were reimplemented for better performance.
- pegasus-monitord sets PEGASUS_BIN_DIR when calling out to notification scripts
This ensures a consistent environment for the notification scripts irrespective of how the workflow is submitted.
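For illustration only (this is not a script shipped with Pegasus), a custom notification script can rely on PEGASUS_BIN_DIR to locate tools from the same Pegasus installation that ran monitord; the call to pegasus-version below is just an example:

    #!/usr/bin/env python
    # hypothetical notification script: pegasus-monitord now exports
    # PEGASUS_BIN_DIR, so Pegasus tools can be located without
    # hard-coding an installation path
    import os
    import subprocess
    import sys

    bin_dir = os.environ.get("PEGASUS_BIN_DIR")
    if not bin_dir:
        sys.exit("PEGASUS_BIN_DIR not set in the environment")

    subprocess.call([os.path.join(bin_dir, "pegasus-version")])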
- The default notification script can send out emails to multiple recipients.
- Support for new Condor keys
Pegasus allows users to specify the following Condor keys as profiles in the condor namespace: request_cpus, request_memory and request_disk. These keys were introduced in Condor 7.8.0.
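As an illustration, assuming the Python DAX3 API is used to generate the workflow (the job name and resource values below are made up), the new keys can be attached to a job as condor profiles:

    from Pegasus.DAX3 import ADAG, Job, Profile, Namespace

    dax = ADAG("example")
    job = Job(name="analyze")
    # the Condor 7.8.0 resource request keys, passed as condor profiles
    job.addProfile(Profile(Namespace.CONDOR, "request_cpus", "4"))
    job.addProfile(Profile(Namespace.CONDOR, "request_memory", "2048"))
    job.addProfile(Profile(Namespace.CONDOR, "request_disk", "1048576"))
    dax.addJob(job)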
- Update on SQLAlchemy and pysqlite bundled with Pegasus
The SQLAlchemy version bundled with Pegasus is now 0.7.6. pysqlite is only built with Pegasus for RHEL 5. For all other platforms, Pegasus relies on the sqlite version bundled with the Python installation.
Bugs Fixed:
- pegasus-kickstart does not collect procs and tasks statistics on kernels >= 3.0
When kickstart is executed on a Linux kernel >= 3.0, logic in the machine extensions prevented the proc statistics gathering, on the assumption that the API might have changed. The API did not change from 3.0 to 3.2, so proc statistics gathering has been enabled on these kernels.
- scp transfer mode did not create remote directories
When transferring to an scp endpoint, pegasus-transfer failed unless the remote directory already existed. This broke deep LFNs and staging to output sites. This is now fixed.
- Incorrect resolution of PEGASUS_HOME path in the site catalog for remote sites in some cases
If a user specified a path to PEGASUS_HOME for remote sites in the site catalog and the directory also existed on the submit machine, the path was resolved locally. Hence, if the local directory was a symlink, the symlink was resolved and that path was used for the remote site’s PEGASUS_HOME.
- pegasus-analyzer did not work correctly against the MySQL Stampede DB
pegasus-analyzer had problems querying the MySQL stampede database because of a query aliasing error in the underlying API. This is now fixed.
- Wrong timezone offsets for ISO timestamps
The Pegasus python library was generating the wrong time zone offset for ISO 8601 time stamps. This was because of an underlying bug in python where %z does not work correctly across all platforms.
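The general workaround pattern (a minimal sketch, not the exact code used in the Pegasus library) is to compute the UTC offset explicitly instead of relying on strftime's %z:

    import time

    def iso8601_local(ts=None):
        # format a local timestamp as ISO 8601 with an explicit numeric
        # UTC offset, avoiding the platform-dependent %z directive
        if ts is None:
            ts = time.time()
        lt = time.localtime(ts)
        offset = -(time.altzone if lt.tm_isdst else time.timezone)
        sign = "+" if offset >= 0 else "-"
        offset = abs(offset)
        return time.strftime("%Y-%m-%dT%H:%M:%S", lt) + "%s%02d:%02d" % (
            sign, offset // 3600, (offset % 3600) // 60)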
- pegasus-analyzer warns about “exitcode not an integer!”
pegasus-analyzer threw a warning if a long value for an exitcode was detected.
- Perl DAX generator uses ‘out’ instead of ‘output’ for stderr and stdout linkage
The Perl DAX generator API generated the wrong link attribute for stdout files. Instead of link = output it generated link = out.
- Updated Stampede queries to handle both GRID_SUBMIT and GLOBUS_SUBMIT events
Two of the queries (get_job_statistics and get_job_state) were broken for CondorG workflows when operating against a MySQL database backend. In that case, both GRID_SUBMIT and GLOBUS_SUBMIT events can be logged for a job, and some of the subqueries were breaking against MySQL, as MySQL has stricter checks on queries returning a single value.
- Support for the DAGMAN_COPY_TO_SPOOL Condor configuration parameter
Condor has a setting DAGMAN_COPY_TO_SPOOL that, if set to true, results in Condor copying the DAGMan binary to the spool directory before launching the workflow. In the case of Pegasus, condor_dagman is launched by a wrapper called pegasus-dagman. Because of this, pegasus-dagman was copied to the condor spool directory in lieu of the condor_dagman binary before being launched. This is now fixed: pegasus-dagman copies the condor_dagman binary to the submit directory for the workflow before launching the workflow. More details at