Pegasus 3.0.3 Released

Release Notes for PEGASUS 3.0.3
This is a minor release that fixes some bugs and includes minor
enhancements.

New Features

1) job statistics file has a num field now

   The job statistics file contains statistics about the Condor jobs in
   the workflow across retries. A new field, num, indicates the number
   of times the JOB_TERMINATED event is seen for a Condor job.

2) improvements to pegasus-monitord

   When using MySQL, users are no longer required to create the database
   with the 'latin1' character encoding. pegasus-monitord now
   automatically creates all tables using the 'latin1' encoding.

   When using MySQL, the database engine used by pegasus-monitord is now
   set to 'InnoDB'. This prevents certain database errors and allows for
   improved performance.
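
   As an illustration only, the new behavior is equivalent to creating
   each table with an explicit engine and charset in MySQL DDL; the
   table and column names below are placeholders, not the actual
   Stampede schema:

   ```sql
   CREATE TABLE example_event (
       event_id INT AUTO_INCREMENT PRIMARY KEY,
       payload  VARCHAR(255)
   ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
   ```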

3) inherited-rc-files option for pegasus-plan

   pegasus-plan has a new option --inherited-rc-files, which is used in
   hierarchical workflows to pass the file locations in the parent
   workflow's DAX to planner instances working on a subdax job.
   Locations passed via this option have a lower priority than the
   locations of files mentioned in the DAX.
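
   As a sketch, a planner instance for a sub-DAX could be handed the
   parent's locations like this; the file names are placeholders, and in
   practice Pegasus passes this option internally for subdax jobs:

   ```
   pegasus-plan --dax subworkflow.dax \
                --inherited-rc-files parent-cache.rc
   ```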

4) Turning off priority assignments to condor jobs

   By default, Pegasus adds a Condor priority to each job based on the
   level of the workflow at which the job sits. The Boolean property
   pegasus.job.priority.assign can be set to false to turn off the
   default assignment of priorities.
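
   To turn the default assignment off, the property can be set in the
   properties file; a minimal sketch:

   ```
   # pegasus.properties: disable level-based Condor priority assignment
   pegasus.job.priority.assign = false
   ```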

Bugs Fixed

1) Fixed a bug in the code handling SIGUSR1 and SIGUSR2 that caused 
   pegasus-monitord to abort due to an out-of-bounds condition.

2) Fixed Python 2.4 compatibility issue that caused pegasus-monitord to
   abort when receiving a SIGUSR1 or SIGUSR2 to change its debugging level.
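
   The signal-driven debug-level mechanism described in the two fixes
   above can be sketched as follows. This is an illustration of the
   idea, not pegasus-monitord's actual code; the names LEVELS, current,
   _more_verbose, and _less_verbose are hypothetical.

   ```python
   import logging
   import signal

   # Sketch: SIGUSR1 raises verbosity, SIGUSR2 lowers it, and both
   # handlers clamp the index so it can never go out of bounds (the
   # kind of bounds check the 3.0.3 fix adds).
   LEVELS = [logging.WARNING, logging.INFO, logging.DEBUG]
   log = logging.getLogger("monitord-sketch")
   current = 0

   def _more_verbose(signum, frame):
       global current
       current = min(current + 1, len(LEVELS) - 1)  # clamp at the top
       log.setLevel(LEVELS[current])

   def _less_verbose(signum, frame):
       global current
       current = max(current - 1, 0)                # clamp at the bottom
       log.setLevel(LEVELS[current])

   signal.signal(signal.SIGUSR1, _more_verbose)
   signal.signal(signal.SIGUSR2, _less_verbose)
   ```

   A user can then raise the debug level of the running process with
   `kill -USR1 <pid>` and lower it with `kill -USR2 <pid>`.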

3) pegasus-transfer failed on scp transfers if the destination URL was a
   file URL. This is now fixed.

4) pegasus-transfer failed on scp transfers if the destination host was
   not in the user's known_hosts file. This is now fixed.

5) Fixed a potential stack overflow in pegasus-plan that could occur
   while calling out to transformation selectors that return more than
   one entry.

6) Fixed an issue where destination file URLs were not correctly
   replaced with the symlink protocol scheme when the destination site
   had a file server (URL prefix file:///).

Release Notes for PEGASUS 3.0.2
This is a minor release that fixes some bugs and includes minor
enhancements.

New Features

1) New Pegasus Properties for pegasus-monitord daemon

   The pegasus-monitord daemon is launched by pegasus-run while
   submitting the workflow. By default it parses the Condor logs for the
   workflows and populates the events into a SQLite database in the
   workflow submit directory. Two new properties control this behavior:

   - A Boolean property indicating whether to parse the logs and
     generate log events or not.

   pegasus.monitord.output - This property can be used to specify the
   destination for the log events generated by pegasus-monitord.
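
   As a sketch, the destination could be redirected via the properties
   file; the connection URL below is a placeholder, not a recommended
   value:

   ```
   # pegasus.properties: route monitord events to a placeholder URL
   pegasus.monitord.output = mysql://user:password@localhost:3306/stampede
   ```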

2) Improvements to pegasus-monitord

   pegasus-monitord now batches events before populating them into the
   Stampede backend.

3) New entries in braindump file

   The braindump file generated in the submit directory has two new
   entries:

   properties - path to the properties file
   condor_log - path to the condor log for the workflow
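
   As an illustration, the new entries appear as key-value lines in the
   braindump file; the paths below are placeholders, not real values:

   ```
   properties /path/to/submit-dir/pegasus.properties
   condor_log /path/to/submit-dir/workflow.log
   ```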

4) pegasus-transfer now supports FTP transfers in unauthenticated mode.

Bugs Fixed

1) Failure of rescue DAGs if the submit directory is on NFS.

   Pegasus creates a symlink in the submit directory that points to the
   Condor log file for the workflow in /tmp. If the workflow failed and
   the submit directory was on NFS, pegasus-run on a rescue would take a
   backup of the symlink file in the submit directory. This caused the
   workflow to fail on resubmission, as the Condor log then pointed to a
   file in the submit directory that was on NFS.

   pegasus-submit-dag was fixed to copy the contents of the symlinked
   log while rotating, instead of copying just the symlink.
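
   The fix can be sketched as follows; this is an illustration of the
   behavior, not Pegasus's actual code, and the function name rotate_log
   is hypothetical.

   ```python
   import shutil

   def rotate_log(link_path: str, backup_path: str) -> None:
       """Back up a (possibly symlinked) Condor log by copying its contents.

       Sketch of the pegasus-submit-dag fix: shutil.copy follows
       symlinks, so the backup is a regular file holding the log's
       bytes, while the original symlink keeps pointing at the live
       log in /tmp instead of a file on the NFS-mounted submit dir.
       """
       shutil.copy(link_path, backup_path)
   ```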