Pegasus 5.0.1 Released

We are happy to announce the release of Pegasus 5.0.1, which is a minor bug fix release for Pegasus 5.0.

We invite our users to give it a try.

This release features improvements to the Pegasus Python API, including the ability to statically visualize both the abstract and the generated executable workflows. It also improves support for DECAF, including the ability to execute clustered jobs in a workflow using DECAF. Data access in PegasusLite jobs has been improved for the case where the data resides on the local site and the job runs on a site with the “auxiliary.local” profile set to true. A new submit mapper called “Named” lets users specify the sub directory in which a job’s submit files are placed. The release also includes updated support for submitting jobs to HPC clusters using HubZero Distribute, and a new pegasus.mode called “debug” that enables verbose logging throughout the Pegasus stack.
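
As an illustration of the new “debug” mode and the “Named” submit directory mapper, here is a minimal sketch using the 5.0 Python API. The pegasus.mode and pegasus.dir.submit.mapper property names and the relative.submit.dir profile key are taken from the items listed below; the workflow, job, and directory names are made up for the example, so adjust them to your own setup.

    #!/usr/bin/env python3
    from Pegasus.api import Job, Namespace, Properties, Workflow

    # Turn on the new "debug" mode for verbose logging across the Pegasus stack
    # and select the new "Named" submit directory mapper (values assumed from
    # the release notes below; adjust for your deployment).
    props = Properties()
    props["pegasus.mode"] = "debug"
    props["pegasus.dir.submit.mapper"] = "Named"
    props.write()  # writes pegasus.properties in the current directory

    # Place a job's submit files into a named sub directory by setting the
    # relative.submit.dir profile in the pegasus namespace (see PM-1757).
    wf = Workflow("example")
    job = Job("preprocess").add_args("-i", "input.txt", "-o", "output.txt")
    job.add_profiles(Namespace.PEGASUS, key="relative.submit.dir", value="preprocess-jobs")
    wf.add_jobs(job)

    # Plan and submit as usual; with pegasus.mode set to debug, the planner
    # and monitoring tools emit verbose logs.
    # wf.plan(submit=True)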

The release can be downloaded from:
https://pegasus.isi.edu/downloads

JIRA items

An exhaustive list of features, improvements, and bug fixes can be found below.

New Features

  • [PM-1726] – Update support for HubZero Distribute
  • [PM-1751] – Named Submit Directory Mapper
  • [PM-1798] – instead of the workflow having explicit data flow jobs, get pegasus to automatically cluster jobs to a decaf representation
  • [PM-1753] – add Workflow.get_status()
  • [PM-1767] – remove the default arguments, output_sites and cleanup in SubWorkflow.add_planner_args()
  • [PM-1786] – update usage of threading.Thread.isAlive() to be is_alive() in python scripts
  • [PM-1788] – Add configuration documentation for hierarchical workflows
  • [PM-1429] – Introduce PEGASUS_ENV variable to define mode of workflow i.e. development, production, etc
  • [PM-1651] – Add more profile keys in the add_pegasus_profile
  • [PM-1672] – override add_args for SubWorkflow so that args refer to planner args
  • [PM-1706] – sphinx has hardcoded versions
  • [PM-1730] – 5.0.1 Python Api improvements
  • [PM-1733] – expand on checkpointing documentation
  • [PM-1739] – expose panda job submissions similar to how we support BOSCO
  • [PM-1742] – allow a tc to be empty without the planner failing
  • [PM-1743] – allow catalogs to be embedded into workflow when workflow contains sub workflows
  • [PM-1747] – 031-montage-condor-io-jdbcrc failing
  • [PM-1768] – replace GRAM workflow tests with bosco
  • [PM-1769] – update tests since /nfs/ccg3 is gone now
  • [PM-1771] – pegasus-db-admin upgrade
  • [PM-1780] – Refactor Transfer Engine Code
  • [PM-1787] – auxiliary.local is not considered when triggering symlink in PegasusLite in nonsharedfs mode
  • [PM-1792] – decaf jobs over bosco
  • [PM-1794] – put in support for additional keys required by decaf
  • [PM-1796] – passing properties to be set for sub workflow jobs
  • [PM-1800] – enable inplace cleanup for hierarchical workflows
  • [PM-1802] – Add support for Debian 11
  • [PM-1803] – use force option when doing a docker rm of the container image
  • [PM-1810] – Extend debug capabilities for pegasus.mode
  • [PM-1811] – add pegasus-keg to worker package
  • [PM-1818] – new pegasus.mode debug
  • [PM-1723] – add_<namespace>_profile() should be plural
  • [PM-1731] – functions that take in File objects as input parameters should also accept strings for convenience
  • [PM-1744] – progress bar from wf.wait() should include “UNRDY” as shown in status output
  • [PM-1755] – catalog write location should be stored upon call to catalog.write()
  • [PM-1757] – add pegasus profile relative.submit.dir
  • [PM-1784] – Refactor Stagein Generator code out of Transfer Engine
  • [PM-1790] – extend site catalog schema to indicate shared file system access for a directory
  • [PM-1791] – update planner to parse sharedFileSystem attribute from site catalog
  • [PM-1797] – use logging over print statements
  • [PM-1804] – add verbose options for development mode

Bugs Fixed

  • [PM-1709] – the yaml handler in pegasus-graphviz needs to handle ‘checkpoint’ link type
  • [PM-1722] – Job node_label attribute is not identified by the planner
  • [PM-1725] – nodeLabel for a job needs to be parsed in yaml handler if it is given
  • [PM-1736] – Pegasus pollutes the job env when getenv=true
  • [PM-1737] – monitord fails on divide by 0 error while computing avg cpu utilization
  • [PM-1745] – time.txt in stats is misformatted
  • [PM-1746] – jobs aborted by dagman, but with kickstart exitcode as 0 are not marked as failed job
  • [PM-1748] – planner fails with NPE on empty workflow
  • [PM-1750] – ensemble mgr workflow priorities need to be reversed
  • [PM-1752] – fix checkpoint.time in add_pegasus_profile
  • [PM-1754] – pegasus-db-admin fails to upgrade database
  • [PM-1761] – pegasus-analyzer showing “failed to send files” error when root cause is exec format error
  • [PM-1762] – pegasus-analyzer showing no error at all when workflow failed based on status output
  • [PM-1764] – fix pegasus-analyzer output typo
  • [PM-1765] – for SubWorkflow jobs, the planner argument, --output-sites, isn’t being set
  • [PM-1766] – for SubWorkflow jobs, the planner argument, --force, isn’t being set
  • [PM-1770] – 041-jdbcrc-performance failing
  • [PM-1772] – db upgrade leaves transient tables
  • [PM-1777] – pegasus-graphviz producing incorrect dot file when redundant edges removed
  • [PM-1779] – Stage out job executed on local instead of remote site (donut)
  • [PM-1783] – bypass input staging in nonsharedfs mode does not work for file URL and auxiliary.local set
  • [PM-1785] – hostnames missing from elasticsearch job data
  • [PM-1789] – Scratch dir GET/PUT operations get overridden
  • [PM-1795] – Output Mapper in conjunction with data dependencies between sub workflow jobs
  • [PM-1799] – json schema validation fails for selector profiles
  • [PM-1820] – Deserializing a YAML transformation file always sets the os.type to linux