1. Pegasus 5.1.x Series
1.1. Pegasus 5.1.2
Release Date: Feb 3rd, 2026
We are happy to announce the release of Pegasus 5.1.2. It is a minor release in the 5.1 branch. We invite our users to give it a try.
The release can be downloaded from: https://pegasus.isi.edu/downloads
1.1.1. Highlights of the Release
Move to Condor File IO for OSDF transfers.
OSDF transfers are now always delegated to HTCondor to manage using HTCondor file IO, especially when Bypass Input File Staging is turned on. This is applicable to both the condorio and nonsharedfs data configurations.
More details can be found in the documentation.
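As an illustrative sketch only, the relevant configuration can be expressed with the Python API's Properties helper; the keys used are the standard Pegasus properties for the data configuration and for bypass input file staging, while the values shown are placeholders to adapt to your setup.

from Pegasus.api import Properties

# Sketch: select the condorio data configuration and turn on bypass
# input file staging, the setting mentioned above under which OSDF
# transfers are delegated to HTCondor file IO.
props = Properties()
props["pegasus.data.configuration"] = "condorio"        # or "nonsharedfs"
props["pegasus.transfer.bypass.input.staging"] = "true"
props.write()  # writes ./pegasus.properties for pegasus-plan to pick up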
Support for Flux
This release of Pegasus adds support for running workflows on HPC resources managed by the Flux resource manager. This support relies on changes to HTCondor that will be made available in the upcoming HTCondor 25.7.0 release, scheduled for March 2026.
Details of mapping Pegasus resource profiles to Flux parameters can be found here.
This repository has useful scripts for deploying Pegasus in user mode on Flux systems.
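As an illustration of the profile side of this, the sketch below attaches resource requirements to a job with the Python API; the job name and the particular profile keys shown (cores, memory, runtime) are placeholder assumptions, and the authoritative mapping of Pegasus resource profiles to Flux parameters is the one referenced above.

from Pegasus.api import Job

# Hypothetical job destined for a Flux-managed HPC resource; the Pegasus
# resource profiles below are translated to the corresponding Flux
# submission parameters per the documented mapping.
analyze = Job("analyze")
analyze.add_pegasus_profile(cores=4, memory=2048, runtime=3600)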
Modified Pegasus versioning scheme to use Semantic Versioning Scheme v2 #2126
Support for Python 3.14
1.1.2. New Features and Improvements
avoid parsing sub workflows into memory when parsing the top level workflow that includes them #2148
flux support #2143
Move OSDF transfers to be via condor file transfers in Pegasus Lite instead of relying on pegasus-transfer #2141
pick user provided env script for PegasusLite from the site where the job runs #2136
limit the number of pegasus-monitord launches in pegasus-dagman #2134
Add support for Python 3.14 #2128
Modify Pegasus versioning scheme to use Semantic Versioning Scheme v2 #2126
enable condorio support for bosco/ssh style job submissions for remote HPC clusters #2121
use job classad variables to shorten paths in transfer_input_files key in the job submit directories #2120
Explore condor_dag_checker - something we want to use as part of planning? #2116
update sqlite jar to latest stable 3.49.1.0 or higher #2109
[PM-1833] pmc cpuinfo invalid detection #1946
[PM-1098] encrypt the credentials when transferred with jobs #1212
[PM-1092] Ask Condor team to propagate glite errors in gridmanager #1206
1.1.3. Bugs Fixed
enforce maximum document parsing size for yaml docs to 2047 MB #2152
cpu attributes are not included in the job composite event #2150
ensure user JAVA_HEAPMAX and JAVA_HEAPMIN values are propagated to the planner invocations for sub workflows #2147
bypass in condorio mode gets incorrectly triggered if a directory path is specified for a local file #2142
cleanup jobs running remotely in nonsharedfs get associated with a container #2137
CLI tools pollute PYTHONPATH #2135
condor quoting is not triggered for arguments for glite and ssh style jobs #2132
monitord fails to parse job.out file if location record is malformed #2131
pegasus-init remote cluster option creates incorrect paths for local site #2125
worker package staging broken in sharedfs for create dir job (if set to run remotely) #2124
monitord overwrites transfer_attempts records found in the job.out file #2123
pegasus-analyzer --debug-job option broken #2122
pegasus-mpi-cluster: cpuinfo validation fails on hybrid CPU architectures #2119
1.1.4. Merged Pull Requests
1.2. Pegasus 5.1.1 and 5.1.0
Release Date: May 29, 2025
We are happy to announce the release of Pegasus 5.1.1. It is a minor release on top of Pegasus 5.1.0, which is a major release of Pegasus. It also includes all features and bug fixes from the 5.0 branch. We invite our users to give it a try.
We recommend that users upgrade to 5.1.1 rather than 5.1.0, because 5.1.1 includes a fix for #2113.
The release can be downloaded from: https://pegasus.isi.edu/downloads
If you are an existing user, please carefully follow the upgrade instructions at https://pegasus.isi.edu/docs/5.1.0/user-guide/migration.html#migrating-from-pegasus-5-0-x-to-pegasus-5-1
1.2.1. Highlights of the Release
Refined Data Transfer Mechanisms for Containerized Jobs
PegasusLite now offers two distinct approaches for handling data transfers in containerized jobs. The shift to host-based transfers as the default aims to simplify workflows and minimize the overhead associated with customizing container images.
Host-Based Transfers (Default in 5.1.0): Input and output data are staged on the host operating system before launching the container. This method utilizes pre-installed data transfer tools on the host, reducing the need for additional configuration within the container.
Container-Based Transfers: Data transfers occur within the container prior to executing user code. This approach requires the container image to include necessary data transfer utilities like curl, ftp, or globus-online. Users preferring this method can set the property pegasus.transfer.container.onhost to false in their configuration files.
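For users preferring container-based transfers, a minimal sketch of setting this property through the Python API's Properties helper follows (the same key can equally be placed directly in a pegasus.properties file):

from Pegasus.api import Properties

# Sketch: opt out of the host-based default and perform transfers inside
# the container; the container image must then provide the transfer tools.
props = Properties()
props["pegasus.transfer.container.onhost"] = "false"
props.write()  # writes ./pegasus.properties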
More details can be found in the documentation.
Integration with HTCondor’s Container Universe
Pegasus 5.1.0 introduces support for HTCondor's container universe, which is useful in pure HTCondor environments such as PATh/OSPool, where container management is handled by HTCondor. This integration simplifies job submission and execution in environments where HTCondor's container universe is available.
This enhancement builds upon Pegasus’s initial container support introduced in version 4.8.0, reflecting ongoing efforts to improve compatibility and user experience.
More details can be found in the documentation.
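As a sketch of the catalog side, a container and a containerized transformation can be declared with the Python API as below; the image URL, site name, and executable path are placeholders, and whether the job is ultimately run under HTCondor's container universe is governed by your site and property configuration as described in the documentation.

from Pegasus.api import Container, Transformation, TransformationCatalog

tc = TransformationCatalog()

# Placeholder container image; in pure HTCondor environments such as
# PATh/OSPool, management of the container itself can be handed to
# HTCondor's container universe.
base = Container(
    "centos-base",
    Container.SINGULARITY,
    image="docker://example/centos-base:latest",
)
tc.add_containers(base)

# A containerized transformation that references the container above.
preprocess = Transformation(
    "preprocess",
    site="condorpool",
    pfn="/usr/bin/preprocess",
    is_stageable=False,
    container=base,
)
tc.add_transformations(preprocess)
tc.write()  # writes the transformation catalog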
The pegasus-status command line tool was rewritten in Python, removing Pegasus's Perl dependency. The new pegasus-status command has better support for showing the status of hierarchical workflows.
Improved determination of what site a job runs on. Starting with the Pegasus 5.1.0 release, PegasusLite-wrapped jobs send a location record that makes it possible to determine what resource a job ran on. The location record can be found toward the end of the job .out file as a Pegasus multipart record.
More details can be found in the documentation.
Please note that RPM packaging for the 5.1.x series is not compatible with the 5.0.x series. If you try to update an existing 5.0.x install, you will see an error similar to the trace below:
dnf update pegasus
...
Running transaction check
Transaction check succeeded.
Running transaction test
The downloaded packages were saved in cache until the next successful transaction.
You can remove cached packages by executing 'dnf clean packages'.
Error: Transaction test error:
file /usr/lib64/pegasus/python from install of pegasus-5.1.0-1.el8.x86_64 conflicts with file from package pegasus-5.0.9-1.el8.x86_64
The recommended approach is to first remove the 5.0.x install and then install 5.1.x.
1.2.2. New Features and Improvements
update deployment scenarios documentation to include Open OnDemand configuration #2112
Incorporate release notes into the documentation #2111
document use of containers on HPC clusters #2110
update planner worker package staging logic to default to rhel8 for linux and macos_14 for macos #2108
planner should try and visualize the workflow #2099
CVE-2025-21502 in RHEL 8 project dependency java-11-openjdk #2097
Failures when testing pegasus v5.1.0 release #2095
Improved determination of what resource the job ran on #2094
add support in pegasus-init for submission to remote SLURM cluster via SSH #2093
[PM-1999] pick automatically system executables on the local site on the basis of what PATH is set when running the planner #2091
[PM-1975] enable bypass staging of container when running in container universe #2081
[PM-1971] Update jars that have security vulnerabilities. #2077
[PM-1970] Storage constraint test failing #2076
[PM-1969] Schema doc should be for YAML and not XML #2075
[PM-1956] p-version timestamp issue #2069
[PM-1950] enable users to use container universe when running containerized jobs in pure condor environments #2063
[PM-1942] Support transfers for a job on the HOST OS instead of from within the container #2055
[PM-1901] converting pegasus-analyzer tool to API (like status) and add test suite #2014
[PM-1889] Escape command line args passed to Job #2002
[PM-1886] Recommendations for new Pegasus-Status CLI tool #1999
[PM-1882] replace the perl command line client with python client #1995
[PM-1881] remove dependency on pegasus-status command line tool, in the Workflow status function #1994
[PM-1775] add changes to the Python API to support adding of checkpoint files by pattern #1889
[PM-1589] Example workflows repository for 5.0 #1703
[PM-1974] update InPlace cleanup algorithm to delete container image from the user submit directory #2080
[PM-1967] Kickstart should kill a job gracefully before maxwalltime #2073
[PM-1955] Deprecate R API #2068
[PM-1934] source builds with multiple python3 installs #2047
[PM-1915] Convert pegasus-statistics tool to API and add test suite #2028
[PM-1914] update python workflow api to support arm64 #2027
[PM-1912] planner should keep in mind units when converting diskspace profiles #2025
[PM-1860] aws batch support needs to pick up credentials.conf correctly #1973
[PM-1819] 5.0.3 Python API Improvements #1932
[PM-1801] sqlalchemy warnings against 5.0 database #1915
[PM-1793] refactor pegasus-transfer so that it can be invoked directly from pegasus-checkpoint #1907
[PM-1782] incorporate pegasus arm builds into our build infrastructure #1896
[PM-1781] pegasus-keg sleep option #1895
[PM-1756] paths with spaces need to be escaped #1870
[PM-1690] the --json option added in pegasus-plan/run needs to be integrated into the python api client code #1804
1.2.3. Bugs Fixed
snakeyaml version 1.32, used by Jackson 2.14, has an in-built circuit breaker that breaks parsing for large yaml documents #2113
pegasus aws batch test failing because of urllib3 incompatibility #2107
Planner should catch deep lfn common name problem when using CEDAR #2106
support for condorio deep LFN broke after move to host OS based transfers #2105
when parsing container mount points in the TC, normalize the path to ensure any duplicate / in directory paths are removed #2103
pegasus-graphviz fails on a wf generated with java dax api that has no jobs #2101
[PM-1954] Importing six.moves raises ModuleNotFoundError on Python 3.12 #2067
[PM-1968] pegasus.gridstart allows values that are not documented #2074
[PM-1952] Local universe job fail with pegasus.transfer.bypass.input.staging = true #2065
[PM-1924] API submit still drops debug info to stdout #2037
[PM-1902] Pika problem on RHEL 9 - Bump the version to 1.2.1 also #2015
[PM-1923] download form does not send metrics to metric server #2036
1.2.4. Merged Pull Requests
PM-1914 gitlab py format fix #111 (zaiyan-alam)
fixed gitlab lint error #110 (zaiyan-alam)
PM-1914 Added aarch64 support, added tests, updated reference guide, … #109 (zaiyan-alam)
PM-1909 added KB conversion for request_disk, updated unit test #107 (zaiyan-alam)
PM-1874 bamboo test input dir #48 (zaiyan-alam)
PM-1874 removing tabs #46 (zaiyan-alam)
PM-1874 fixing broken bamboo tests #45 (zaiyan-alam)
PM-1874 test fixes and updates #44 (zaiyan-alam)
PM-1874 More test fixes and updates #43 (zaiyan-alam)
test updates and python3 fixes #42 (zaiyan-alam)
Test updates #41 (zaiyan-alam)
PM-1874 gsiftp test updates #40 (zaiyan-alam)