Hiperbatch and the Data Lookaside Facility

Hiperbatch is a z/OS® performance enhancement that works with the Data Lookaside Facility (DLF) to allow batch jobs and started tasks to share access to a data set, or data object. HCL Workload Automation for Z provides control information to DLF concerning which operations are allowed to connect to which DLF object, and which data sets are eligible for Hiperbatch.

Within HCL Workload Automation for Z, a data set that is eligible for Hiperbatch is treated as a resource. Using the RESOURCES panel, you can define data sets with the DLF attribute. The DLF exit sample, EQQDLFX, can then make the following decisions about DLF processing (illustrated in the sketch after this list):
  • Is this data set eligible for Hiperbatch?
  • Should this operation be connected to this data object?
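The two decisions can be pictured as a simple lookup against the scheduler's control information. The following Python sketch is purely illustrative: the data set names, job names, sets, and functions (is_eligible_for_hiperbatch, should_connect) are hypothetical stand-ins for the checks described above, not the actual EQQDLFX exit interface.

```python
# Conceptual sketch only; all names are hypothetical, not the EQQDLFX interface.
from dataclasses import dataclass


@dataclass(frozen=True)
class DlfRequest:
    """A connect request as the exit might see it (hypothetical fields)."""
    data_set: str   # name of the data set being opened
    job_name: str   # job (operation) asking to connect


# Stand-in for data sets defined with the DLF attribute on the RESOURCES panel.
dlf_eligible_data_sets = {"PROD.SALES.MASTER", "PROD.INVENTORY.DATA"}

# Stand-in for the scheduler's notification that a job will use Hiperbatch
# with a given data set.
scheduled_connections = {("DAILYRPT", "PROD.SALES.MASTER")}


def is_eligible_for_hiperbatch(request: DlfRequest) -> bool:
    """Decision 1: is this data set eligible for Hiperbatch?"""
    return request.data_set in dlf_eligible_data_sets


def should_connect(request: DlfRequest) -> bool:
    """Decision 2: should this operation be connected to this data object?"""
    return (is_eligible_for_hiperbatch(request)
            and (request.job_name, request.data_set) in scheduled_connections)


if __name__ == "__main__":
    req = DlfRequest(data_set="PROD.SALES.MASTER", job_name="DAILYRPT")
    print(should_connect(req))  # True: eligible and scheduled to use Hiperbatch
```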

HCL Workload Automation for Z issues enqueues on the job name and data set name to notify the DLF exit that the job to be scheduled will use Hiperbatch. When the job ends, HCL Workload Automation for Z checks whether the same data set will be used by the immediate successor operation or by any other ready operation. If so, HCL Workload Automation for Z does not purge the data object. Otherwise, HCL Workload Automation for Z initiates purge processing of the data object (that is, it removes the object from Hiperspace). For details about installing HCL Workload Automation for Z Hiperbatch support, see Customization and Tuning.
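The job-end purge decision follows the logic described above. The following Python sketch is a hedged illustration under the assumption that the scheduler simply asks whether any remaining user of the data set exists; the function names and return values are hypothetical and do not reflect the product's internal interfaces.

```python
# Conceptual sketch only; names are hypothetical, not product internals.

def data_object_still_needed(data_set: str,
                             successor_data_sets: set[str],
                             ready_operation_data_sets: set[str]) -> bool:
    """Keep the DLF object if the immediate successor operation or any
    other ready operation will use the same data set."""
    return (data_set in successor_data_sets
            or data_set in ready_operation_data_sets)


def on_job_end(data_set: str,
               successor_data_sets: set[str],
               ready_operation_data_sets: set[str]) -> str:
    """Decide whether to retain the data object or purge it from Hiperspace."""
    if data_object_still_needed(data_set, successor_data_sets,
                                ready_operation_data_sets):
        return "retain"   # a successor or ready operation will reuse the object
    return "purge"        # no remaining user: remove the object from Hiperspace


if __name__ == "__main__":
    # The immediate successor uses the same data set, so the object is retained.
    print(on_job_end("PROD.SALES.MASTER", {"PROD.SALES.MASTER"}, set()))
    # No successor or ready operation uses it, so purge processing starts.
    print(on_job_end("PROD.SALES.MASTER", set(), set()))
```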

Note: The controller can create DLF objects on any system in the controller's global resource serialization (GRS) ring, but operations that need to connect to the object must run on the same system as the controller.