Job definition

A job is an executable file, program, or command that is scheduled and launched by HCL Workload Automation. You can write job definitions in edit files and then add them to the HCL Workload Automation database with the composer program. You can include multiple job definitions in a single edit file.

When creating a job, you can define it in a folder. If no folder path is specified, the job definition is created in the current folder. By default, the current folder is the root (/) folder, but you can customize it to a different folder path. You can also use the composer rename command to move and rename, in batch mode, jobs that follow a naming convention, moving them to a different folder and using part of the job name to name the folder.
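
For example, a minimal sketch of an edit file that defines a job inside a folder (the workstation, folder, job, user, and file names are hypothetical):
$jobs
FTA1#ACCOUNTING/PAYABLES/GL_EXTRACT
 scriptname "/usr/acct/scripts/gl_extract.sh"
 streamlogon acct
 description "extract general ledger data"
 recovery stop
You could then add the definition to the database with a command such as composer add gl_jobs.txt, where gl_jobs.txt is the edit file shown above.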

Two different job types are available: the standard HCL Workload Automation job is a generic executable file, program, or command that you can run statically, while the job types with advanced options are predefined jobs that you can use to run specific tasks, either statically or dynamically, such as file transfer operations or integration with other databases.

The job types with advanced options run only on dynamic agents, pools, dynamic pools, and remote engines.

To define standard jobs in the composer command line, you use the script and docommand arguments; to define job types with advanced options, you use the task argument.

For more information about job types with advanced options, see Extending HCL Workload Automation capabilities.

For information about how to pass variables between jobs in the same job stream instance, see Passing variables between jobs.

Note: Starting from product version 9.4, Fix Pack 1, the composer command line uses REST APIs to create job definitions. This means that when you create a job using composer, new APIs are used which are not compatible with the APIs installed on masters at previous product versions. As a result, you cannot use composer at the version 9.4, Fix Pack 1 level to create a job definition on a master where a previous version of the product is installed.

Each job definition has the following format and arguments:

Syntax

$jobs
[[folder/]workstation#][folder/]jobname
   {scriptname filename  streamlogon username |
     docommand "command" streamlogon username |
     task job_definition }
   [description "description"]
   [tasktype tasktype]
   [interactive]

   [succoutputcond Condition_Name "Condition_Value"]
   [outputcond Condition_Name "Condition_Value"]


[recovery
   {stop
   [after [[folder/]workstation#][folder/]jobname]
   [abendprompt "text"]]
   |continue
   [after [[folder/]workstation#][folder/]jobname]
   [abendprompt "text"]]
   |rerun [same_workstation]
   [[repeatevery hhmm] [for number attempts]]
   [after [[folder/]workstation#][folder/]jobname]
   |[after [[folder/]workstation#][folder/]jobname]
   [abendprompt "text"]}

A job definition itself has no settings for dependencies; these must be added to the job when it is included in a job stream definition.

You can add or modify job definitions from within job stream definitions. Modifications to job definitions made in job stream definitions are reflected in the job definitions stored in the database. This means that if you modify the definition of job1 in job stream definition js1, and job1 is also used in job stream js2, the definition of job1 in js2 is modified accordingly.
Note: Mistyped keywords in job definitions lead to truncated job definitions being stored in the database. The mistyped keyword is considered extraneous to the job definition and is therefore interpreted as the job name of an additional job definition. Usually this misinterpretation also causes a syntax error or a non-existent job definition error for the additional job definition.
Special attention is required when an alias has been assigned to a job. You can use a different name to refer to a particular job instance within a job stream, but the alias must not conflict with the job name of another job in the same job stream. If a job definition is renamed, jobs having the same name as the job definition are renamed accordingly. The following examples show the behavior of jobs when the job definition name is modified:
Table 1. Examples: renaming the job definition
Original job definition names in the job stream:

SCHEDULE [folder/]WKS#/APPS/DEV/JS
:
[folder/]FTA1#/APPS/DEV1/A
[folder/]FTA1#/APPS/DEV1/B as C
END

Rename job definition: rename job A to D. Outcome:

SCHEDULE [folder/]WKS#/APPS/DEV/JS
:
[folder/]FTA1#/APPS/DEV1/D
[folder/]FTA1#/APPS/DEV1/B as C
END

Rename job definition: rename job B to D. Outcome:

SCHEDULE WKS#JS
:
FTA1#/APPS/DEV1/A
FTA1#/APPS/DEV1/D as C
END

Rename job definition: rename job /APPS/DEV1/A to C. Outcome: an error occurs when renaming job A to C because job C already exists as the alias for job B.
Note: Because a job in a job stream is identified only by its name, jobs with the same name require an alias even if their definitions have different workstations or folders.

Refer to section Job stream definition for information on how to write job stream definitions.

Arguments

[folder/]workstation#
Specifies the name of the workstation or workstation class on which the job runs. The default is the workstation specified for defaultws when starting the composer session.

For more information on how to start a composer session refer to Running the composer program.

The pound sign (#) is a required delimiter. If you specify a workstation class, it must match the workstation class of any job stream in which the job is included.

If you are defining a job that manages a workload broker job, specify the name of the workstation where the workload broker workstation is installed. Using the workload broker workstation, HCL Workload Automation can submit jobs in the dynamic workload broker environment using dynamic job submission.

[folder/]jobname
Specifies the name of the folder in which the job is defined and the job name. The job name must start with a letter, and can contain alphanumeric characters, dashes, and underscores. It can contain up to 40 characters. If you generally work from a specific folder, you can use the chfolder command to navigate to folders and sub-folders. The chfolder command changes the working directory or current folder, which is set to root ("/") by default, so that you can use relative folder paths when submitting commands. If no folder path is specified, the job definition is created in the current folder. If a relative path is specified, the path is relative to the current folder. See chfolder for more information about changing folder paths. See Folder definition for specifications about folder names.
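For example, a brief sketch of changing the current folder before defining jobs with relative paths (the folder name is hypothetical):
chfolder /ACCOUNTING/PAYABLES
A job defined afterwards without a folder path is then created in /ACCOUNTING/PAYABLES.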
scriptname filename

Specifies the name of the file the job runs. Use scriptname for UNIX® and Windows® jobs. For an executable file, enter the file name and any options and arguments. The length of filename plus the length of Success Condition (of the rccondsucc keyword) must not exceed 4095 characters. You can also use HCL Workload Automation parameters.

Use this argument to define standard HCL Workload Automation jobs.

See Using variables and parameters in job definitions for more information.

For Windows® jobs, include the file extensions. Universal Naming Convention (UNC) names are permitted. Do not specify files on mapped drives.

If you are defining a job that manages a workload broker job, specify the name of the workload broker job. Additionally, you can specify variables and the type of affinity that exists between the HCL Workload Automation job and the workload broker job, using the syntax outlined in the list below. To identify an affine job, use one of the following:
HCL Workload Automation job name
jobName [-var var1Name=var1Value,...,varNName=varNValue] [-twsAffinity jobname=twsJobName]
dynamic workload broker job ID
jobName [-var var1Name=var1Value,...,varNName=varNValue] [-affinity jobid=jobid]
dynamic workload broker job alias
jobName [-var var1Name=var1Value,...,varNName=varNValue] [-affinity alias=alias]
Refer to the HCL Workload Automation: Scheduling Workload Dynamically for detailed information.
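For instance, a sketch of a scriptname value that passes two variables to a workload broker job and binds it by alias (the job, variable, and alias names are hypothetical):
scriptname "brkjob1 -var env=PROD,region=EU -affinity alias=nightly_load"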
If the file path or the file name of the scriptname argument contains spaces, the entire string must be enclosed between "\" and \"" as shown below:
scriptname "\"C:\Program Files\tws\myscript.cmd\""

If special characters are included, other than slashes (/) and backslashes (\), the entire string must be enclosed in quotes (").

The job fails if the script specified in the scriptname option is not found or does not have execute permission. It abends if such a script is specified together with parameters.

docommand command
Specifies a command that the job runs. Enter a valid command and any options and arguments enclosed in double quotation marks ("). The length of command plus the length of Success Condition (of the rccondsucc keyword) must not exceed 4095 characters. You can also enter HCL Workload Automation parameters.

Use this argument to define standard HCL Workload Automation jobs.

The job abends if the file specified with the docommand option is not found or does not have execute permission.

See Using variables and parameters in job definitions for more information.
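
For example, a minimal sketch of a standard job that runs a command (the workstation, folder, job, and user names are hypothetical):
$jobs
CPU1#MYFOLDER/CHECK_SPACE
 docommand "df -h /opt"
 streamlogon opadm
 description "check available disk space"
 recovery stop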

task job_definition
Specifies the XML syntax for job types with advanced options and shadow jobs. The maximum length is 4095 characters.

To define standard job types, use the docommand or the scriptname arguments.

This argument applies only to workstations of the following types:
  • agent
  • pool
  • d-pool
  • rem-eng
The syntax of the job depends on the job type you define.

For a complete list of supported job types, see Creating advanced job definitions.

streamlogon username

The user name under which the job runs. This attribute is mandatory when scriptname or docommand is specified. The name can contain up to 47 characters. If the name contains special characters, it must be enclosed in double quotation marks ("). Specify a user that can log on to the workstation on which the job runs. You can also enter HCL Workload Automation parameters.

See Using variables and parameters in job definitions for more information.

For Windows® jobs, the user must also have a user definition.

See User definition for user requirements.

If you are defining a job that manages a dynamic workload broker job, specify the name of the user you used to install dynamic workload broker.

The job fails if the user specified in the streamlogon option does not exist.

description "description"
Provides a description of the job. The text must be enclosed between double quotation marks. The maximum number of bytes allowed is 120.
tasktype tasktype
Specifies the job type. It can have one of the following values:
UNIX®
For jobs that run on UNIX® platforms.
WINDOWS
For jobs that run on Windows® operating systems.
OTHER
For jobs that run on extended agents. Refer to HCL Workload Automation for Applications: User's Guide for information about customized task types for supported vendor acquired applications.
BROKER
For jobs that manage the lifecycle of a dynamic workload broker job. Refer to HCL Workload Automation: Scheduling Workload Dynamically for information about how to use dynamic workload broker.

When you define a job, HCL Workload Automation records the job type in the database without performing further checks. However, when the job is submitted, HCL Workload Automation checks the operating system on the target workstation and defines the job type accordingly.
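
For example, a minimal sketch of a job definition that explicitly declares its task type (the workstation, folder, job, and user names are hypothetical):
$jobs
FTA2#MYFOLDER/NIGHTLY_CLEANUP
 scriptname "/opt/scripts/cleanup.sh"
 streamlogon maestro
 description "nightly cleanup"
 tasktype UNIX
 recovery continue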

interactive

Specifies that the job runs interactively on your desktop. This feature is available only in Windows® environments. If you are defining a job that manages a dynamic workload broker job, ignore this argument.

succoutputcond Condition_Name "Condition_Value"
A condition that, when satisfied, qualifies a job as having completed successfully; the job is then set to the SUCC status. Use this condition when you need a successor job to start only after the successful completion of the predecessor job or job stream. Conditions can also be used to specify alternative flows in a job stream starting from a predecessor job or job stream. The successor job that runs is determined by which conditions the predecessor job or job stream satisfies.

When the predecessor is a job stream, the conditional dependency can only be a status condition, based on one of the following statuses: abend, succ, and suppr. The successor job runs when the predecessor job stream status satisfies the job status specified using these arguments. You can specify one status, a combination of statuses, or all statuses. When specifying more than one status or condition name, separate the statuses or names using the pipe (|) symbol.

You can specify any number of successful output conditions. The condition can be expressed as follows:
A return code
On fault-tolerant and dynamic agent workstations only, you can assign which return code signifies the successful completion of a job. Job return codes can be expressed in various ways:
Comparison expression
The syntax to specify a job return code is:
(RC operator operand)
RC
The RC keyword.
operator
Comparison operator. It can have the following values:
Table 2. Comparison operators
Example      Operator   Description
RC<value     <          Less than
RC<=value    <=         Less than or equal to
RC>value     >          Greater than
RC>=value    >=         Greater than or equal to
RC=value     =          Equal to
RC!=value    !=         Not equal to
RC<>value    <>         Not equal to
operand
An integer between -2147483647 and 2147483647.
For example, you can define a successful job as a job that ends with a return code less than or equal to 3 as follows:
succoutputcond UPDATE_OK "(RC <= 3)"
Boolean expression
Specifies a logical combination of comparison expressions. The syntax is:
comparison_expression operator comparison_expression
comparison_expression
The expression is evaluated from left to right. You can use parentheses to assign a priority to the expression evaluation.
operator
Logical operator. It can have the following values:
Table 3. Logical operators
Example             Operator   Result
expr_a and expr_b   And        TRUE if both expr_a and expr_b are TRUE.
expr_a or expr_b    Or         TRUE if either expr_a or expr_b is TRUE.
Not expr_a          Not        TRUE if expr_a is not TRUE.
For example, you can define a successful job as a job that ends with a return code less than or equal to 3 or with a return code not equal to 5, and less than 10 as follows:
succoutputcond "(RC<=3) OR ((RC!=5) AND (RC<10))"
A job state
On fault-tolerant and dynamic agent workstations only, you can assign which job state signifies the successful completion of a job.
An output variable
On dynamic agent workstations only, qualify a job as having completed successfully using output variables.
  • You can set a success condition or other condition for the job by analyzing the job properties.
    For example, for a file transfer job, enter the following expression if you want to qualify the job as successful when the size of the transferred file is greater than zero:
    
    ${this.File.1.Size}>0
  • You can set a success or other condition for the job by analyzing the job properties or the job output of another job in the same job stream.
    For example, for a file transfer job, enter the following expression if you want to qualify the job as successful when the number of uploaded files is the same as the number of files downloaded by another job, named DOWNLOAD, in the same job stream:
    
    ${this.NumberOfTransferredFiles}=
    ${job.DOWNLOAD.NumberOfTransferredFiles}
  • All XPath (XML Path Language) functions and expressions are supported for the above conditions in the succoutputcond field:
    • String comparisons (contains, starts-with, matches, and so on)
    • String manipulations (concat, substring, uppercase, and so on)
    • Numeric comparison (=, !=, >, and so on)
    • Functions on numeric values (abs, floor, round, and so on)
    • Operators on numeric values (add, sum, div, and so on)
    • Boolean operators
Content in the job log
On dynamic agent workstations only, you can consider a job successful by analyzing the content of the job log.
You can set a success or unsuccessful condition for the job by analyzing the job output. To analyze the job output, you must check the this.stdlist variable. For example, you enter the following expression:

contains(${this.stdlist},"error")
The condition is satisfied if the word "error" is contained in the job output.
outputcond Condition_Name "Condition_Value"
An output condition that, when satisfied, determines which successor job runs. The condition is expressed as Condition_Name "Condition_Value". The format for the condition expression is the same as that for the succoutputcond conditions. For example, to create conditions that signify that the predecessor job has completed with errors, you can define output conditions as follows (a complete sketch follows these examples):
  • outputcond STATUS_ERR1 "RC=1" to create a condition named STATUS_ERR1 that signifies that if the predecessor job completes with return code = 1, then the job completed with errors.
  • outputcond BACKUP_FLOW "RC != 5 and RC > 2" to create a condition named BACKUP_FLOW. If the predecessor job satisfies the condition then the successor job connected to the predecessor with this conditional dependency runs.
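The following is a minimal sketch that puts both keywords together in one job definition (the workstation, folder, job, and user names are hypothetical):
$jobs
AGENT1#PAYROLL/UPDATE
 docommand "/opt/scripts/update.sh"
 streamlogon payadm
 succoutputcond UPDATE_OK "(RC <= 3)"
 outputcond STATUS_ERR1 "RC=1"
 recovery stop
In the job stream definition, a successor job could then reference STATUS_ERR1 through a conditional dependency on the UPDATE job, so that it runs only when the condition is satisfied (see Job stream definition for the syntax).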
recovery
Recovery options for the job. The default is stop with no recovery job and no recovery prompt. Enter one of the recovery options, stop, continue, or rerun. This can be followed by a recovery job, a recovery prompt, or both.
stop
If the job ends abnormally, do not continue with the next job.
continue
If the job ends abnormally, continue with the next job. The job is not listed as abended in the properties of the job stream. If no other problems occur, the job stream completes successfully.
rerun
If the job ends abnormally, rerun the job. You can use it in association with the after [folder/][workstation#][folder/]jobname and repeatevery hhmm options, or with the after [folder/][workstation#][folder/]jobname and abendprompt "text" options. You can optionally specify one or more of the following options to define a rerun sequence:
same_workstation
Specifies whether the job must be rerun on the same workstation as the parent job. This option is applicable only to pool and dynamic pool workstations.
repeatevery hhmm
Specifies how often HCL Workload Automation attempts to rerun the failed job. The default value is 0. The maximum supported value is 99 hours and 59 minutes. The countdown for starting the rerun attempts begins after the parent job, or the recovery job if any, has completed.
for number attempts
Specifies the maximum number of rerun attempts to be performed. The default value is 1. The maximum supported value is 10,000 attempts.
If you specify a recovery job and both the parent and recovery jobs fail, the dependencies of the parent job are not released and its successors, if any, are not run. If you have set the rerun option, the rerun is not performed. In this case, you must manually perform the following steps:
  1. Manually confirm the recovery job is in SUCC state.
  2. Clean up the environment by performing manually the operations that were to be performed by the recovery job.
  3. Submit a rerun of the parent job.
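For example, a minimal sketch of a job that, if it ends abnormally, is rerun every 10 minutes for at most 3 attempts (the workstation, folder, job, and user names are hypothetical; see the syntax at the beginning of this section for the exact placement of the rerun options):
$jobs
FTA1#MYFOLDER/EXTRACT
 scriptname "/opt/scripts/extract.sh"
 streamlogon batch
 description "extract data"
 recovery rerun repeatevery 0010 for 3 attempts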
after [folder/][workstation#][folder/]jobname

Specifies the name of a recovery job to run if the parent job ends abnormally. Recovery jobs are run only once for each abended instance of the parent job.

You can specify the recovery job's workstation if it is different from the parent job's workstation. The default is the parent job's workstation. Not all jobs are eligible to have recovery jobs run on a different workstation. Follow these guidelines:
  • If the recovery job runs on a workstation other than the parent job's workstation, both the recovery job's workstation and the parent job's workstation must have fullstatus set to on.
  • If either workstation is an extended agent, it must be hosted by a domain manager or a fault-tolerant agent with a value of on for fullstatus.
  • The recovery job workstation can be in the same domain as the parent job workstation or in a higher domain.
  • If the recovery job workstation is a fault-tolerant agent, it must have a value of on for fullstatus.
abendprompt "text"
Specifies the text of a recovery prompt, enclosed between double quotation marks, to be displayed if the job ends abnormally. The text can contain up to 64 characters. If the text begins with a colon (:), the prompt is displayed, but no reply is required to continue processing. If the text begins with an exclamation mark (!), the prompt is displayed, but it is not recorded in the log file. You can also use HCL Workload Automation parameters.

See Using variables and parameters in job definitions for more information.
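
For example, a minimal sketch that combines a recovery job on a different workstation with a recovery prompt (the workstation, folder, job, and user names are hypothetical):
$jobs
FTA1#MYFOLDER/LOAD_DATA
 scriptname "/opt/scripts/load.sh"
 streamlogon batch
 description "load data"
 recovery stop after FTA2#MYFOLDER/CLEANUP
 abendprompt "Check the load logs before replying to this prompt"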

Table 4 (Recovery options and actions) summarizes all possible combinations of recovery options and actions.

The table is based on the following criteria from a job stream called sked1:
  • Job stream sked1 has two jobs, job1 and job2.
  • If selected for job1, the recovery job is jobr.
  • job2 is dependent on job1 and does not start until job1 has completed.
Table 4. Recovery options and actions

Recovery prompt: No, Recovery job: No
  Stop: Intervention is required.
  Continue: Run job2.
  Rerun: Rerun job1. If job1 ends abnormally, issue a prompt. If reply is yes, repeat above. If job1 is successful, run job2.

Recovery prompt: Yes, Recovery job: No
  Stop: Issue recovery prompt. Intervention is required.
  Continue: Issue recovery prompt. If reply is yes, run job2.
  Rerun: Issue recovery prompt. If reply is yes, rerun job1. If job1 ends abnormally, repeat above. If job1 is successful, run job2.

Recovery prompt: No, Recovery job: Yes
  Stop: Run jobr. If it ends abnormally, intervention is required. If it is successful, run job2.
  Continue: Run jobr. Run job2.
  Rerun: Run jobr. If jobr ends abnormally, intervention is required. If jobr is successful, rerun job1. If job1 ends abnormally, issue a prompt. If reply is yes, repeat above. If job1 is successful, run job2.

Recovery prompt: Yes, Recovery job: Yes
  Stop: Issue recovery prompt. If reply is yes, run jobr. If it ends abnormally, intervention is required. If it is successful, run job2.
  Continue: Issue recovery prompt. If reply is yes, run jobr. Run job2.
  Rerun: Issue recovery prompt. If reply is yes, run jobr. If jobr ends abnormally, intervention is required. If jobr is successful, rerun job1. If job1 ends abnormally, repeat above. If job1 is successful, run job2.
Notes:
  1. "Intervention is required" means that job2 is not released from its dependency on job1, and therefore must be released by the operator.
  2. The continue recovery option overrides the abend state, which might cause the job stream containing the abended job to be marked as successful. This prevents the job stream from being carried forward to the next production plan.
  3. If you select the rerun option without supplying a recovery prompt, HCL Workload Automation generates its own prompt.
  4. To reference a recovery job in conman, use the name of the original job (job1 in the scenario above, not jobr). Only one recovery job is run for each abnormal end.

Examples

The following is an example of a file containing two job definitions:
$jobs
cpu1#gl1
     scriptname "/usr/acct/scripts/gl1"
     streamlogon acct
     description "general ledger job1"
bkup
     scriptname "/usr/mis/scripts/bkup"
     streamlogon "^mis^"
     recovery continue after myfolder/recjob1 
The following example shows how to define the HCL Workload Automation job TWSJOB, stored in the APP/DEV folder, which manages the workload broker job broker_1 and runs on the same workload broker agent where TWSJOB2 ran:
ITDWBAGENT#APP/DEV/TWSJOB
SCRIPTNAME "broker_1 -var var1=name,var2=address
            -twsaffinity jobname=TWSJOB2"
STREAMLOGON brkuser
DESCRIPTION "Added by composer."
TASKTYPE BROKER
RECOVERY STOP
The following example shows how to define a job which is assigned to a dynamic pool of UNIX agents and runs the df command:
DPOOLUNIX#JOBDEF7
 TASK
 <?xml version="1.0" encoding="UTF-8"?>
    <jsdl:jobDefinition
          xmlns:jsdl="http://www.ibm.com/xmlns/prod/scheduling/1.0/jsdl"
          xmlns:jsdle="http://www.ibm.com/xmlns/prod/scheduling/1.0/jsdle">
    <jsdl:application name="executable">
    <jsdle:executable interactive="false">
    <jsdle:script>df</jsdle:script>
    </jsdle:executable>
    </jsdl:application>
    </jsdl:jobDefinition>
 DESCRIPTION "Added by composer."
 RECOVERY STOP
The following example shows how to define a job which is assigned to a dynamic pool of Windows agents and runs the dir command:
DPOOLWIN#JOBDEF6
 TASK
    <?xml version="1.0" encoding="UTF-8"?>
    <jsdl:jobDefinition
          xmlns:jsdl="http://www.ibm.com/xmlns/prod/scheduling/1.0/jsdl"
          xmlns:jsdle="http://www.ibm.com/xmlns/prod/scheduling/1.0/jsdle">
    <jsdl:application name="executable">
    <jsdle:executable interactive="false">
    <jsdle:script>dir</jsdle:script>
    </jsdle:executable>
    </jsdl:application>
    </jsdl:jobDefinition>
 DESCRIPTION "Added by composer."
 RECOVERY STOP
The following example shows how to define a job which is assigned to the NC115084 agent and runs the dir command:
NC115084#JOBDEF3
 TASK
    <?xml version="1.0" encoding="UTF-8"?>
    <jsdl:jobDefinition
          xmlns:jsdl="http://www.ibm.com/xmlns/prod/scheduling/1.0/jsdl"
          xmlns:jsdle="http://www.ibm.com/xmlns/prod/scheduling/1.0/jsdle">
    <jsdl:application name="executable">
    <jsdle:executable interactive="false">
    <jsdle:script>dir</jsdle:script>
    </jsdle:executable>
    </jsdl:application>
    </jsdl:jobDefinition>
 DESCRIPTION "Added by composer."
 RECOVERY STOP
The following example shows how to define a job which is assigned to a pool of UNIX agents and runs the script defined in the script tag:
POOLUNIX#JOBDEF5
 TASK
    <?xml version="1.0" encoding="UTF-8"?>
    <jsdl:jobDefinition
          xmlns:jsdl="http://www.ibm.com/xmlns/prod/scheduling/1.0/jsdl"
          xmlns:jsdle="http://www.ibm.com/xmlns/prod/scheduling/1.0/jsdle">
    <jsdl:application name="executable">
    <jsdle:executable interactive="false">
    <jsdle:script>#!/bin/sh
sleep 60
dir</jsdle:script>
    </jsdle:executable>
    </jsdl:application>
    </jsdl:jobDefinition>
 DESCRIPTION "Added by composer."
 RECOVERY STOP
The following example shows how to define a job which is assigned to a pool of Windows agents and runs the script defined in the script tag:
POOLWIN#JOBDEF4
 TASK
    <?xml version="1.0" encoding="UTF-8"?>
    <jsdl:jobDefinition
          xmlns:jsdl="http://www.ibm.com/xmlns/prod/scheduling/1.0/jsdl"
          xmlns:jsdle="http://www.ibm.com/xmlns/prod/scheduling/1.0/jsdle">
    <jsdl:application name="executable">
    <jsdle:executable interactive="false">
    <jsdle:script>ping -n 120 localhost</jsdle:script>
    </jsdle:executable>
    </jsdl:application>
    </jsdl:jobDefinition>
 DESCRIPTION "Added by composer."
 RECOVERY STOP

See also

From the Dynamic Workload Console you can perform the same task as described in Creating job definitions.

For more information about how to create and edit scheduling objects, see Designing your Workload.