Starting production

About this task

This section provides a step-by-step path through the basic operations you can perform to quickly implement HCL Workload Automation in your environment using the command-line interface. It is assumed that:
  • These steps are performed on the master domain manager immediately after successfully installing the product on the systems where you want to perform your scheduling activities.
  • The user ID used to perform the operations is the same as the one used for installing the product.

If you are not familiar with HCL Workload Automation, you can follow the non-optional steps to define a limited number of scheduling objects, and add more as you become familiar with the product. You might start, for example, with two or three of your most frequently run applications, defining only the scheduling objects they require.

Alternatively, you can use the Dynamic Workload Console to perform both the modeling and the operational tasks. Refer to the corresponding product documentation for more information.

The first activity you must perform is to access the HCL Workload Automation database and define the environment where you want to perform your scheduling activities, using the HCL Workload Automation scheduling object types. To do this, perform the following steps:
  1. Set up the HCL Workload Automation environment variables

    Run one of the following scripts in a system shell to set the PATH and TWS_TISDIR variables:

    . ./TWS_home/tws_env.sh for Bourne and Korn shells on UNIX®

    source ./TWS_home/tws_env.csh for C shells on UNIX®

    TWS_home\tws_env.cmd on Windows®
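
    For example, assuming the product is installed under /opt/wa/TWS (a sample path; use your actual TWS_home), you would run the following in a Korn shell:

      . /opt/wa/TWS/tws_env.sh
      echo $TWS_TISDIR

    The echo command simply verifies that the variable is now set.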

  2. Connect to the HCL Workload Automation database
    You can use the following syntax to connect to the master domain manager as TWS_user:
    composer -user <TWS_user> -password <TWS_user_password>
    where TWS_user is the user ID you specified at installation time.
    Note: If you want to perform this step and the following ones from a system other than the master domain manager you must specify the connection parameters when starting composer as described in Setting up options for using the user interfaces.
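
    For example, the following opens an interactive composer session as the sample user twsuser:

      composer -user twsuser -password tws_pwd

    If you connect from another system, you can store the connection attributes instead of typing them each time. The following is a minimal sketch; the exact file locations and attribute names are described in Setting up options for using the user interfaces, and all values shown are examples:

      # localopts - connection attributes
      HOST = mdm.example.com
      PORT = 31116

      # useropts - credentials of the connecting user
      USERNAME = twsuser
      PASSWORD = "tws_pwd"
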
  3. Optionally, add to the database the definitions that describe the topology of your scheduling environment in terms of:
    • Domains

      Use this step if you want to organize your scheduling environment into a hierarchical tree of domains. Using multiple domains decreases network traffic by reducing the communication between the master domain manager and the other workstations. For additional information, refer to Domain definition.

    • Workstations

      Define a workstation for each machine belonging to your scheduling environment, with the exception of the master domain manager, which is automatically defined in the database when you install HCL Workload Automation. For additional information, refer to Workstation definition. A sample domain and workstation definition follows this list.
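
    The following is a minimal sketch of the two definitions, using sample names (DOMAIN1, FTA1) and a sample node address; adapt the types and attributes to your environment:

      DOMAIN DOMAIN1
        DESCRIPTION "sample regional domain"
        PARENT MASTERDM
      END

      CPUNAME FTA1
        DESCRIPTION "sample fault-tolerant agent"
        OS UNIX
        NODE fta1.example.com
        TCPADDR 31111
        DOMAIN DOMAIN1
        FOR MAESTRO
          TYPE FTA
          AUTOLINK ON
      END

    You can save definitions like these in a file and load them with the composer add command, for example composer add mytopology.txt.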

  4. Optionally define the users allowed to run jobs on Windows® workstations

    Define any user allowed to run jobs using HCL Workload Automation by specifying user name and password. For additional information, refer to User definition.
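
    A minimal sketch of a user definition, assuming a sample Windows workstation WINWKS1 and a sample user Administrator:

      USERNAME WINWKS1#Administrator
        PASSWORD "sample_pwd"
      END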

  5. Optionally define calendars

    Calendars allow you to determine whether and when a job or job stream runs. You can use them to include or exclude days and times for processing. Calendars are not strictly required to define the scheduling days of job streams (simple or rule-based run cycles can serve as well); their main purpose is to define global sets of dates that you can reuse across multiple job streams. For additional information, refer to Calendar definition.
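
    A minimal sketch of a calendar listing sample holiday dates (the dates here use the mm/dd/yyyy format; the accepted format depends on your settings):

      $CALENDAR
      HOLIDAYS "sample public holidays"
       01/01/2025 05/26/2025 12/25/2025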

  6. Optionally define parameters, prompts, and resources

    For additional information, refer to Variable and parameter definition, Prompt definition, and Resource definition.
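
    A minimal sketch of the three object types, using sample names and values:

      $PARM
      BASEPATH "/opt/app/data"

      $PROMPT
      APPREADY "Is the application ready for batch processing?"

      $RESOURCE
      FTA1#TAPES 2 "sample tape units on FTA1"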

  7. Define jobs and job streams

    For additional information, refer to Job and to Job stream definition.
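
    A minimal sketch of a job and of a job stream that runs it daily at 02:00, using sample names and paths:

      $JOBS
      FTA1#BACKUP
       SCRIPTNAME "/opt/app/scripts/backup.sh"
       STREAMLOGON twsuser
       DESCRIPTION "sample nightly backup job"
       RECOVERY STOP

      SCHEDULE FTA1#NIGHTLY
      ON RUNCYCLE DAILY "FREQ=DAILY;INTERVAL=1"
      AT 0200
      :
      FTA1#BACKUP
      END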

  8. Optionally define restrictions and settings to control when jobs and job streams run.
    You can define dependencies for jobs and job streams; a job stream can have up to 40 dependencies. If you need to define more than 40, you can group them in a join dependency. In this case, the join is used simply as a container of standard dependencies: any standard dependencies in it that are not met are processed as usual and do not cause the join dependency to be considered suppressed. For more information about join dependencies, see Joining or combining conditional dependencies and join. Dependencies can be the following (a combined example covering dependencies, time settings, limit, and priority follows this step):
    • Resource dependencies
    • File dependencies
    • Job and job stream follow dependencies, both on successful completion of jobs and job streams and on satisfaction of specific conditions by jobs and job streams
    • Prompt dependencies
    You can define time settings that determine when jobs and job streams run, in terms of:
    • Run cycles
    • Time constraints
    You can control how many jobs run concurrently on a workstation or within a job stream by setting:
    • Limit
    • Priority
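
    The following sketch extends the sample NIGHTLY job stream from the previous step with a file dependency, a resource dependency, a prompt dependency, a follows dependency, time constraints, a limit, and a priority (all names and values are examples):

      SCHEDULE FTA1#NIGHTLY
      ON RUNCYCLE DAILY "FREQ=DAILY;INTERVAL=1"
      AT 0200 UNTIL 0600
      LIMIT 5
      PRIORITY 30
      :
      FTA1#EXTRACT
       OPENS "/opt/app/data/input.dat"
       NEEDS 1 FTA1#TAPES
       PROMPT APPREADY
      FTA1#BACKUP
       FOLLOWS EXTRACT
      END
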
  9. Automate the plan extension at the end of the current production term
    Add the FINAL job stream to the database so that the production plan is automatically extended at the end of each production term. From the composer prompt, run the following command:
    add Sfinal
    For additional information, refer to Automating production plan processing.
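
    For example, from a system shell on the master domain manager:

      cd /opt/wa/TWS        # a sample path; use your actual TWS_home, where the Sfinal file resides
      composer add Sfinal
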
  10. Generate the plan

    Run the JnextPlan command to generate the production plan. This command processes the scheduling information stored in the database and creates the production plan for the time frame specified in the JnextPlan command; the default time frame is 24 hours. If you automated the plan extension as described in the previous step, you need to run the JnextPlan command only the first time.
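
    For example, to generate the plan for the default 24 hours and then check its contents:

      JnextPlan
      conman "ss @#@"

    The conman ss (showschedules) command lists the job streams now included in the production plan.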

When you complete this step-by-step process, your scheduling environment is up and running: an ordered sequence of jobs and job streams is processed in batch against the resources defined on your workstations. By default, the first time you run the JnextPlan command, the number of jobs that can run simultaneously on a workstation is zero. Make sure you increase this value with the limit cpu command to allow jobs to run on that workstation; see the section limit cpu for more details and the example below.
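
For example, to raise the job limit of the sample workstation FTA1 to 10 and verify the change:

    conman "lc FTA1;10"
    conman "sc FTA1"

The lc (limit cpu) command sets the workstation job limit; sc (showcpus) displays the workstation status, including its current limit.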

If you want to modify anything while the production plan is already in process, use the conman program. While the production plan is processing across the network, you can continue to define or modify jobs and job streams in the database. Note, however, that these modifications are used only if you submit the modified jobs or job streams, using the sbj command for jobs or the sbs command for job streams, on a workstation that has already received the plan, or after a new production plan is generated using JnextPlan. See Managing objects in the plan - conman for more details about the conman program and the operations you can perform on the production plan in process.
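
For example, to submit into the current plan a job and a job stream that you modified in the database (the names are the samples used in the previous steps):

    conman "sbj FTA1#BACKUP"
    conman "sbs FTA1#NIGHTLY"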