Configuring a run of a Web UI test by using variables

After you have added the test resources that you created in the desktop client to the project, you can configure a Web UI test run from HCL OneTest Server by using variables. You must specify the variables so that the Web UI test runs on mobile devices that are connected to an agent or a device cloud.

Before you begin

You must have completed the following tasks:

About this task

You can run the Web UI test on a mobile device, emulator, or simulator that is connected to an agent or a device in a device cloud. You must create the test in HCL OneTest UI, and then add the test asset to your project on HCL OneTest Server. You can then run the Web UI test on the devices that are connected to any of the following agents or clouds: the UI Test agent, the BitBar Cloud, the BrowserStack Cloud, the pCloudy Cloud, or the Perfecto Mobile Cloud.

You must provide the details of the UI Test agent or the mobile device cloud to which the mobile devices are connected as variables in Step 12. You can either enter the variables manually or import a file that contains the variables.

You must refer to the following tables for the variables that are required for a successful run. Each table lists the name of the variable and the action for the Value field of that variable:
Table 1. Variables for the UI Test agent

RTW_WebUI_Browser_Selection: Specify the web browser and the device to use on the UI Test agent. For example, Chrome(Emulator:Pixel_3a_API_33).

appium.server.host: Specify the hostname or IP address of the UI Test agent.

appium.server.port: Specify the port on the UI Test agent that is configured to communicate with HCL OneTest Server.
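
For example, a run on a UI Test agent might use name and value pairs similar to the following. The host, port, and device name are placeholders only; replace them with the details of your own agent:

    RTW_WebUI_Browser_Selection = Chrome(Emulator:Pixel_3a_API_33)
    appium.server.host = 192.0.2.10
    appium.server.port = 4723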

Table 2. Variables for the BitBar Cloud

RTW_WebUI_Browser_Selection: Specify the web browser and the device to use in the BitBar Cloud. For example, Chrome(BitBar Cloud:Google Pixel 7 Pro).

bitbar.apikey: Specify the user token generated for your BitBar Cloud account to authenticate your connection with the BitBar Cloud.

bitbar.host: Specify the hostname of the BitBar Cloud instance.

bitbar.project: Specify the name of the project that contains the recorded test.

bitbar.testrun: Specify a name for the test run; this name is displayed for the run in the BitBar Cloud dashboard.
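
For example, a run in the BitBar Cloud might use name and value pairs similar to the following. The values in angle brackets are placeholders, and the project and test run names are illustrative only:

    RTW_WebUI_Browser_Selection = Chrome(BitBar Cloud:Google Pixel 7 Pro)
    bitbar.apikey = <API key of your BitBar Cloud account>
    bitbar.host = <hostname of your BitBar Cloud instance>
    bitbar.project = WebUI_Regression
    bitbar.testrun = Nightly_Checkout_Flow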

Table 3. Variables for the BrowserStack Cloud

RTW_WebUI_Browser_Selection: Specify the web browser and the device to use in the BrowserStack Cloud. For example, Chrome(BrowserStack Cloud:Google Pixel 7 Pro).

browserstack.host: Specify the URL of the BrowserStack Cloud instance.

browserstack.username: Specify the user name of your BrowserStack Cloud account.

browserstack.apikey: Specify the API key of your BrowserStack Cloud account to authenticate your connection with the BrowserStack Cloud.

browserstack.project: Specify the name of the project that contains the recorded test. Note: This variable is optional.
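
For example, a run in the BrowserStack Cloud might use name and value pairs similar to the following. The values in angle brackets are placeholders, and the project name is illustrative only:

    RTW_WebUI_Browser_Selection = Chrome(BrowserStack Cloud:Google Pixel 7 Pro)
    browserstack.host = <URL of your BrowserStack Cloud instance>
    browserstack.username = <user name of your BrowserStack Cloud account>
    browserstack.apikey = <API key of your BrowserStack Cloud account>
    browserstack.project = WebUI_Regression
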
Table 4. Variables for the pCloudy Cloud

RTW_WebUI_Browser_Selection: Specify the web browser and the device to use in the pCloudy Cloud. For example, Chrome(pCloudy Cloud:Google Pixel 7 Pro).

pcloudy.apikey: Specify the API key that is generated for your pCloudy Cloud account to authenticate your connection with the pCloudy Cloud.

pcloudy.username: Specify the username with which you can access the pCloudy Cloud instance.

pcloudy.host: Specify the hostname of the pCloudy Cloud instance.
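
For example, a run in the pCloudy Cloud might use name and value pairs similar to the following. The values in angle brackets are placeholders:

    RTW_WebUI_Browser_Selection = Chrome(pCloudy Cloud:Google Pixel 7 Pro)
    pcloudy.apikey = <API key of your pCloudy Cloud account>
    pcloudy.username = <user name of your pCloudy Cloud account>
    pcloudy.host = <hostname of your pCloudy Cloud instance>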

Table 5. Variables for the Perfecto Mobile Cloud

RTW_WebUI_Browser_Selection: Specify the web browser and the device to use in the Perfecto Mobile Cloud. For example, Chrome(Perfecto Mobile Cloud:Google Pixel 7 Pro).

perfecto.securitytoken: Specify the user token generated for your Perfecto Mobile Cloud account to authenticate your connection with the Perfecto Mobile Cloud.

perfecto.host: Specify the hostname of the Perfecto Mobile Cloud instance.
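
For example, a run in the Perfecto Mobile Cloud might use name and value pairs similar to the following. The values in angle brackets are placeholders:

    RTW_WebUI_Browser_Selection = Chrome(Perfecto Mobile Cloud:Google Pixel 7 Pro)
    perfecto.securitytoken = <security token of your Perfecto Mobile Cloud account>
    perfecto.host = <hostname of your Perfecto Mobile Cloud instance>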

Procedure

  1. Log in to HCL OneTest Server.

    The team space that contains your project is displayed.

  2. Open the project that contains the test assets, and then click Execution.
  3. Select the branch of the repository that contains the test assets.
    The test assets that are contained in the selected branch of the repository are displayed in the following tabs on the Execution page:
    SUITES: Lists all suites, Compound Tests, JMeter tests, JUnit tests, Postman tests, Rate Schedules, Selenium tests, or VU Schedules that are in the selected branch of the repository.
    TESTS: Lists all API tests, functional tests, or performance tests that are in the selected branch of the repository.
    ADVANCED: Lists all assets that are displayed when custom filters are applied for assets in the selected branch of the repository.
  4. Click the TESTS or the ADVANCED tab.
  5. Identify the test asset that you want to run by performing any of the following actions:
    • Scroll through the list.
      Tip: You can hover over the icon in the Type column to see the type of the test asset.
      Note: You can also identify the type of the asset from the icon that represents the test type. The icon indicates the tabs in which the asset is listed:
      • API tests, functional tests, and performance tests are listed in the TESTS tab and the ADVANCED tab.
      • AFT Suites, API Suites, Compound Tests, HCL AppScan CodeSweep tests, JMeter Tests, JUnit Tests, Postman tests, Rate Schedules, Selenium tests, and VU Schedules are listed in the SUITES tab and the ADVANCED tab.
    • Search for the test asset by entering any text contained in the test asset name in the Search field.
    • Click the Filter icon in the SUITES or TESTS tab to filter the displayed assets based on the asset type.

      For example, select API Suite in the SUITES tab to display all API Suites or select Functional Test in the TESTS tab to display all functional tests that are in the selected branch of the repository.

    • Click the Filter icon in the ADVANCED tab, and then create a filter query by using the New filter option and performing the following steps:
      1. Click New filter.
      2. Enter a name for the filter.
      3. Select an operator, and then add a rule or a group of rules.
      4. Add or enter the relevant parameters and either select or enter the condition and the criteria for the condition.
        You can select a parameter from the following list:
        • Type
        • Test Asset Name
        • Test Asset Path
        • Last Result
        • Next Run
        • Components
      5. Save the filter query to apply it and filter the assets based on the query.

        The test assets that match the filter criteria are displayed.

    • Retrieve and apply a saved filter query, if you saved filter queries previously, by performing the following steps:
      Note: The filter query applied previously is selected and the assets based on that filter query are displayed. To apply a different filter query, you must have created and saved the filter query.
      1. Click the Filter icon in the ADVANCED tab.

        The filter queries that you created and saved are displayed.

      2. Click the filter that you want to apply.

        The test assets that match the filter criteria are displayed.

    You have identified the test asset that you want to run.

    Clicking the test name displays the Details panel. You can view the details of the test such as the description, the branch in the repository that contains the asset, the Git details, and the details of the commits to the repository. You can also view the history of the test runs for the specific test under the History tab of the Details panel.

  6. Click the Execute icon in the row of the identified test asset.
    The Execute test asset dialog is displayed.
    Notes:
    • If you have configured some or all of the settings for the current test run, and you do not want to continue with those settings, you can reset the settings by clicking Reset.

    • If you want to repeat a test run and do not want to use the saved settings from a previous run, you can reset all the saved settings to their default values by clicking Reset.

  7. Select the version of the test resources that you want to run by completing any of the following actions:
    Note: The test resources in the version can contain the test assets, datasets, AFT XML files, API environment tags, and other resources specific to projects that are created in any of the desktop products.
    • Expand the list in the Version field, find the version of the test resources, and then select the version.
      Use the following details about the version of the test resources that are displayed to identify the version that you want:
      • Commit message.
      • Tags labeled by the user for the version committed.
      • The user who committed the version to the repository.
      • Relative time of the commit. For example, 2 hours ago or 3 days ago.

      The list displays the versions of the test resources committed by all users to the branch in the repository. The versions are listed with the latest committed version first, followed by the versions that were committed previously.

    • Expand the list in the Version field, and then search for the version that you want to select by entering a partial or the complete commit message of that version.

      The version that matches the search criteria is displayed and it is selected for the test run.

    The default value for the version selected for the run is the latest version in the selected branch of the repository. If you do not select any version, then the latest version is selected for the test run.
    Notes:
    • If you selected a version but you do not want to use that version in the test run, you can remove the selected version by clicking the remove icon. As a result, the default version is selected for the test run.
    • If you repeated a test or ran the test again from the Results page, then the version of the test resources that you chose for the earlier run is shown as selected. You can either retain this version or select any other version from the list. You can also remove the previous version by clicking the remove icon.
  8. Select the time for scheduling the test run from the following options:
    • No action is required if you want to initiate the test run immediately after you click Execute.
      Important: Click Execute only after you have configured the other settings in this dialog.
      Note: The default time for scheduling a run is Now.
    • Select Schedule and perform the following actions if you want to schedule a single test run or configure recurring test runs:
      To schedule a single test run, perform the following steps:
      1. Click the Calendar icon in the row of the Start field.
      2. Select the date.
      3. Select the time at which the test run must start.
      4. Select the Never option in the Repeat every field.
      5. Click Save.

      The test run is scheduled to run at the selected time on the scheduled date.

      To configure recurring test runs, perform the following steps:
      1. Click the Calendar icon in the row of the Start field.
      2. Select the date.
      3. Select the time at which the test run must start.
      4. Set the frequency at which the test runs must run by entering the number in the Repeat every field, and then select the period from the list. You can select from the following options:
        • Minute
        • Hour
        • Day
        • Week
        • Month

        For example, if you want the test run to be run every day at the set time, enter 1, and then select the Day(s) option.

        You can also schedule the test to run at the set time on specific weekdays by selecting the Week(s) option and then selecting the days of the week. For example, you can select Monday, Wednesday, and Friday as the days on which the test runs.
      5. Set the option to end the recurring test runs from the following options:
        • Select the Never option, if you do not want the recurring test runs to end.
        • Select the On option, and then click the Calendar icon. You can select the date and time after which the scheduled test runs do not run.
      6. Click Save.

      The recurring test runs are scheduled to start the first run at the selected time on the scheduled date, and to repeat the run at the frequency that you set.

  9. Enter a label for the test run that helps you to identify the test on the Results page.

    After the test run completes, the label that you entered is displayed for the test under the Labels column on the Results page. After you have created a label, any member of the project can use that label.

    The default value for the Label field is null or an empty field.

    Important: The configuration that you set for the test run in the Execute test asset dialog is preserved when you run the same test again. Those changes are not visible when another user logs in to HCL OneTest Server. For example, if you created new variables on the server, those variables are available only for you when the same test is run again.

    If you want to run the test immediately or at the scheduled time, click Execute, or continue with the next step.

  10. Click Advanced to make the following advanced configurations:
    1. Enter any JVM arguments that must be passed to the test run at run time in the JVM Arguments field, if applicable for the test.

      For example, you can set a maximum Java heap size.

    2. Enter program arguments that must be passed to the test run at run time in the Program Arguments field, if applicable for the test.
      If the test that you want to configure supports the presentation of the test results data in different formats, then do the required tasks in the following scenarios:
      If... Then...
      You want the test results data in the test log format.

      Enter -history testlog in the Program Arguments field.

      You want to obtain the test results as a Jaeger trace.

      Enter -history jaeger in the Program Arguments field.

      You want the report as a test log and as a Jaeger trace.

      Enter -history jaeger,testlog in the Program Arguments field.

      Jaeger is already set as a program argument and you want to remove the Jaeger trace for your test.

      Delete the -history jaeger entry in the Program Arguments field.

      Note: The default report format is the test log format for the test reports.
    3. Enter the environment variables that must be passed to the test run at run time in the Environment Variables field, if applicable for the test.

      For example, enter the environment variables when the third-party libraries that are used in the test run refer to the environment variables for configuration.

    Note: You must separate the arguments or variables with a white space when you enter them in the same line or start each argument or variable on a new line.

    The default value for each of the fields for the advanced settings is null or an empty field.
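
    For example, a hypothetical combination of advanced settings might look like the following. The heap size and the environment variable are illustrative only; the -history values are described in the preceding table:

      JVM Arguments: -Xmx2048m
      Program Arguments: -history jaeger,testlog
      Environment Variables: TEST_ENV=staging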

    If you want to run the test immediately or at the scheduled time, click Execute, or continue with the next step.

  11. Follow the instructions if you are running a test asset that contains datasets:
    1. Click the DATA SOURCES tab, if it is not already open.
    2. Consider the following information about datasets before you select a dataset:

      The default value for the datasets in the DATA SOURCES tab is null if the test asset did not have an associated dataset. If the asset had an associated dataset, the default value is the associated dataset.

      You can utilize the dataset stored as an Excel or CSV file to override the original dataset associated with the Suite, test, or schedule. For example, when you have associated a dataset in .xlsx, .xls, or .csv format with the test or schedule in desktop clients and if you have another set of data stored in an Excel or CSV file, then you can select that dataset from the Override list.
      Remember: You must have uploaded the dataset as an Excel or CSV file into the Git repository, and ensured that both the original dataset (from the test asset) and new datasets (added to the project) have the same column names.
    3. Select the dataset that you want to use in the test run from any of the following options:
      • Select the dataset that is displayed as the default dataset when the test asset contains a single dataset.
        Note: If there is only one dataset in the test asset, then that dataset is displayed as the default dataset.
      • Select the dataset from the list.
        Note: If there are multiple datasets in the test asset, the datasets are listed in their increasing alphabetical order.
      • Select the dataset from the Override list to override the dataset that was associated with the test in the desktop client.
        Important: If the test contains an encrypted dataset, the Project Owner must classify it in the DATA SECURITY tab on the Project page before you can select it. You must have added datasets to your project from the Dataset page for the datasets to be displayed in the Override list.

    If you want to run the test immediately or at the scheduled time, click Execute, or continue with the next step.

  12. Perform the following steps to provide the variables that specify the UI Test agent or the cloud to which the mobile devices are connected. You can either enter the variables that must be passed to the test at run time or import the file that contains the variables.
    1. Click the VARIABLES tab, if it is not already open.
    2. Choose one of the following methods to add the variables:
      • To add new variables manually, click the Add Variable icon, and then enter the name and the value of the variable.
      • To add new variables from your local computer or from the Git repository that is associated with your server project, click the Upload icon, and then select Upload from local system or Browse from server to select the variable file.
        Note: You must have created a file with the variables before you can select the file.
  13. Follow the instructions if you want to export the test results to a Jira issue in your Xray project in Jira:
    1. Click the RESULTS tab.
    2. Perform the actions listed in the following table to select an option to export the test results. Each option describes the result of the export and the actions that you must perform:

      Jira Xray

      The test results of the test that you configure for a run are exported to the selected issue after the test run is completed. The results are exported to the selected issue that exists in your Xray project in Jira.

      Perform the following steps:
      1. Select the Jira Xray option.

        The Select issue field is displayed.

      2. Click the field to view the list of issues in the Xray project in Jira. Alternatively, you can enter the issue key of an issue in your Xray project.
      3. Select the issue to which you want to export the test results.
      4. Continue with the next step.

      New Jira item

      The test results of the test that you configure for a run are exported to a new Jira issue. The issue is created in your project in Jira after the test run is completed.

      Perform the following steps:
      1. Select the New Jira item option.
      2. Continue with the next step.

      None

      The test results are not exported.
      Note: This is the default option.
      Perform the following steps when:
      • You do not want to export the test results.
      • You selected a Jira issue in a previous run of the same test, and you do not want to export the results of the current run.
      1. Select the None option.
      2. Continue with the next step.
  14. Perform any of the following actions that depend on whether you are using a remote performance agent or a mobile device cloud:
    • Go to Step 15, if you are using an agent.
    • Go to Step 16, if you are using a mobile device cloud.
  15. Follow the instructions if you want to run the tests on the remote performance agent that is connected either to the default Kubernetes cluster or the remote Kubernetes cluster:
    1. Click the LOCATION tab, if it is not already open.
      The performance agents that are configured in the test asset are listed under the Host column. The information about the availability and capabilities of the agents is displayed.
      Note: You must have added performance agents to your project from the Agents and Intercepts page for the performance agents to be displayed under the Override column.

      The default value for the performance agents is null or an empty field if no performance agents were configured in the test asset. If performance agents are configured in the test asset, then the default performance agent is the first item in the list of performance agents, which is sorted in increasing alphabetical order.

    2. Select the performance agent where you want to run the test asset in the following scenarios:
      • If the performance agent that is specified in the test asset is available and no other performance agents are added to the project, then that performance agent is displayed in the Override column. Select the performance agent in the Host column, which is also displayed in the Override column, as the location to run the test.
      • If the performance agent that is specified in the test asset is not available and no other performance agents are added to the project, then no performance agent is displayed in the Override column. You cannot run the test on the performance agent, and you must wait until the performance agent is available.
      • If the performance agent that is specified in the test asset is available and other performance agents are added to the project, then the performance agents that have capabilities that match the capabilities specified in the test asset are listed in the Override column as follows:
        • Performance agents with capabilities that best match the capabilities of the agent in the asset are at the top of the list.
        • Performance agents with capabilities that do not match the capabilities of the agent in the asset are listed subsequently.

        You can find the details of both performance agents and their matching capabilities when you hover over an agent in the Override column. Perform any of the following actions:
        • Select the performance agent in the Host column, which is also displayed in the Override column, as the location to run the test.
        • Select a performance agent in the Override column as the location to run the test.
    3. Click Execute.
  16. Click Execute.
    The test run is initiated.

Results

You have started or scheduled a test run of a Web UI test to run on a device that is connected to an agent or a device cloud.

What to do next

You can perform any of the following tasks: