Troubleshooting issues

You can find information about the issues that you might encounter while working with HCL OneTest Server, along with their causes and the resolutions that you can apply to fix them.

Table 1. Troubleshooting issues: installation

Problem

Cause

Solution

On Ubuntu, you encounter errors in the scripts that run when you install the server software.

At times, scripts might fail or appear to stop running due to any of the following reasons:
  • Slow connection speeds.
  • Insufficient CPU, memory, or disk resources.
  • An incorrectly configured firewall is enabled.
You can complete any of the following tasks:
  • To identify the issue, perform a diagnostic check by running the following command:
    journalctl -u k3s
    This command displays the log that you can check for the problem. See the example after this list for a way to narrow the output.
  • Run the following command to see which pods are running and which pods are not running:
    kubectl get pods -A

    Run the following command to get details about a specific pod:

    kubectl describe pod -n <namespace> <pod name>
  • Follow the on-screen instructions to resolve the errors.
  • Some issues can be solved by re-running the following script:
    sudo ./ubuntu-init.sh
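
For example, to narrow the diagnostic log to recent error messages only (a sketch; adjust the time window to suit your situation):

    journalctl -u k3s --since "20 min ago" | grep -i error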

On Ubuntu, DNS is not working as expected.

The DNS configuration that is used by the cluster can be displayed by using the following command:

kubectl get cm -n kube-system coredns -ojsonpath="{.data.Corefile}"
The forward setting displays the nameservers that are used. For example, you might see the following in the Corefile:
 .:53 {
   :
      forward . 8.8.8.8 9.9.9.9
   :
  }

A script (ubuntu-set-dns.sh) is supplied for managing these values.

For example, to set the DNS servers to the values shown in the previous example, run the following command:
sudo ./ubuntu-set-dns.sh --server 8.8.8.8 --server 9.9.9.9
Note: If you do not use sudo in the command, the script runs but the configuration might be lost if the cluster is restarted.
To learn more about the behavior of the script, run the following command:
sudo ./ubuntu-set-dns.sh --help
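
After the script completes, you can confirm that the new nameservers are in effect by displaying the cluster DNS configuration again:

kubectl get cm -n kube-system coredns -ojsonpath="{.data.Corefile}"

The forward setting in the output lists the nameservers that you set.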
When you run helm install, the created pods keep crashing, and the logs contain the following error: ACCESS_REFUSED when trying to connect to RabbitMQ

In some instances, the RabbitMQ password is not set up correctly during installation.

Manually apply the necessary password by running the following command:

kubectl exec -n <namespace> <release-name>-rabbitmq-0 -- rabbitmqctl change_password user \
    "$(kubectl get secret -n <namespace> <release-name>-rabbitmq -o jsonpath='{.data.rabbitmq-password}' | base64 --decode)"
Table 2. Troubleshooting issues: server administration

Problem

Cause

Solution

When a user is assigned an additional role, the change in the permissions is not observed in the browser.

Changed roles take effect only when a new session starts. Log out of the session and log in again for the changed role to take effect.

You see the following message displayed on HCL OneTest Server:

You can’t request to join a project that has no owners

You requested to join a project that no longer has an owner. Orphaned projects occur when the project owners are deleted; for example, when a person leaves the organization.

Ask an administrator to take ownership of the project, and then add you as a member.
Table 3. Troubleshooting issues: resource monitoring

Problem

Cause

Solution

You are not able to add a Prometheus server as a Resource Monitoring source.

The Prometheus server might not have been installed at the time of server installation.

Verify that the Prometheus server was installed by using Helm at the time of server installation. See Installing the server software on Ubuntu by using k3s. If it was not installed, consult your cluster administrator to get the Prometheus server installed and configured. See the example after this row for one way to check.
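
For example, one way to check whether Prometheus is present in the cluster (a sketch; replace <namespace> with the namespace used for the installation):

helm ls -n <namespace>
kubectl get pods -n <namespace> | grep -i prometheus

If no Prometheus release or pods are listed, the server was installed without the Prometheus server.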

Table 4. Troubleshooting issues: configuring test runs

Problem

Cause

Solution

When you configure a run of a schedule that matches the following conditions:
  • The schedule has two user groups configured to run on static agents when the schedule was created in HCL OneTest Performance 10.1.
  • One of the user groups is disabled and the asset is committed to the remote repository.
Both static agents are displayed as available for the test run on the Location tab of the Execute test asset dialog, even though only the agent that is configured for the enabled user group must be available.
The cause might be any of the following reasons:
  • The schedule was created in HCL OneTest Performance 10.1.
  • The user group that is disabled is not removed or deleted from the test resources.
  • The agent configured on the disabled user group is already added as an agent to the server project and is available for selection.
To resolve the problem, use either of the following methods:
  • By using HCL OneTest Performance 10.1.1.
    Perform the following steps:
    1. Open the schedule in HCL OneTest Performance 10.1.1.
    2. Save the schedule and the project.
    3. Commit your test asset to the remote repository.
    4. Proceed to configure a run for the schedule on HCL OneTest Server 10.1.1.
  • By using HCL OneTest Performance 10.1.
    Perform the following steps:
    1. Select the disabled user group.
    2. Click Remove.
    3. Save the schedule and the project.
    4. Commit your test asset to the remote repository.
    5. Proceed to configure a run for the schedule on HCL OneTest Server 10.1.1.
You have added a remote repository to your project that contains the test assets or resources of the following types:
  • Postman
  • JMeter
  • JUnit
The test assets or resources are not displayed on the Execution page for you to select for a run.

This problem occurs when the server extension is not enabled. Although the extension was enabled when HCL OneTest Server was installed, it might have been disabled subsequently by the server administrator.

Verify that the server extension is enabled and running by running the following command: kubectl get pod -n <test-system>, where <test-system> is the namespace that you created when you installed the server software. The command displays the server extensions that are running.

If the server extension that you want is not listed, the extension is not enabled. Contact the server administrator to enable the server extension. See the example after this row for a way to check for a specific extension.
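
For example, you can pipe the output through grep to look for a specific extension (a sketch; the pod name fragment postman is an assumption, and actual pod names depend on your installation):

kubectl get pod -n <test-system> | grep -i postman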

Table 5. Troubleshooting issues: test or stub runs

Problem

Cause

Solution

You encounter any of the following issues:
  • When many tests are run simultaneously on the default cluster location, you observe the following issues:
    • Out of memory errors.
    • The test runs are slow, with high CPU usage.
    • The Kubernetes pods are evicted.
  • When you run an AFT suite that contains multiple Web UI tests, you observe the following issues:
    • An error stating that the browser might not be installed or that the browser version is unsupported.
    • An error stating multiple random time-outs or an internal error.
  • When you run VU schedules that contain multiple Web UI tests, a large number of virtual users, or a combination of Web UI, performance, and API tests, you observe the following issue:
    • The test run hangs, and the logs indicate insufficient memory as the cause.
The issue is seen when any of the following events occur:
  • Many tests are run in parallel.
  • The memory that is used by the tests during the test run exceeds the allocated default memory of 1 GB.
  • The default memory of the container is not adequate for the test run.
  • Pods are evicted due to low node memory.

To resolve the problem, you can increase the resource allocation for test runs by using the arguments that are listed in Table 6: Increasing resource allocation.

You can enter arguments in the Additional configuration options field in the Advanced settings panel of the Execute test asset dialog when configuring a test run.

Important: The memory settings that you configure for a test run are persisted for that test whenever you run it. Use this setting judiciously. Configuring an increased memory limit for all tests might affect subsequent test runs or cause other memory issues when tests run simultaneously.

In addition, in the JVM Arguments field under Advanced settings, you can set the maximum heap size for the test run time. For example, adding the JVM argument -Xmx3g sets the maximum heap size to 3 Gi.
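
For example, a combined configuration for a memory-intensive test run might use one entry in the Additional configuration options field and one in the JVM Arguments field (a sketch; the values are illustrative, and the name=value form for configuration options is assumed):

resource.memory.limit=4Gi
-Xmx3g

Keeping the heap size (3g) about 1 Gi below the container memory limit (4Gi) is consistent with the default limit of maximum heap size + 1Gi that is shown in Table 6.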

You are not able to run the Istio stubs from the Execution page.

The fully qualified domain name might not have been specified in the Host field when the stub was created.

Verify that the fully qualified domain name of the server is entered in the Host field when the physical transport for the stub is configured in HCL OneTest API.
You are not able to run Web UI tests on a remote agent that runs on Linux because the browser fails to start.

The browser is prevented from starting when the remote agent runs as root or with sudo.

To resolve this issue, perform the following steps after you have installed the remote agent:
  1. Stop the agent.
  2. Start the agent as the logged-in user by performing the following steps:
    1. Open a terminal on Linux.
    2. Enter the following commands:

      cd /opt/HCL/HCLOneTest/Majordomo/

      sudo ./MDStop.sh

      ./MDStart.sh
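
You can confirm that the agent restarted under your own user account by checking the process owner (a sketch):

ps -ef | grep -i majordomo

The first column of the output must show your user name instead of root.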

Note: You can refer to the Kubernetes documentation for information about the different units that can be used for resources in the Additional configuration options field.
Table 6. Increasing resource allocation

You can increase the resource allocation for test runs by using any of the following arguments:

Requirement

Configuration option name

Default value, if no value is set

Example value

Result of using the example value

Specifying the memory limit of the init container.

init.resource.memory.limit

1024Mi

2048Mi

Increases the memory limit of the init container from the default value to 2048Mi.

Configuring a larger memory request for the init container to avoid pod eviction.

init.resource.memory.request

64Mi

1024Mi

Increases the initial memory request for the init container from the default value to 1024Mi.

Specifying the cpu request for the init container.

init.resource.cpu.request

50m

60m

Increases the cpu request for the init container from the default value to 60m.

Specifying the memory limit of the container used for the test run.
Note: If the memory limit that you set is more than the default limit, you must run the following command to increase the allotted limit to 8 GB, or any other value above the default value of 3 GB, at the time of installing the server or anytime later:
--set execution.template.resources.limits.memory=8Gi
You must be a server administrator to run this command. See the sketch after this table for how the flag fits into a Helm command.

resource.memory.limit

The larger of 3Gi or maximum heap size + 1Gi

4Gi

Changes the memory limit of the main container from the default value to 4Gi.

Specifying the memory request for the container used by the test run.

resource.memory.request

1024Mi

2048Mi

Increases the memory request for the main container from the default value to 2048Mi.

Specifying the cpu request for the main container used by the test run.

resource.cpu.request

50m

70m

Increases the cpu request for the main container from the default value to 70m.
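
The --set execution.template.resources.limits.memory flag that is shown in the table is passed to Helm when the server software is installed or upgraded. A minimal sketch, assuming an existing release named <release-name> installed from the chart <chart> in the namespace <test-system> (substitute the values from your installation):

helm upgrade <release-name> <chart> -n <test-system> --reuse-values \
    --set execution.template.resources.limits.memory=8Gi

The --reuse-values flag keeps all other settings from the current release unchanged.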

Table 7. Troubleshooting issues: test results and reports

Problem

Cause

Solution

You are not able to view the Jaeger traces for the tests you ran.

The cause can be as follows:
  • Jaeger was not pre-installed in Red Hat OpenShift.
  • The Jaeger trace is not supported for the particular test that you ran.
Check for any of the following solutions: