Component Pack installation roadmap and requirements

Review the installation process plus the required hardware and software for the HCL Connections Component Pack.

Installation roadmap

Important: These installation instructions are intended for version 6.0.0.8 or later. Do not use them to install an older release.
Your installation roadmap depends on the type of installation that you are performing.

Installation and component overview

The orchestration of the components is handled by Kubernetes, a technology for managing distributed application deployments.

For information on the components and architecture of the Component Pack for Connections, see Component Pack overview.

General requirements

The following are general requirements.

  • Note: Microsoft Internet Explorer 11 is not supported for Orient Me.

  • These servers must all be on the same network as the dedicated IBM WebSphere® Application Server Deployment Manager used by Connections. If they are on a different network from the Deployment Manager, then you must perform some additional configuration steps to enable communication between the Kubernetes servers and the Deployment Manager.

  • Only the following combinations of operating system and Docker are supported:
    • Docker Enterprise Edition:
      • RHEL 7.6 and later
      • CentOS 7.6 and later
    • Docker Community Edition:
      • CentOS 7.6 and later
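
One way to confirm a supported combination (a minimal sketch; adjust to your distribution and Docker packaging) is to check the operating system release and the Docker version on each server:

    cat /etc/os-release                               # distribution name and version, for example CentOS Linux 7.6
    docker version --format '{{.Server.Version}}'     # Docker Engine version
    docker info --format '{{.OperatingSystem}}'       # operating system as reported by the Docker daemon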

Requirements for an "all on one machine" proof-of-concept deployment

You can install the Component Pack entirely on one machine (bare metal or a virtual machine) as a proof of concept (POC). The following list specifies the minimum system requirements for an all-services-on-one-server POC installation.

Attention: The proof-of-concept system requirements are not recommended for production systems.

All services on one server:

  • 1 - Master + Worker: 16 CPU, 2.x GHz, 64GB memory, and at least 100GB disk.

    Add 50GB+ for Device Mapper block storage.

  • Storage: Persistent volumes for Elasticsearch, Customizer, MongoDB, Zookeeper, and Solr indexes, 100GB disk.

    For POC, the persistent volumes reside on the same server as the Master.

  • Device Mapper: The Device Mapper (devicemapper) storage driver is required.
    • Do not use overlay.
    • A dedicated block device (direct-lvm) of 50GB+ is recommended; loop-lvm can be used for POC environments.
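
For a throwaway POC node only, a minimal sketch of selecting the devicemapper storage driver in its default loop-lvm mode is shown below; it assumes a systemd-based RHEL or CentOS host and that /etc/docker/daemon.json does not already contain other settings:

    # Sketch only: devicemapper in loop-lvm mode is not supported for production.
    # Note: changing the storage driver makes existing containers and images inaccessible.
    sudo tee /etc/docker/daemon.json <<'EOF'
    {
      "storage-driver": "devicemapper"
    }
    EOF
    sudo systemctl restart docker
    docker info | grep -A 2 'Storage Driver'   # confirm that devicemapper is active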

Requirements for a standard (non-high-availability) deployment

The following lists specify the requirements for a standard Component Pack deployment that includes all services but does not provide high-availability features.

The minimum requirement for installing all services for a standard non-HA deployment is three servers:

  • The following base server is required for the Kubernetes master:

    1 - Master: 4 CPU, 2.x GHz, 16GB memory, and at least 100GB disk.

    Add 50GB+ for Device Mapper block storage.

  • The following base servers are required for Component Pack services and infrastructure:
    • 1 - Generic Worker: 8 CPU, 2.x GHz, 24GB memory, and 100GB disk.

      Add 50GB+ for Device Mapper block storage.

    • 1 - Infrastructure Worker: 8 CPU, 2.x GHz, 24GB memory, and 100GB disk.

      Add 50GB+ for Device Mapper block storage.

  • Storage: Persistent volumes for Elasticsearch, Customizer, MongoDB, Zookeeper, and Solr indexes, 100GB disk.

  • Device Mapper: The Device Mapper (devicemapper) storage driver is required.
    • Do not use overlay.
    • A dedicated block device (direct-lvm) of 50GB+ is required.

      An unconfigured block device (for example, /dev/xvdf) must be set up on every master and worker node so that you can configure Docker with devicemapper direct-lvm mode.
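
A minimal sketch of that Docker configuration follows, assuming the unconfigured block device is /dev/xvdf and that your Docker release supports automatic direct-lvm setup through daemon.json (verify the storage options against the documentation for your Docker version):

    # Sketch only: let Docker configure devicemapper in direct-lvm mode on /dev/xvdf.
    sudo tee /etc/docker/daemon.json <<'EOF'
    {
      "storage-driver": "devicemapper",
      "storage-opts": [
        "dm.directlvm_device=/dev/xvdf",
        "dm.thinp_percent=95",
        "dm.thinp_metapercent=1",
        "dm.thinp_autoextend_threshold=80",
        "dm.thinp_autoextend_percent=20"
      ]
    }
    EOF
    sudo systemctl restart docker
    docker info | grep -E 'Storage Driver|Pool Name'   # confirm that direct-lvm is in use

Repeat this configuration on every master and worker node that has a dedicated block device.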

Requirements for a high-availability (HA) deployment

The following lists specify the requirements for a Component Pack deployment that includes all services and provides high-availability features.

  • The following base servers are required for HA (all services):

    3 - Masters: 4 CPU, 2.x GHz, 16GB memory, and at least 100GB disk.

    Add 50GB+ per master for Device Mapper block storage.

  • The following base servers are required for Component Pack services and infrastructure:

    3 - Generic Workers: 6 CPU, 2.x GHz, 24GB memory, and 100GB disk.

    Add 50GB+ per worker for Device Mapper block storage.

  • 3 - Infrastructure Workers: 6 CPU, 2.x GHz, 24GB memory, and 100GB disk.

    Add 50GB+ per worker for Device Mapper block storage.

  • The following base server is required for hosting persistent storage:

    Storage: For Elasticsearch, Customizer, MongoDB, Zookeeper, and Solr indexes - 150GB disk.

  • Device Mapper: The Device Mapper (devicemapper) storage driver is required.
    • Do not use overlay.
    • A dedicated block device (direct-lvm) of 50GB+ is required.

      An unconfigured block device (for example, /dev/xvdf) must be set up on every master and worker node so that you can configure Docker with devicemapper direct-lvm mode.

  • Load Balancer: There are many configurations for load balancers, such as HAProxy and NGINX. Your cluster requirements will determine the appropriate configuration for your deployment.
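
As one illustration only, HAProxy can provide TCP load balancing for the Kubernetes API server across the three masters; the addresses, host names, and port 6443 in this sketch are assumptions to adapt to your environment:

    # Sketch only: append a TCP frontend/backend for the Kubernetes API server to haproxy.cfg.
    sudo tee -a /etc/haproxy/haproxy.cfg <<'EOF'
    frontend k8s-api
        bind *:6443
        mode tcp
        default_backend k8s-masters

    backend k8s-masters
        mode tcp
        balance roundrobin
        server master1 192.0.2.11:6443 check
        server master2 192.0.2.12:6443 check
        server master3 192.0.2.13:6443 check
    EOF
    sudo systemctl restart haproxy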

Additional requirements for Customizer

If you are deploying Connections Customizer, then depending on the approach you choose, you might need an additional stand-alone server to function as the reverse proxy server:
  • 1 - Reverse proxy for Customizer: 4 CPU, 2.x GHz, 4GB memory, and 100GB disk.
For more information, see Configuring the NGINX proxy server for Customizer.
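
The following is a rough NGINX sketch only; the supported steps are in that topic. The worker address, host name, certificate paths, and the Customizer (mw-proxy) NodePort shown here are assumptions to confirm against your own deployment:

    # Sketch only: forward Connections traffic through the Customizer proxy in the Kubernetes cluster.
    sudo tee /etc/nginx/conf.d/customizer.conf <<'EOF'
    upstream customizer {
        server 192.0.2.21:30301;    # a Kubernetes worker node and the NodePort exposing the Customizer proxy
    }
    server {
        listen 443 ssl;
        server_name connections.example.com;
        ssl_certificate     /etc/nginx/certs/connections.crt;
        ssl_certificate_key /etc/nginx/certs/connections.key;
        location / {
            proxy_pass http://customizer;
            proxy_set_header Host $host;
        }
    }
    EOF
    sudo nginx -t && sudo systemctl reload nginx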

Security requirements

  • It is assumed that your network is secure.
  • Ensure that Connections and Component Pack run in a trusted network environment and limit the interfaces on which they listen for incoming connections. Allow only trusted clients to access the network interfaces and ports used in the deployment.

Firewall requirements

Kubernetes: Required ports must be open on the servers, as described in "Checking the ports", in the Kubernetes installation documentation.

Pod networking plugin: The pod network plugin that you use might also require certain ports to be open. The reference implementation uses Calico; if you use it, open the ports described in "Network requirements" in the Calico system requirements documentation.
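
For illustration on a firewalld-based RHEL or CentOS server, the commands below open the ports most commonly listed for a Kubernetes control-plane node plus the Calico BGP port; treat this as a sketch and confirm the complete list against the Kubernetes and Calico documents referenced above:

    # Sketch only: typical master-node ports (adjust per the Kubernetes and Calico port lists).
    sudo firewall-cmd --permanent --add-port=6443/tcp        # Kubernetes API server
    sudo firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd client and peer
    sudo firewall-cmd --permanent --add-port=10250/tcp       # kubelet
    sudo firewall-cmd --permanent --add-port=179/tcp         # Calico BGP
    sudo firewall-cmd --reload

    # Worker nodes also need the NodePort service range.
    sudo firewall-cmd --permanent --add-port=30000-32767/tcp
    sudo firewall-cmd --reload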

Configure Redis: The ports that your Connections deployment runs on (default 80/443) must also be open in the firewall on all worker machines so that Redis can be configured. They can be closed again after the Redis configuration is complete.
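
For example, on a firewalld-based worker node the default Connections ports could be opened temporarily and removed again once Redis is configured (a sketch assuming the default 80/443):

    # Open the Connections ports on each worker while Redis is being configured.
    sudo firewall-cmd --permanent --add-port=80/tcp --add-port=443/tcp
    sudo firewall-cmd --reload

    # Close them again after the Redis configuration is complete.
    sudo firewall-cmd --permanent --remove-port=80/tcp --remove-port=443/tcp
    sudo firewall-cmd --reload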