Installing Sametime Meetings with Kubernetes

Before you begin

  • Kubernetes v1.16.0 with an ingress controller. See Kubernetes QuickStart for a basic single-node deployment.
  • Helm v3.1.2
  • Sametime Proxy v11.6
  • Sametime Chat v11.6
Note: The Helm charts and templates provided were tested with Kubernetes v1.16. Later versions of Kubernetes might not be compatible.
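
To confirm the prerequisite versions from the machine you will deploy from, you can run the standard version commands; the output should report a v1.16.x Kubernetes server and Helm v3.1.x:

  kubectl version --short
  helm version --short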

About this task

Network considerations

Sametime Meetings uses UDP on port 30000 by default. Ensure that the clients you intend to serve have inbound UDP access to this port and that outbound UDP traffic from the deployment is unrestricted. Additionally, Sametime Meetings uses internet-accessible STUN servers to help clients and the server negotiate media paths for the exchange of audio, video, and application-sharing data. Public Google STUN servers are configured by default.

The default servers are:

stun.l.google.com:19302
stun1.l.google.com:19302
stun2.l.google.com:19302

These addresses must be reachable by the container. If they are not, there may be issues joining meetings.
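
If you want a quick check that the STUN hosts are at least resolvable from inside the cluster, a throwaway pod can be used. This is only a sketch: it confirms DNS resolution, not the UDP media path itself, and the pod name and busybox image are arbitrary choices.

kubectl run stun-check --rm -it --restart=Never --image=busybox -- nslookup stun.l.google.com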

To change the default STUN servers, see Configuring alternate STUN servers.

Docker and Kubernetes use internal private network addresses for the deployed services. Applications may also expose network ports directly on the node. Sametime Meetings defines a LoadBalancer service for the HTTP/HTTPS traffic and a NodePort service for the media traffic. To expose these services to the outside world, an ingress controller is required for the HTTP/HTTPS traffic, and the IP address of the node must be accessible for the media traffic.
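
After the helm chart is deployed (step 6 below), you can confirm that both service types exist and note the node port allocated for media traffic. The service names depend on the chart; the commands themselves are standard Kubernetes queries:

kubectl get svc                     # lists service types (LoadBalancer, NodePort) and their ports
kubectl get svc | grep NodePort     # shows the node port assigned for media traffic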

To deploy Sametime Meetings on Kubernetes

Procedure

  1. Download Sametime_meetings.zip from Flexnet.
  2. Extract the zip file to any directory, either on the Kubernetes master host itself or on a machine that has management access to the Kubernetes cluster.
  3. Change to that directory and load the Docker images into your Docker registry with the following command:
    ./load.sh
    Note: By default, the load script simply extracts the Docker images to the local host. When prompted, specify the FQDN of your own Docker registry host. This may be a cloud provider registry or another private registry accessible to all of the nodes. If you do not have your own registry, you must run the load script on each node in the Kubernetes cluster and accept the script defaults.
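
    As a quick sanity check after running the script with its defaults, you can list the loaded images on a node. The repository prefix below matches the default imageRepo value described in step 5 and will differ if you pushed the images to your own registry:

    docker images | grep sametime-docker-prod.cwp.pnp-hcl.com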

    1. Configure secrets for the deployment.
      ./prepareDeployment.sh
      When prompted, enter the Sametime JWT secret from your existing Sametime deployment. In the sametime.ini file, on the community server, the value is defined in the JWT_SECRET configuration item. On the proxy server, it is the value of the <jwtSecret> configuration element in stproxyconfig.xml. The value is base64 encoded in both of those locations. Copy and paste the base64 encoded value here.
      Note: To generate a new secret, do not enter a value; leave the field empty.
    2. After executing this command, helm/templates/meetings-secrets.yaml will have secrets unique to this deployment. The sametimeJwt value can be found in the JwtSecret data object and should then be configured in both sametime.ini and stproxyconfig.xml on the Sametime v11.6 Chat and Proxy servers, respectively.
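
      If you let the script generate a new secret, you can pull the value out of the rendered template to copy into sametime.ini (JWT_SECRET) and stproxyconfig.xml (<jwtSecret>). This is only a sketch; the exact layout of the data object inside meetings-secrets.yaml may differ:

      grep sametimeJwt helm/templates/meetings-secrets.yaml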
  4. Create the meeting recordings volume.
    Meeting recordings are stored as MP4 files in a temporary directory on the meeting recorder nodes during the meeting. After the meeting, the recordings are moved to a persistent volume. Allocate a volume accessible to the Kubernetes cluster that is large enough to handle the expected number of meeting recordings, assuming a rate of about 100 MB per hour of meeting.
    Note: By default, recordings persist for three days, so keep that in mind when sizing the volume.
    • To create a persistent volume on a self-managed k8s cluster, see Configure persistent storage for single node deployment.
    • For a cloud provider, there are various options for creating persistent storage. The end result is that you should have a persistent volume claim established in the Kubernetes cluster with sufficient storage to meet your recording needs. The accessMode of the claim should be ReadWriteOnce. You can use the command below to define a claim with the default storageClassName, which should work with any cloud provider as long as storage is available to your Kubernetes cluster. Make sure to edit the file so that the amount of storage is sufficient for your needs:
      kubectl create -f kubernetes/storage/sametime-meetings-pvc.yaml
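
      As a rough sizing example: 20 hours of recorded meetings per day at about 100 MB per hour, retained for the default 3 days, is roughly 6 GB, so a claim of 10-20 GB leaves comfortable headroom. The claim itself might look like the sketch below; the claim name matches the default recordingsClaim value from step 5, and the storage size is only a placeholder:

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: sametime-meetings      # must match the recordingsClaim value in values.yaml
      spec:
        accessModes:
          - ReadWriteOnce            # access mode required for the recordings volume
        resources:
          requests:
            storage: 20Gi            # size for your expected recording load (~100 MB per meeting hour)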
  5. Change to the helm directory and edit the global configuration.
    Note: These values are case sensitive and must be entered in lower case.
    Configure the values.yaml file with specifics for your deployment. The following are some important fields, their description, and default values where applicable:
    • serverHostname

      This is the fully qualified host name of the system that users will access with a web browser. Default is meetings.company.com.

    • jwtCookieDomain

      This is the domain part of the fully qualified host name. It is used for single sign-on with the Sametime Proxy deployment. For example, if the proxy is webchat.company.com and the meeting server is meetings.company.com, then the cookie domain should be company.com so that cookies can be shared between the two deployments. Default is empty, meaning no SSO is configured.

    • sametimeProxyHost

      This is the resolvable name or IP address of the Sametime Proxy v11.6 host. Default is empty.

    • sametimeProxyPort

      This is the port of the Sametime Proxy v11.6 host. Default is 443.

    • idpUrl

      If SAML is used for authentication, this is the IDP URL defined in that configuration. Default is empty.

    • jvbPort

      This is the media port used by the deployment and defines the Kubernetes NodePort that will be exposed. The value must be in the range 30000-32767 unless you have customized the node port range configuration. Default is 30000.

    • privateIp

      This is the network address by which your server will be accessed.

    • numberOfRecorders

      This is the fixed number of recorders and limits the number of meetings which may be recorded at one time for a given static deployment. This value should match the number of virtual sound devices you have. For more information, see Configuring the recorder on Docker and Kubernetes instances.

      If you deploy on a cloud provider, this is the default number of recorders which should match your minimum number of nodes in the recorder node group assuming a 1-to-1 configuration of pod-to-node. The number of recorder nodes will grow from this minimum, as needed, up to the maximum size of the recorder node group. Default is 5.

    • recordingsExpireInDays

      This is the number of days a meeting recording remains available for download and playback. Default is 3.

    • recordingsClaim

      This is the name of the persistent volume claim that defines the storage volume for meeting recordings. Default is sametime-meetings.

    • imageRepo

      This is the Docker repository where the Sametime Meetings Docker images are located. If you use a cloud provider image registry or your own private registry, update this setting to the base name of that registry. Default is sametime-docker-prod.cwp.pnp-hcl.com, which assumes that you have executed the ./load.sh script with its default configuration on each Kubernetes node.
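
    Taken together, a minimal values.yaml for an on-premises deployment might look like the sketch below. The host names, cookie domain, address, and registry are placeholders, and any value not shown keeps its chart default; check the shipped values.yaml for the exact structure:

      serverHostname: meetings.example.com
      jwtCookieDomain: example.com
      sametimeProxyHost: webchat.example.com
      sametimeProxyPort: 443
      jvbPort: 30000
      privateIp: 10.0.0.10
      numberOfRecorders: 5
      recordingsExpireInDays: 3
      recordingsClaim: sametime-meetings
      imageRepo: registry.example.com/sametime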

  6. Deploy the Helm chart.
    helm install sametime-meetings .
    Note:
    • The command assumes you are in the helm directory. The . represents the current directory.
    • Instead of sametime-meetings, you can choose any descriptive name for the deployment. You can also deploy the application in a namespace with the -n or --namespace option. First create the namespace with kubectl create namespace, as shown in the example below.
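    For example, a namespaced install, where the namespace name sametime is your own choice:
      kubectl create namespace sametime
      helm install sametime-meetings . -n sametime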
  7. Verify the service.
    Verify that at least three people can join a meeting and see and hear each other. (Two-person meetings are optimized to avoid a media path through the server and so don't provide a valid test.)

    You can verify the service from one machine:

    1. Log in to the meeting from one machine as three separate users, each in their own browser tab.
    2. Unmute any user.
    3. If you can hear the user, the service is functioning.
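
    If users cannot join, a quick first check is to confirm that all pods in the deployment are running and that the Helm release is deployed; the -n flag applies only if you installed into a namespace:

      kubectl get pods -n sametime           # all pods should report Running or Completed
      helm status sametime-meetings -n sametime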