Installing Sametime Meetings with Kubernetes

Before you begin

  • Kubernetes v1.16.0 or later with an ingress controller (see Kubernetes QuickStart for a basic single-node deployment)
  • Helm v3.1.2
  • Sametime Proxy v11.5
  • Sametime Chat v11.5

About this task

Network considerations

Sametime Meetings uses UDP on port 30000 by default. Ensure that the clients you wish to serve have inbound UDP access to this port and that outbound UDP traffic from the deployment is unrestricted. Additionally, Sametime Meetings uses internet-accessible STUN servers to help clients and the server negotiate media paths for exchanging audio, video, and application-sharing data. Public Google STUN servers are configured by default.

The default configuration uses the following servers:

stun.l.google.com:19302
stun1.l.google.com:19302
stun2.l.google.com:19302

These addresses must be reachable from the container. If they are not, users may have trouble joining meetings.
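
One rough way to probe reachability, assuming a shell in a running Meetings pod and a netcat binary present in the image (neither is guaranteed; <pod> is a placeholder for an actual pod name):

kubectl exec -it <pod> -- nc -vzu stun.l.google.com 19302

Note that UDP probes report success unreliably; a failure here is a hint of a blocked path, not proof.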

To change the default STUN servers, see Configuring alternate STUN servers.

Docker/Kubernetes uses internal private network addresses for the deployed services, and applications may also expose network ports directly on the node. Sametime Meetings defines a LoadBalancer service for the HTTP/HTTPS traffic and a NodePort service for the media traffic. To expose these services to the outside world, an ingress controller is required for the HTTP/HTTPS traffic, and the IP address of the node must be accessible for the media traffic.
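
After the deployment completes (step 7 below), a quick check with standard kubectl commands shows how these services are exposed; the exact service names depend on the chart:

kubectl get svc
# expect a LoadBalancer service for HTTP/HTTPS with an external IP, and a
# NodePort service for media on the configured jvbPort (default 30000)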

To deploy Sametime Meetings on Kubernetes

Procedure

  1. Download Sametime_meetings.zip from FlexNet.
  2. Extract the zip file to any directory, either on the Kubernetes master host itself or on a machine that has management access to the Kubernetes cluster.
  3. Change to that directory and load the docker images into your docker registry via the command:
    ./load.sh
    Note: By default, the load script simply extracts the docker images to the local host. When prompted, specify your own docker registry host FQDN; this can be a cloud provider registry or another private registry accessible to all of the nodes. If you do not have your own registry, you must run the load script on each node in the Kubernetes cluster and accept the script defaults.
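    As a sanity check (standard Docker CLI, not part of the Sametime scripts, and assuming the image names contain "sametime" as the default repository name suggests), you can confirm on a node that the images are present:
    docker images | grep sametime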
  4. Configure secrets for the deployment.
    ./generateSecrets.sh
    Enter the Sametime JWT secret from your existing Sametime deployment. In the sametime.ini file, on the community server, the value is defined in the JWT_SECRET configuration item. On the proxy server, it is the value of the <jwtSecret> configuration element in stproxyconfig.xml. The value is base64 encoded in both of those locations. Copy and paste the base64 encoded value here.
    Note: To define a new secret, do not enter any value in the field.

    After executing this command, helm/templates/meetings-secrets.yaml will have secrets unique to this deployment. The sametimeJwt value can be found in the JwtSecret data object and should then be configured in both sametime.ini and stproxyconfig.xml on the Sametime v11.5 Chat and Proxy servers, respectively.
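
    A quick way to locate the generated value, assuming meetings-secrets.yaml follows the standard Kubernetes Secret format (data values are base64-encoded, so the value can be copied as-is into sametime.ini and stproxyconfig.xml):

    grep -i jwt helm/templates/meetings-secrets.yaml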

  5. Create the meeting recordings volume.
    Meeting recordings are stored as MP4 files in a temporary directory on the meeting recorder nodes during the meeting. After the meeting, the recordings are moved to a persistent volume. Allocate a volume accessible to the Kubernetes cluster that is large enough to handle the expected number of meeting recordings, assuming a rate of about 100 MB per hour of meeting.
    Note: By default, recordings are persisted for 3 days, so take that into consideration as well when sizing the volume.

    To create a persistent volume on a self-managed k8s cluster, see Configure persistent storage for single node deployment.

    For a cloud provider, there are various options for creating persistent storage. The end result should be a persistent volume claim established in the Kubernetes cluster with sufficient storage to meet your recording needs. The accessMode of the claim should be ReadWriteOnce. You may use the command below to define a claim with the default storageClassName, which should work with all cloud providers as long as storage is available to your Kubernetes cluster. Make sure to edit the file so that the configured amount of storage is sufficient for your needs:

    kubectl create -f kubernetes/storage/sametime-meetings-pvc.yaml
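
    The shipped kubernetes/storage/sametime-meetings-pvc.yaml is authoritative; as an illustration, a claim matching the defaults described in this procedure (name sametime-meetings, accessMode ReadWriteOnce, default storage class) might look like the following, where the 50Gi size is a placeholder to adjust:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: sametime-meetings
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 50Gi   # placeholder; size for your expected recording volume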
  6. Change to the helm directory and edit the global configuration.
    The file is called values.yaml and should be configured with the specifics of your deployment. The following are some important fields, their descriptions, and default values where applicable (a sample sketch follows the list):
    • serverHostname

      This should be defined as the fully-qualified host name that users will use to access the system in a web browser. Default is meetings.company.com.

    • jwtCookieDomain

      This should be defined as the domain part of the fully-qualified-host-name. It is used for single-sign-on with the Sametime Proxy deployment. For example, if the proxy is webchat.company.com and the meeting server is meetings.company.com then the cookie domain should be company.com so that cookies can be shared between the two deployments. Default is empty, meaning no SSO is configured.

    • sametimeProxyHost

      This is the resolvable name or IP address of the Sametime Proxy v11.5 host. Default is empty.

    • sametimeProxyPort

      This is the port of the Sametime Proxy v11.5 host. Default is 443.

    • idpUrl

      If SAML is used for authentication, this is the IDP URL defined in that configuration. Default is empty.

    • jvbPort

      This is the media port used by the deployment. This defines the Kubernetes NodePort which will be used in the deployment. The value must be in the range of 30000-32767 unless you have specialized the node port range configuration. Default is 30000.

    • privateIp

      This is the network address by which your server will be accessed.

    • numberOfRecorders

      This is the fixed number of recorders and limits the number of meetings which may be recorded at one time for a given static deployment. This value should match the number of virtual sound devices you have configured on the host.

      If you deploy on a cloud provider, this is the default number of recorders which should match your minimum number of nodes in the recorder node group assuming a 1-to-1 configuration of pod-to-node. The number of recorder nodes will grow from this minimum, as needed, up to the maximum size of the recorder node group. Default is 5.

    • recordingsExpireInDays

      This is the number of days a meeting recording remains available for download/playback. Default is 3.

    • recordingsClaim

      This is the name of the persistent volume claim that defines the storage volume for meeting recordings. Default is sametime-meetings.

    • imageRepo

      This is the docker repository where the Sametime Meetings docker images are located. If you use a cloud provider image registry or your own private registry, update this setting to the base name of that registry. Default is sametime-docker-prod.cwp.pnp-hcl.com, which assumes that you have executed the ./load.sh script with its default configuration on each Kubernetes node.
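
    For orientation, a sketch of how these fields might appear in values.yaml; the values shown are placeholders, and the exact key structure (flat versus nested) depends on the shipped chart:

    serverHostname: meetings.company.com
    jwtCookieDomain: company.com
    sametimeProxyHost: webchat.company.com
    sametimeProxyPort: 443
    idpUrl: ""
    jvbPort: 30000
    privateIp: 10.0.0.15          # placeholder node address
    numberOfRecorders: 5
    recordingsExpireInDays: 3
    recordingsClaim: sametime-meetings
    imageRepo: sametime-docker-prod.cwp.pnp-hcl.com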

  7. Deploy the helm chart
    helm install sametime-meetings .
    Note:

    The command assumes you are in the helm directory; the . represents the current directory. Instead of sametime-meetings, you may choose any descriptive name for the deployment. You might also consider deploying the application in a namespace via the -n or --namespace option; you would first need to create the namespace via the kubectl create namespace command, as shown below.
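
    For example, using a hypothetical namespace named sametime:

    kubectl create namespace sametime
    helm install -n sametime sametime-meetings .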

  8. Verify the service.
    It is important to verify that at least 3 parties can join a meeting together and can see and hear each other. Sametime Meetings optimizes a 2-party call so that its media does not pass through the server, so a 2-party test does not exercise the server media path. You can perform this verification on a single machine by opening multiple browser tabs to the same meeting. If any of the clients is unmuted and a speaker is producing sound that the active microphone picks up, you will immediately hear feedback. Hearing feedback when 3 parties are in the same meeting on the same machine is good assurance that there is a media path through the server.
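
    Before testing in browsers, a quick check with standard kubectl commands confirms that the deployment itself is healthy:

    kubectl get pods
    # all Sametime Meetings pods should reach Running status
    # (add -n <namespace> if you deployed into a namespace)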