Performance Tuning

Content Integration Framework provides a way to tune the performance of content event processing for events received from Kafka topics. The relevant configuration properties can be managed in the following file:

ASSET_PICKER_HOME/conf/events/tuning.properties

This file is laid down by the Unica Platform installer with some default settings. The available properties and their significance are listed below. To use a property's default value, do not set the property to blank; instead, comment it out by prefixing it with #.
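For illustration, entries in tuning.properties might look as follows; the property names are taken from this document, the commented line shows how to fall back to a default value, and the values themselves are examples only:

  # Keep at least 2 consumer threads active (example value)
  kafka.consumer.threads.min=2
  # Commented out, so the default priority (5) applies
  #kafka.consumer.threads.priority=5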

  • Kafka consumer tuning – These properties allow tuning message consumption from Kafka topics.
    Property Description
    kafka.consumer.threads.min
    Minimum number of Kafka consumer threads that are always kept active. Default is 1, if not specified.
    Note: An additional property, {service-name}.kafka.max-consumers, must be set in Platform configuration for each individual service (event source) to determine the concurrency limit for that event source. {service-name}.kafka.max-consumers should ideally be set to the total number of partitions of the corresponding topics. The number specified for kafka.consumer.threads.min must be able to cater to the needs of all event sources configured through this additional Platform property.
    kafka.consumer.threads.max
    Maximum Kafka consumer thread count. If the application needs more than kafka.consumer.threads.min threads, additional threads are created; this property sets the upper limit on the overall thread count. Default is 32767, if not specified.
    kafka.consumer.threads.priority
    Kafka consumer thread priority. The application can have many threads running to handle various tasks, such as requests coming from the user interface and messages coming from Kafka topics. Thread priority determines the precedence of one thread over another when both need the CPU at the same time. This property determines the priority of overall event receipt. Default is 5, if not specified.
    • 1 - Lowest priority
    • 10 - Highest priority
    kafka.consumer.threads.max-idle-seconds
    Maximum idle time after which excess threads above kafka.consumer.threads.min are disposed of, if they are no longer needed. Default is 60 seconds.
    kafka.consumer.max-poll-interval-ms

    Consumer poll interval in milliseconds. Default is 300000ms if not specified. This value is used for standard Kafka consumer configuration - max.poll.interval.ms.

    This setting can be overridden for an individual event consumer service using the {service-name}.kafka.max.poll.interval.ms additional parameter in Platform configuration.

    kafka.consumer.heartbeat-interval-ms

    Consumer thread heartbeat interval in milliseconds. This value is used for standard Kafka consumer configuration - heartbeat.interval.ms. Default is 3000ms, if not specified.

    This setting can be overridden for an individual event consumer service using the {service-name}.kafka.heartbeat.interval.ms additional parameter in Platform configuration.

    kafka.consumer.session-timeout-ms

    Consumer session time out interval in milliseconds. This value is used for standard Kafka consumer configuration - session.timeout.ms. Default is 10000ms, if not specified.

    This setting can be overridden for an individual event consumer service using the {service-name}.kafka.session.timeout.ms additional parameter in Platform configuration.

    Note: Refer https://kafka.apache.org/documentation/#consumerconfigs for standard Kafka consumer configurations.
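    As an illustration, the consumer tuning properties described above could be combined as follows; the values are examples only, and the service name used in the Platform-side additional parameters is hypothetical:

      # tuning.properties - Kafka consumer tuning (example values)
      kafka.consumer.threads.min=4
      kafka.consumer.threads.max=64
      kafka.consumer.threads.priority=5
      kafka.consumer.threads.max-idle-seconds=60
      kafka.consumer.max-poll-interval-ms=300000
      kafka.consumer.heartbeat-interval-ms=3000
      kafka.consumer.session-timeout-ms=10000

      # Platform configuration - additional parameters for a hypothetical event source "my-event-service"
      my-event-service.kafka.max-consumers=6
      my-event-service.kafka.max.poll.interval.ms=600000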
  • Event processor tuning – These properties allow tuning processing of events received via Webhook as well as from Kafka topics.
    Property Description
    error.kafka.topic

    Name of the Kafka topic to which event processing errors can be sent, in addition to being logged in the application logs. This property is optional; no default topic name is assumed if it is not specified.

    This value can be overridden in Platform configuration for the services listening to a Kafka topic for incoming events, using the {service-name}.error.kafka.topic additional parameter.
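    For illustration, an error topic could be configured with an entry such as the following in tuning.properties (the topic name shown is hypothetical):

      # Send event processing errors to this topic, in addition to application logs
      error.kafka.topic=content-events-errors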

  • Kafka producer tuning – These properties allow tuning message publishing to Kafka topics.
    Property Description
    kafka.producer.batch-size
    Message producer batch size in bytes. Default is 16384 (16 KB), if not specified. The value of this property is used for the standard Kafka producer configuration - batch.size. This setting can be overridden for an individual message producer service using the {service-name}.kafka.batch.size additional parameter in Platform configuration.
    kafka.producer.linger-ms
    Number of milliseconds to wait to gather kafka.producer.batch-size bytes of data before sending the complete batch. Default is 0, in which case the producer does not wait for a complete batch and sends messages immediately. The value of this property is used for the standard Kafka producer configuration - linger.ms. This setting can be overridden for an individual message producer service using the {service-name}.kafka.linger.ms additional parameter in Platform configuration.
    kafka.producer.compression-type
    Compression type for all data produced by the producer to any output topic. Default is none (no compression), if not specified or commented out.
    Valid values are: none, gzip, snappy, lz4, and zstd.
    The value of this property is used for the standard Kafka producer configuration - compression.type. This setting can be overridden for an individual message producer service using the {service-name}.kafka.compression.type additional parameter in Platform configuration.
    Note: Refer https://kafka.apache.org/documentation/#producerconfigs for standard Kafka producer configurations.
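    As an illustration, the producer tuning properties described above could be set as follows in tuning.properties; the values are examples only, not recommendations:

      # tuning.properties - Kafka producer tuning (example values)
      kafka.producer.batch-size=32768
      kafka.producer.linger-ms=10
      kafka.producer.compression-type=lz4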
Kafka transaction configuration - Transactions help prevent duplicate messages in target topics. Set the kafka.transactions.enabled property to true to enable transactions. The following prerequisites and limitations must be considered before turning transactions on:
  • The consumer of the target topic must read committed messages only, by setting its isolation level to “read_committed”. For example, when Content Integration Framework is configured to use transactions for publishing messages to the “output” topic, the subsequent Kafka consumer of the “output” topic must use the “read_committed” isolation level.
  • All topics must belong to the same Kafka cluster if transactions are used. Therefore, when the kafka.transactions.enabled flag is set to true, Content Integration Framework uses the global Kafka configuration made under the top-level Content Integration node in Platform configuration. Any other system-level Kafka configuration is ignored.

    kafka.transactions.enabled is set to false by default.
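
For illustration, enabling transactions and the matching setting on the downstream consumer side might look as follows; the consumer line uses the standard Kafka consumer configuration key isolation.level, and where it is set depends on that consumer application:

  # tuning.properties - enable Kafka transactions for message publishing
  kafka.transactions.enabled=true

  # Configuration of the downstream consumer reading the target topic
  isolation.level=read_committed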