Taking a snapshot of data for export to a table, a file, or Kafka

Use the Snapshot process to capture data for export to a table, a file, or Kafka. Select the source of the values that you want to capture, and define the output table, file, or Kafka instance and topic name for those values.


  1. Open a flowchart for editing.
  2. Drag the Snapshot process (the camera icon) from the palette to your flowchart.
  3. Connect one or more configured processes to provide input to the Snapshot process.
    Note: All of the cells that you select as input must have the same audience level.
  4. Double-click the Snapshot process in the flowchart workspace.

    The Snapshot process configuration dialog box opens and the Snapshot tab is open by default.

  5. Use the Snapshot tab to specify how to capture data.
    1. Use the Input list to specify which cells to use as the data source for the snapshot.
      Note: If the Snapshot process is not connected to a process that provides output cells, there are no cells to select from in the Input list. Also, the Multiple cells option is available only if the input process generates multiple cells.
    2. Use the Export to list to select a table, a file, or Kafka for the Snapshot output.
      Note: You can test the Snapshot process by running the process with output exported to a temporary file that you can review.
      • You can select an existing table from the list.
      • If the table that you want to use is not in the list, or if you want to output to an unmapped table, select Database table. Use the Specify database table dialog box to specify the table and database name. User variables are supported in the table name that you specify here.
      • You can select File to open the Specify output file dialog, so you can define how to output to a Flat file with data dictionary, Flat file based on existing data dictionary, or Delimited file.
      • If you want to create a new user table, select New mapped table from the Export to list. For instructions on mapping tables, see the Unica Campaign Administrator's Guide.
      • Choose Extract table if you want to export to an extract table, which has a UAC_EX prefix. An extract table persists so that users can continue to access it and perform operations such as profiling its fields.
      • Choose Kafka if you want to export data to a configured Kafka server. In the Kafka topic window, you can choose the Kafka instance name and use the default topic name or specify your own; you can change the default topic name at configuration time. To get Kafka server information, the Snapshot process reads the nodes under Settings > Campaign > partitions > partition1 > Kafka > [KafkaInstanceNode] (except the Journey node). A topic is a category or feed name to which the Snapshot process box publishes the user-selected data. Data is exported in key-value format. The key is null so that data is distributed across all partitions of the selected Kafka topic on the selected Kafka instance.

        The value contains comma-separated pairs of exported field names and their values. For more details, see Kafka Campaign Nodes Configuration details.

    3. Select an option to specify how updates to the output file or table are handled:
      • Append to existing data. Add the new information to the end of the table or file. If you select this option for a delimited file, labels are not exported as the first row. This is a best practice for database tables.
      • Replace all records. Remove any existing data from the table or file, and replace it with the new information.
      • Update records. Available only if you are exporting to a table. All fields that are specified for the snapshot are updated with the values from the current run of the process.
      • Create new file. Available only if you are exporting to a file. This option is selected by default if you are exporting to a file. Each time that you run the process, a new file is created with an underscore and digit appended to the file name (file_1, file_2, and so on).
      Note: The above options are not applicable if you choose to export data to a Kafka server.
  6. To specify which fields to snapshot, use the controls to move selected fields from the Candidate fields list to the Fields to snapshot list. You can select multiple fields with Ctrl+Click or select a range of fields with Shift+Click.
    Note: To view the values in a field, select a field in the Candidate fields list and click Profile.
    • If you selected a table as the snapshot destination, the fields in that table appear in the Candidate fields list. You can automatically find matching fields by clicking the Match button. Fields with exact matches for the table field names are automatically added to the Fields to snapshot list. If there are multiple matching fields, the first match is taken. You can manually modify the pairings by using Remove << or Add >>.
    • To include generated fields, expand the list of Unica Campaign generated fields in the Candidate fields list, select a field, then use the controls to move the field to the Fields to snapshot list.
    • To work with derived fields, click the Derived fields button.
    • You can reorder the Fields to snapshot by selecting a field and clicking Up 1 or Down 1 to move it up or down in the list.
  7. To skip records with duplicate IDs or to specify the order in which records are output, click More to open the Advanced settings dialog.
    1. To remove duplicate IDs within the same input cell, select Skip records with Duplicate IDs. Then choose the criteria to determine which record to retain if duplicate IDs are found.
      For example, select MaxOf and Household_Income to specify that when duplicate IDs are found, Unica Campaign exports only the ID with the highest household income.
      Note: This option removes duplicates only within the same input cell. Your snapshot data can still contain duplicate IDs if the same ID appears in multiple input cells. To remove all duplicate IDs, use a Merge or Segment process upstream of the Snapshot process to purge duplicate IDs or create mutually exclusive segments.
    2. To sort the snapshot output, select the Order by check box, then select the field to sort by and the sort order.
      For example, select Last_Name and Ascending to sort IDs by surname in ascending order.
    3. Click OK.
  8. Use the General tab to set the following options:
    • Process name: Assign a descriptive name. The process name is used as the box label on the flowchart. It is also used in various dialogs and reports to identify the process.
    • Note: Use the Note field to explain the purpose or result of the process. The contents of this field appear when you rest your cursor over the process box in a flowchart.
  9. Click OK to save and close the configuration.
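The key-value format that the Kafka export option produces can be sketched as follows. This is an illustrative sketch, not Unica Campaign code: the record and field names are hypothetical, and the exact separator within each field/value pair is assumed to be a comma, since the source describes the value only as comma-separated pairs of field name and value.

```python
# Sketch (assumption, not product code) of the Kafka record format described
# above: a null key, and a value holding comma-separated field/value pairs.

def snapshot_kafka_value(record: dict) -> str:
    """Render one snapshot record as comma-separated field/value pairs."""
    return ",".join(f"{field},{value}" for field, value in record.items())

# Hypothetical exported record.
record = {"CustomerID": "1001", "Last_Name": "Smith", "Household_Income": "85000"}

key = None  # the key is null so data is distributed across all topic partitions
value = snapshot_kafka_value(record)
# value == "CustomerID,1001,Last_Name,Smith,Household_Income,85000"
```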


The process is now configured. You can test run the process to verify that it returns the results you expect.
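The duplicate-ID handling from the Advanced settings dialog (for example, MaxOf with Household_Income) can be sketched as follows. This is a hedged illustration of the described behavior, not the product's implementation; the records and field names are hypothetical.

```python
# Sketch of "Skip records with duplicate IDs" using a MaxOf criterion: for each
# ID, only the record with the maximum value of the criterion field is kept.

def skip_duplicate_ids(records, id_field, criterion_field):
    """Keep, per ID, the record whose criterion_field value is highest."""
    kept = {}
    for rec in records:
        rid = rec[id_field]
        if rid not in kept or rec[criterion_field] > kept[rid][criterion_field]:
            kept[rid] = rec
    return list(kept.values())

# Hypothetical input cell with a duplicate CustomerID.
records = [
    {"CustomerID": 1, "Household_Income": 52000},
    {"CustomerID": 1, "Household_Income": 98000},  # duplicate ID, higher income: kept
    {"CustomerID": 2, "Household_Income": 61000},
]
```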

Kafka Campaign Nodes Configuration details

A Kafka instance can be added, to produce data on it, by using the template provided at the configuration path Affinium|Campaign|partitions|partition[n]|Kafka|(KafkaTemplate). The following table explains the parameters to set for a Kafka instance.

KafkaBrokerURL The Kafka servers used to export data. You can define more than one Kafka server, separated by commas. Example: IP-0A862D46:9092
CommunicationMechanism Specify the connection mechanism to use to connect to the Kafka server.

Possible values:

SASL_PLAINTEXT_SSL - Use this to connect to Kafka with username/password and SSL enabled.

NO_SASL_PLAINTEXT_SSL - Use this to connect to Kafka with no authentication and no SSL.

SASL_PLAINTEXT - Use this to connect to Kafka with username and password only.

SSL - Use this to connect to Kafka without username/password but with SSL.


The SASL mechanism used for client connections.

Possible values:

PLAIN: This is the default value. Use it if the client connection does not use Kerberos authentication.

GSSAPI: Use it if the client connection uses Kerberos authentication.
KafkaKeyFile Specify the client key file if the connection mechanism uses SSL. Example: /opt/Unica/Kafkakeys/client_key.pem
KafkaCertificateFile Specify the client certificate file if the connection mechanism uses SSL. Example: /opt/Unica/Kafkakeys/client_cert.pem
CertificateAuthorityFile The signed certificate of the Kafka server; it is required when the connection mechanism uses SSL. Example: /opt/Unica/Kafkakeys/ca-cert
UserForKafkaDataSource The Marketing Platform user that contains the data source credentials for Kafka when connecting with a username and password.
KafkaDataSource The data source containing the Kafka user credentials.
TopicName The Journey-designated topic for Campaign to push data to Journey. Required value: CAMPAIGN_PB. Do not change this value; doing so would send data to a Kafka topic that is not used by Journey.
NumberOfPartitions The number of partitions that Kafka uses to hold the exported data.
NumberOfReplicas The number of servers across which each partition is replicated, for fault tolerance.
RetentionPeriodInSeconds The maximum time that Kafka retains messages exported to the topic. When the retention period is over, Kafka clears all eligible exported messages.
SslKeyPasswordDataSource If the KafkaKeyFile is password protected, create a separate data source that includes that password, and specify that data source name as the value of this field. The user name is not used, so it can be anything.

When using GSSAPI (that is, Kerberos) authentication, configure a service name that matches the primary name of the brokers configured in the broker JAAS file.

Example: kafka
SaslKerberosKeytabPath When using GSSAPI (that is, Kerberos) authentication, set the path of the keytab file created for the Kafka client.
SaslKerberosPrincipal When using GSSAPI (that is, Kerberos) authentication, set the Kerberos principal created for the Kafka client.
Note: For certificate generation steps, refer to the section "Steps to generate client certificates to connect to Kafka" in the Campaign Administrator Guide.
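For orientation, the CommunicationMechanism values above can be mapped to the equivalent settings of a standard librdkafka-based client (such as confluent-kafka). This mapping is an illustrative assumption, not Unica Campaign source; the broker address and file paths reuse the examples from the table.

```python
# Sketch (assumption) mapping the Unica Kafka instance parameters above to
# librdkafka-style client settings. Broker and paths are the table's examples.

def kafka_client_config(mechanism: str) -> dict:
    """Translate a CommunicationMechanism value to librdkafka settings."""
    protocol = {
        "SASL_PLAINTEXT_SSL": "SASL_SSL",      # username/password + SSL
        "NO_SASL_PLAINTEXT_SSL": "PLAINTEXT",  # no authentication, no SSL
        "SASL_PLAINTEXT": "SASL_PLAINTEXT",    # username/password only
        "SSL": "SSL",                          # SSL without username/password
    }[mechanism]
    conf = {
        "bootstrap.servers": "IP-0A862D46:9092",  # KafkaBrokerURL
        "security.protocol": protocol,
    }
    if "SASL" in protocol:
        conf["sasl.mechanism"] = "PLAIN"  # default; GSSAPI for Kerberos
    if "SSL" in protocol:
        conf.update({
            "ssl.key.location": "/opt/Unica/Kafkakeys/client_key.pem",          # KafkaKeyFile
            "ssl.certificate.location": "/opt/Unica/Kafkakeys/client_cert.pem", # KafkaCertificateFile
            "ssl.ca.location": "/opt/Unica/Kafkakeys/ca-cert",                  # CertificateAuthorityFile
        })
    return conf
```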