Real time processing in Opportunity Detect

Opportunity Detect offers three data source connectors for processing transactions as soon as they are received.

The type of data source connector that you use for your workspace determines whether the workspace is processed in batch or real time mode. For real time operation, use a Real time file, Queue, or Web Service type of data source connector for transaction and Outcome data.

Web Service connector
The Web Service connector is a good choice when moderate volumes of data are expected. In tests, this connector has been shown to handle up to 3,500 transactions per second.

To use the Web Service data source connector, your organization must develop code to receive and send the transaction and Outcome data.
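
A minimal client sketch of that kind of integration is shown below. It assumes a hypothetical HTTP endpoint and a hypothetical XML payload; the actual endpoint URL, message format, and authentication requirements are defined by your Opportunity Detect installation and your integration code.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class TransactionSender {
        public static void main(String[] args) throws Exception {
            // Hypothetical endpoint and payload; the actual URL, message format,
            // and authentication are defined by your installation.
            String endpoint = "http://detect.example.com/transactions";
            String payload = "<transaction>"
                    + "<customerId>1001</customerId>"
                    + "<amount>42.50</amount>"
                    + "</transaction>";

            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(endpoint))
                    .header("Content-Type", "application/xml")
                    .POST(HttpRequest.BodyPublishers.ofString(payload))
                    .build();

            // Send one transaction and check the HTTP status code.
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("Status: " + response.statusCode());
        }
    }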

Queue connector
A Queue connector is a good choice when a high volume of transactions must be processed.

Queues handle spikes in demand more efficiently than the Web Service does. Also, unlike the Web Service, queues retain messages in the event of a network or machine failure, or while you deploy a new version of a configuration.

To use the Queue data source connector, your organization must develop code to send the transaction data, and you must install and maintain a queue server.
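
The product does not dictate how that sending code is written. The sketch below assumes a JMS-compatible queue server (ActiveMQ in this example) with a hypothetical broker URL, queue name, and delimited record layout; substitute the values configured for your queue server and Queue data source connector.

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.Destination;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class QueueTransactionSender {
        public static void main(String[] args) throws Exception {
            // Hypothetical broker URL and queue name; use the values
            // configured for your queue server and data source connector.
            ConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            connection.start();

            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Destination queue = session.createQueue("DETECT.TRANSACTIONS");
            MessageProducer producer = session.createProducer(queue);

            // One message per transaction record; the record layout must
            // match the data source definition in your workspace.
            TextMessage message = session.createTextMessage("1001|2024-05-01|42.50");
            producer.send(message);

            connection.close();
        }
    }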

Real time file connector
The Real time file connector handles fixed-width ASCII files, processing transactions as soon as a file is placed in a designated folder. After processing, the file is moved to a different designated folder.

If more than one deployment uses the Real time file connector, designate a different folder for the transaction files of each deployment, to prevent a condition where a file is moved before one of the deployments has finished with it.

Opportunity Detect does not support live updates to files in the input directory that is used with the Real time file connector. Also, do not use the same input directory for multiple workspaces, because doing so can lead to undesired behavior.
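
Because live updates in the input directory are not supported, a common pattern is to write the complete file in a staging location and then move it into the designated input folder in a single step. The following sketch assumes hypothetical folder paths and field widths; the actual record layout must match the data source definition for your workspace.

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;

    public class FixedWidthFileDrop {
        public static void main(String[] args) throws IOException {
            // Hypothetical layout: 10-char customer ID, 8-char date, 10-char amount.
            // The actual field widths must match your data source definition.
            String record = String.format("%-10s%-8s%10s%n", "1001", "20240501", "42.50");

            // Write the complete file outside the input folder first, then move it in,
            // so the connector never sees a partially written file. Keep both paths
            // on the same file system so the move can be atomic.
            Path staging = Paths.get("/detect/staging/txn_20240501.dat");
            Path inputFile = Paths.get("/detect/input/txn_20240501.dat"); // assumed input folder
            Files.write(staging, record.getBytes(StandardCharsets.US_ASCII));
            Files.move(staging, inputFile, StandardCopyOption.ATOMIC_MOVE);
        }
    }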

When you configure the file connector, you can eliminate duplicate transactions by applying a Bloom filter to fields that you specify. You can also set the period of time for which duplicates are ignored.

The Bloom filter operator detects duplicate transactions in a memory-efficient way. Use the Bloom filter in scenarios that require duplicate detection for large numbers of unique transactions over a period of time. False positives, where a unique transaction is incorrectly marked as a duplicate, are an occasional side effect of this compression. You can configure the operator with the number of expected unique data entries and the acceptable probability of false positives. A high number of expected unique data entries combined with a low false positive probability increases the amount of memory that is required.
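
The memory that a Bloom filter needs can be estimated with the standard sizing formula m = -n * ln(p) / (ln 2)^2 bits, where n is the number of expected unique entries and p is the false positive probability. The sketch below illustrates that relationship; it is a general calculation, not the product's internal sizing logic.

    public class BloomFilterSizing {
        // Standard Bloom filter sizing: m = -n * ln(p) / (ln 2)^2 bits,
        // where n is the number of expected unique entries and p is the
        // acceptable false positive probability.
        static long requiredBits(long expectedEntries, double falsePositiveProbability) {
            double ln2 = Math.log(2);
            return (long) Math.ceil(
                    -expectedEntries * Math.log(falsePositiveProbability) / (ln2 * ln2));
        }

        public static void main(String[] args) {
            // 10 million expected unique transactions at a 1% false positive rate
            // needs roughly 11.4 MB; tightening the rate to 0.1% raises that
            // to roughly 17.1 MB.
            System.out.printf("1%% FPP: %.1f MB%n",
                    requiredBits(10_000_000L, 0.01) / 8.0 / (1024 * 1024));
            System.out.printf("0.1%% FPP: %.1f MB%n",
                    requiredBits(10_000_000L, 0.001) / 8.0 / (1024 * 1024));
        }
    }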

This connector is particularly useful for Call Data Records (CDRs) used by the telco industry. The CDR must first be transformed from binary to ASCII format.

Data source connectors are mapped to data sources in two places. The default mappings are set when a server group is configured. However, these mappings are commonly changed when a deployment is configured.