Adapter Considerations

Link

  • Maps with cards that use IBM MQ or Database (Legacy) connection types do not run successfully in the web UI. Such maps can be designed in the web UI, but they must then be deployed to a runtime server and executed there.
  • Maps with cards that use one or more of the Apache Kafka, Apache HDFS, JDBC, Amazon S3, Amazon SQS, Amazon SNS, or Apache Active MQ connection types, and that also use configuration variables, do not have those variables resolved when executed directly in the web UI. The configuration variables are resolved when the map is deployed to and executed on the runtime server.

Map Command Server

  • When running maps or flows that use one or more of the Apache Kafka, Apache HDFS, JDBC, Amazon S3, Amazon SQS, Amazon SNS, or Apache Active MQ connection types, the DTXHOME or DTX_HOME_DIR environment variable needs to be set to point to the product install home directory. If the environment variable is not set, an "Adapter not found" message is reported when the map runs. This requirement applies to all platforms on which the map command server and the flow command server are supported.

    For all Windows installations of m4base adapters, there is no need to run the DTXcommon.bat file to set DTXHOME or DTX_HOME_DIR.
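    On non-Windows platforms, the environment variable can be set before starting the map or flow command server. A minimal sketch, assuming a hypothetical install home of /opt/IBM/ITX (substitute your actual product install directory):

    ```shell
    # Point DTX_HOME_DIR at the product install home so the adapters
    # can be located; otherwise an "Adapter not found" message is reported.
    export DTX_HOME_DIR=/opt/IBM/ITX

    # Confirm the variable is visible before launching the command server.
    echo "DTX_HOME_DIR=$DTX_HOME_DIR"
    ```

    Adding the export line to the profile of the user that runs the command server avoids having to set it for every session.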

  • When running maps and flows with the JDBC adapter on a platform other than Windows, the drivers referenced by the connection URL are not found if they are explicitly listed in the dtx.ini configuration file. They are found if they are stored in the $DTX_HOME_DIR/libs/extra directory. It is still possible to reference the drivers in dtx.ini; however, a symbolic link with the same name must then be created in the $DTX_HOME_DIR directory, pointing to the dtx.ini file in the $DTX_HOME_DIR/config directory.

    For Linux installations of all m4base adapters, you must still run . ./setup and create the extra directory under $DTX_HOME_DIR/libs.
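    The driver placement and the dtx.ini symbolic-link workaround above can be sketched as follows. The paths and the driver.jar file name are stand-ins for illustration only (here /tmp/itx-demo plays the role of the product install home):

    ```shell
    # Stand-in for the real product install home.
    DTX_HOME_DIR=/tmp/itx-demo

    # Create the directory layout; libs/extra is where JDBC driver
    # jars are picked up on non-Windows platforms.
    mkdir -p "$DTX_HOME_DIR/config" "$DTX_HOME_DIR/libs/extra"
    touch "$DTX_HOME_DIR/config/dtx.ini"          # stand-in config file
    touch "$DTX_HOME_DIR/libs/extra/driver.jar"   # stand-in JDBC driver jar

    # If drivers are instead referenced in dtx.ini, a symbolic link with
    # the same name must exist in $DTX_HOME_DIR, pointing at the real
    # dtx.ini under $DTX_HOME_DIR/config.
    ln -sf "$DTX_HOME_DIR/config/dtx.ini" "$DTX_HOME_DIR/dtx.ini"
    ```

    Copying the actual driver jars into libs/extra is the simpler of the two options, since it needs no dtx.ini changes.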

Common to Link, Map Command Server and Flow Command Server

When the log file name is not specified for the Apache Kafka, Apache HDFS, JDBC, Amazon S3, Amazon SQS, Amazon SNS, and Apache Active MQ adapters, a log file with the default name is not created when it should be. The workaround is to always specify the log file name explicitly.

Note: When running maps in the Link UI, you must create a file artifact and select it as the log file path property value so that you can download and inspect the log file after executing the map.