Learn about HCL Link services, projects, flows, maps and schemas.
An HCL Link flow is a transactional data flow composed of processing nodes.
This documentation describes how flows can be run from the REST API, triggered by listeners, or run on a schedule.
A Map Node invokes an HCL Link map within a flow.
The documentation provided here defines the main artifacts in HCL Link. It is necessary to understand what these artifacts are and the relationships between them.
Service definitions define REST services and the endpoints of a service.
A project is a container for HCL Link artifacts.
A Flow node invokes a sub-flow within a flow.
Source and Target Nodes provide HCL Link with an outside-in approach to developing integrations.
The Request Node has a single input request terminal and a single output response terminal.
Cache Read and Cache Write node functionality is described in this section.
The Clone Node has one input and two output terminals.
The Decision Node routes input data to the true or false terminal, depending on the condition.
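To illustrate the idea, a Decision Node can be pictured as a function that evaluates a condition and forwards the payload to one of two terminals. This is a minimal conceptual sketch, not HCL Link product code; the names are hypothetical.

```python
# Conceptual sketch of a Decision Node: evaluate a condition and route
# the payload to the "true" or "false" terminal.
# Illustration only, not HCL Link product code.
def decision_node(payload, condition):
    """Return the terminal name and the payload the node would emit."""
    terminal = "true" if condition(payload) else "false"
    return terminal, payload

# Example: records with a negative amount go to the "false" terminal.
terminal, data = decision_node({"amount": -5}, lambda p: p["amount"] >= 0)
```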
The Fail Node causes the flow execution to fail.
The Format Converter Node can be used to quickly convert data from one format to another.
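As a rough analogy for what a format conversion step does, the following sketch converts CSV text to JSON using only the Python standard library. It is an illustration of the concept, not HCL Link product code.

```python
# Conceptual sketch of a CSV-to-JSON format conversion, similar in
# spirit to a Format Converter Node. Illustration only, not HCL Link
# product code.
import csv
import io
import json

def csv_to_json(csv_text):
    """Parse CSV text into a list of row dicts and serialize as JSON."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return json.dumps(rows)

result = csv_to_json("id,name\n1,Alice\n2,Bob\n")
```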
This documentation describes the function and use of the JSON Read Node.
The JSON Transform node is used to transform JSON documents from one form to another.
The Java Node invokes a Java class, performing user-defined functionality on the input based on the properties specified for the Java class.
The Join Node gathers the individual results and appends them to a single output file or terminal.
The Log Node logs the raw data from the node input to a file and propagates the data from the input to the output terminal.
The Passthrough Node propagates data from the input to the output terminal.
The REST Client Node provides a simple and powerful way to access REST services.
The Route Node provides a way to route data conditionally to one or more outputs of the node. The node makes its routing decision by evaluating a condition on a flow variable and, based on the result, sends the data to output 1, output 2, or both.
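The routing behavior can be sketched as a function that inspects the flow variables and names the outputs that should receive the data. This is a conceptual illustration with hypothetical names, not HCL Link product code.

```python
# Conceptual sketch of a Route Node: a condition on the flow variables
# decides whether the data goes to output 1, output 2, or both.
# Illustration only, not HCL Link product code.
def route_node(payload, flow_vars, choose_outputs):
    """choose_outputs inspects the flow variables and returns output names."""
    return [(out, payload) for out in choose_outputs(flow_vars)]

# Example condition: high-priority data goes to both outputs,
# everything else only to output 2.
def by_priority(flow_vars):
    return ["out1", "out2"] if flow_vars.get("priority") == "high" else ["out2"]

routed = route_node("record", {"priority": "high"}, by_priority)
```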
The Sleep Node suspends the execution of the flow for the specified number of milliseconds.
A Split Node should be used when there is a need to split CSV data processing, for example when processing the data as a whole becomes excessively time consuming.
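The motivation behind splitting can be illustrated by breaking CSV rows into smaller batches that can each be processed independently. This is a minimal conceptual sketch using the Python standard library, not HCL Link product code.

```python
# Conceptual sketch of splitting CSV data into batches so each batch can
# be processed on its own, the idea behind a Split Node.
# Illustration only, not HCL Link product code.
import csv
import io

def split_csv(csv_text, batch_size):
    """Split CSV rows into batches; each batch keeps a copy of the header."""
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)
    rows = list(reader)
    return [[header] + rows[i:i + batch_size]
            for i in range(0, len(rows), batch_size)]

batches = split_csv("id\n1\n2\n3\n", 2)  # rows 1-2 in one batch, row 3 in another
```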
Flow terminals represent the inputs and outputs to a flow. Every node has input and/or output terminals.
This documentation describes the options for defining the flow schedule.
A Flow that has a Map node as its first node, and uses a File adapter for an input, can enable that node's input to be a Watch.
Flow audits are a way to retrieve more verbose information about a flow instance.
Flow Variables are process data variables that accompany the flow execution and are accessible to all nodes in the flow while it is being executed under the flow executor/engine context.
Steps required to create a flow.
When you develop a flow in the Flow Designer, you can run the flow directly from the Designer, either in Link or in a different runtime environment.
An HCL Link map defines how to generate data that conforms to a specific schema. A map can have any number of inputs and outputs.
Schemas specify the format of data within HCL Link maps and flows.
This documentation describes connections and actions.
Files are used in various places within HCL Link. HCL Link allows files to be uploaded to the HCL Link server.
Flow variables provide a means of passing information throughout a flow instance without requiring that the information be passed through data links. Flow variables are name-value pairs that persist unless deleted.
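The name-value-pair behavior described above can be sketched as a small store that any node in the flow instance can read from, write to, or delete from. This is a conceptual illustration only, not HCL Link product code; the class and method names are hypothetical.

```python
# Conceptual sketch of flow variables: name-value pairs shared by all
# nodes of a flow instance, persisting until deleted.
# Illustration only, not HCL Link product code.
class FlowVariables:
    def __init__(self):
        self._vars = {}

    def set(self, name, value):
        self._vars[name] = value

    def get(self, name, default=None):
        return self._vars.get(name, default)

    def delete(self, name):
        self._vars.pop(name, None)

# One node sets a variable; a later node in the same flow instance reads it.
fv = FlowVariables()
fv.set("record_count", 42)
```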
Cache variables provide a means of processing information throughout a flow instance without requiring that the information be passed through data links.