Example: deploying a connector for CI/CD

To deploy connector customizations for use in a Continuous Integration/Continuous Delivery (CI/CD) pipeline, package your changes using a standardized data structure.

This example assumes that you have created a new connector by using the procedure described in Creating a new connector using the Ingest service. In this example, we create a new pipeline whose descriptor is in the file pipes/new/auth.xfields.json; a small check of the descriptor's placeholders is sketched after the listing.
{
    "name": "auth.xfields",
    "description": "This is the connector used for the custom fields to ingest",
    "pipes": [
        {
            "name": "_Template-DatabasePagingETL",
            "properties": [
                {
                    "name": "Database Driver Location(s)",
                    "value": "${AUTH_JDBC_DRIVER_LOCATION}",
                    "scope": {
                        "name": "Database Connection Pool",
                        "type": "CONTROLLER_SERVICE"
                    }
                },
                {
                    "name": "Database Driver Class Name ",
                    "value": "${AUTH_JDBC_DRIVER_CLASSNAME}",
                    "scope": {
                        "name": "Database Connection Pool",
                        "type": "CONTROLLER_SERVICE"
                    }
                },
                {
                    "name": "Database Connection URL",
                    "value": "${AUTH_JDBC_URL}",
                    "scope": {
                        "name": "Database Connection Pool",
                        "type": "CONTROLLER_SERVICE"
                    }
                },
                {
                    "name": "Database User",
                    "value": "${AUTH_JDBC_USER_NAME}",
                    "scope": {
                        "name": "Database Connection Pool",
                        "type": "CONTROLLER_SERVICE"
                    }
                },
                {
                    "name": "Password",
                    "value": "${AUTH_JDBC_USER_PASSWORD}",
                    "scope": {
                        "name": "Database Connection Pool",
                        "type": "CONTROLLER_SERVICE"
                    }
                }
            ]
        },
        {
            "name": "Terminal",
            "label": "Terminal"
        }
    ]
}
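
The ${AUTH_*} values in this descriptor are placeholders for environment-specific settings. If your CI/CD job supplies them as environment variables, a small pre-deployment check such as the following sketch can catch missing values before anything is deployed. The script, its regular expression, and the descriptor path are illustrative only and are not part of the Ingest service.

import json
import os
import re
import sys

# Minimal pre-deployment check (illustrative helper, not part of the product):
# list the uppercase ${...} placeholders used in the descriptor and report any
# that are not defined as environment variables in the CI/CD job.
PLACEHOLDER = re.compile(r"\$\{([A-Z][A-Z0-9_]*)\}")

def missing_placeholders(descriptor_path):
    with open(descriptor_path, encoding="utf-8") as f:
        text = f.read()
    json.loads(text)  # fail fast if the descriptor is not valid JSON
    names = sorted(set(PLACEHOLDER.findall(text)))
    return [name for name in names if name not in os.environ]

if __name__ == "__main__":
    missing = missing_placeholders("pipes/new/auth.xfields.json")
    if missing:
        print("Missing environment variables:", ", ".join(missing))
        sys.exit(1)
    print("All descriptor placeholders are defined.")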

New files

After the new pipe is created, apply two updates to the pipeline properties. These updates are defined in the files pipes/updates/1_auth.xfields and pipes/updates/2_auth.xfields; a simple structural check for these files is sketched after the two listings.
  • 1_auth.xfields:
    {
          "name": "auth.xfields",
          "pipes": [
                {
                      "name": "SCROLL SQL",
                      "properties": [
                            {
                                  "name": "properties.ingest.database.sql",
                                  "value": "	SELECT field1, field2, catgroup_id FROM  catgroup  ${paging.prefix} ${param.offset} ${paging.link} ${param.pageSize} ${paging.suffix} ",
                                  "scope": {
                                        "name": "SCROLL SQL.Define custom SQL",
                                        "type": "PROCESSOR"
                                  }
                            }
                      ]
                }
          ]
    }
    
  • 2_auth.xfields:
    {
          "name": "auth.xfields",
          "pipes": [
                {
                      "name": "Map Index Fields From Database",
                      "properties": [
                            {
                                  "name": "FIELD1",
                                  "value": "custom.x_field1.raw, custom.x_field1.normalized",
                                  "scope": {
                                        "name": "SCROLL SQL.Transform Document - Map Index Fields From Database",
                                        "type": "PROCESSOR"
                                  }
                            },
                            {
                                  "name": "FIELD2",
                                  "value": "custom.x_field2.raw, custom.x_field2.normalized",
                                  "scope": {
                                        "name": "SCROLL SQL.Transform Document - Map Index Fields From Database",
                                        "type": "PROCESSOR"
                                  }
                            }
                      ]
                }
          ]
    }
    
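Because the update files are applied automatically by the pipeline, a malformed descriptor is easy to ship by accident. The following sketch is an optional lint step for the two files above; the structural rules it checks (pipes, properties, and their scopes) are assumptions taken from this example, not an official schema.

import json
from pathlib import Path

# Optional lint step (assumption: only the structure shown in this example):
# every pipe needs a name, and every property needs a name, a value, and a
# scope that has its own name and type.
def lint_descriptor(path):
    problems = []
    doc = json.loads(path.read_text(encoding="utf-8"))
    for pipe in doc.get("pipes", []):
        if "name" not in pipe:
            problems.append(f"{path}: pipe without a name")
        for prop in pipe.get("properties", []):
            for key in ("name", "value", "scope"):
                if key not in prop:
                    problems.append(f"{path}: property missing '{key}'")
            scope = prop.get("scope", {})
            for key in ("name", "type"):
                if key not in scope:
                    problems.append(f"{path}: scope missing '{key}'")
    return problems

if __name__ == "__main__":
    issues = []
    for update in sorted(Path("pipes/updates").glob("*_auth.xfields")):
        issues.extend(lint_descriptor(update))
    print("\n".join(issues) if issues else "Update descriptors look well formed.")
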
Your new files are kept in an appropriate file repository, such as Git. In this case, the structure is a simple hierarchy under the root directory pipes.
pipes
│
├───new
│       auth.xfields.json
│
└───updates
        1_auth.xfields
        2_auth.xfields

For more information about these files, see Configuring the connector/pipeline in NiFi.

Build

During the build stage, extract all of the required resources from your file repository and package the files in an archive, using the approach described in Example .sra file structure for CI/CD automation. The structure of the .sra file can directly mirror your repository's structure, but it does not have to: where the repository structure organizes your files for software development, the archive structure is intended to make the required resources as portable as possible. In this example, a META-INF directory is included to hold additional information about the contents of the archive. A packaging sketch follows the archive layout below.
Ingest-pipeline.sra
│
├───META-INF
│
└───pipes
    ├───new
    │       auth.xfields.json
    │
    └───updates
            1_auth.xfields
            2_auth.xfields
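
A build job can assemble this archive directly from the repository checkout. The sketch below assumes that the .sra file uses standard zip packaging, which you should confirm against Example .sra file structure for CI/CD automation, and the META-INF manifest that it writes is only a placeholder.

from pathlib import Path
import zipfile

# Build-stage sketch: package the repository files into Ingest-pipeline.sra.
# Assumption: the .sra archive is zip-packaged; confirm this against the
# referenced .sra structure topic before relying on it in your pipeline.
def build_sra(repo_root, archive):
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as sra:
        # Placeholder manifest; replace with whatever metadata your process records.
        sra.writestr("META-INF/MANIFEST.MF", "Created-By: CI build\n")
        for source in sorted((repo_root / "pipes").rglob("*")):
            if source.is_file():
                sra.write(source, source.relative_to(repo_root))

if __name__ == "__main__":
    build_sra(Path("."), Path("Ingest-pipeline.sra"))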

After you have gathered the files into the .sra archive, copy it to the directory from which it will be included in the deployment. From this location, you build a new container, deploy the resources, and then push the custom container to a container registry.

Build a custom image

You are now ready to build a new Ingest image with your custom code. A scripted sketch of steps 2 through 4 follows the list below.
  1. Copy the Ingest-pipeline.sra file to a working directory in your production environment.
  2. Extract the Ingest-pipeline.sra file.
  3. Copy the files under pipes/new to
    /profile/apps/search-ingest.ear/search-ingest.war/WEB-INF/classes/deployments/commerce
  4. Copy the files under pipes/updates to
    /profile/apps/search-ingest.ear/search-ingest.war/WEB-INF/classes/deployments/customization
  5. Build your custom container.
  6. After the container is built, push it to your container registry and deploy it.
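
Steps 2 through 4 are good candidates for scripting in the deployment job. The following sketch extracts the archive and copies the descriptors to the destinations listed above; it again assumes zip packaging for the .sra file, and it is a sketch rather than a supported tool.

from pathlib import Path
import shutil
import zipfile

# Deployment-stage sketch for steps 2 through 4: extract the archive and copy
# the descriptors to the destinations listed above (zip packaging assumed).
DEPLOYMENTS = Path("/profile/apps/search-ingest.ear/search-ingest.war/WEB-INF/classes/deployments")

def stage_descriptors(archive, workdir):
    with zipfile.ZipFile(archive) as sra:
        sra.extractall(workdir)  # step 2: extract the .sra file
    copies = {
        workdir / "pipes" / "new": DEPLOYMENTS / "commerce",            # step 3
        workdir / "pipes" / "updates": DEPLOYMENTS / "customization",   # step 4
    }
    for source_dir, target_dir in copies.items():
        target_dir.mkdir(parents=True, exist_ok=True)
        for descriptor in source_dir.iterdir():
            if descriptor.is_file():
                shutil.copy2(descriptor, target_dir / descriptor.name)

if __name__ == "__main__":
    stage_descriptors(Path("Ingest-pipeline.sra"), Path("work"))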