
Setting general connection properties

This section describes how to configure general connection properties. For an explanation of how to configure advanced connection properties, see Setting advanced connection properties.

To define the general connection properties:

  1. Click the Manage Endpoint Connections toolbar button.

    The Manage Endpoint Connections dialog box opens.

  2. Click the New Endpoint Connection toolbar button.

    The Name, Description, Type and Role fields are displayed on the right.

  3. In the Name field, specify a display name for the endpoint.
  4. In the Description field, optionally type a description for the Kinesis Data Streams endpoint.
  5. Select Target as the endpoint Role.
  6. Select Amazon Kinesis Data Streams as the endpoint Type.

    The dialog box is divided into General and Advanced tabs.

  7. In the Access Details section, set the following properties:

    • Region: Your Amazon Kinesis Data Streams region. If your region does not appear in the regions list, select Other and specify the Region code (for example, eu-west-1).

      For a list of region codes, see AWS Regions.

    • Use AWS PrivateLink

      Select this to connect to an Amazon VPC and then specify the VPC Endpoint URL (for example, https://vpce-1a9e4d98314b21cf4-xs5xq7uu.kinesis.eu-west-1.vpce.amazonaws.com).

    • Access options: Choose one of the following:
      • Key pair

        Choose this method to authenticate with your Access Key and Secret Key.

      • IAM Roles for EC2

        Choose this method if the machine on which Qlik Replicate is installed is configured to authenticate itself using an IAM role.

        For information on IAM roles, see IAM roles.

    • Access key: If you selected Key pair as your access method, enter your access key for Amazon Kinesis Data Streams.
    • Secret key: If you selected Key pair as your access method, enter your secret key for Amazon Kinesis Data Streams.
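
    The following is a minimal sketch (not part of Qlik Replicate) showing how these access settings map onto an AWS SDK client, using boto3 in Python; the credential values and the commented-out PrivateLink URL are placeholders:

      import boto3

      # Key pair access option: pass the Access key and Secret key explicitly.
      kinesis = boto3.client(
          "kinesis",
          region_name="eu-west-1",                  # the Region code set above
          aws_access_key_id="YOUR_ACCESS_KEY",      # Access key field
          aws_secret_access_key="YOUR_SECRET_KEY",  # Secret key field
          # endpoint_url="https://vpce-...amazonaws.com",  # set only with Use AWS PrivateLink
      )

      # IAM Roles for EC2 access option: omit the keys; the SDK resolves
      # credentials from the EC2 instance's IAM role automatically.
      # kinesis = boto3.client("kinesis", region_name="eu-west-1")

      print(kinesis.list_streams()["StreamNames"])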
  8. In the Message Properties section, select JSON or Avro as the message Format.

    Information note

    Qlik provides an Avro Message Decoder SDK for consuming Avro messages produced by Qlik Replicate. You can download the SDK as follows:

    1. Go to Product Downloads.

    2. Select Qlik Data Integration.

    3. Scroll down the Product list and select Replicate.

    4. In the Download Link column, locate the QlikReplicate_<version>_Avro_Decoder_SDK.zip file. Before starting the download, check the Version column to make sure that the version correlates with the Replicate version you have installed.

    5. Proceed to download the QlikReplicate_<version>_Avro_Decoder_SDK.zip file.

    For usage instructions, see Kafka Avro consumers API.

    An understanding of the Qlik envelope schema is a prerequisite for consuming Avro messages produced by Qlik Replicate. If you do not wish to use the SDK, see The Qlik Envelope for a description of the Qlik envelope schema.
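
    As an illustration only, the following sketch consumes JSON data messages from a target stream with boto3 in Python (Avro messages would additionally require the decoder SDK or the Qlik envelope schema described above); the stream name is hypothetical and only the first shard is read:

      import json
      import boto3

      kinesis = boto3.client("kinesis", region_name="eu-west-1")
      stream = "dbo.Employees"  # hypothetical target stream name

      # Read from the first shard of the stream, starting at the oldest record.
      shard_id = kinesis.describe_stream(StreamName=stream)["StreamDescription"]["Shards"][0]["ShardId"]
      iterator = kinesis.get_shard_iterator(
          StreamName=stream,
          ShardId=shard_id,
          ShardIteratorType="TRIM_HORIZON",
      )["ShardIterator"]

      for record in kinesis.get_records(ShardIterator=iterator, Limit=100)["Records"]:
          message = json.loads(record["Data"])  # each record carries one JSON message
          print(record["PartitionKey"], message)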

  9. In the Data Message Publishing section, set the following properties:

    1. In the Publish the data to field, choose one of the following:

      • Specific stream - to publish the data to a single stream. Either type a stream name or use the browse button to select the desired stream.

      • Separate stream for each table - to publish the data to multiple streams corresponding to the source table names.

        The target stream name consists of the source schema name and the source table name, separated by a period (e.g. "dbo.Employees"). The format of the target stream name is important as you will need to prepare these streams in advance (see the sketch after this list).

    2. From the Partition strategy drop-down list, select either Random or By Partition Key. If you select Random, each message will be written to a randomly selected partition. If you select By Partition Key, messages will be written to partitions based on the selected Partition key (described below).
    3. From the Partition key drop-down list, select one of the following:

      Information note

      The partition key is represented as a string, regardless of the selected data message format (JSON/Avro).

      • Schema and table name - For each message, the partition key will contain a combination of the schema and table name (e.g. "dbo+Employees").

        Messages with the same schema and table name will be written to the same partition.

      • Primary key columns - For each message, the partition key will contain the values of the primary key columns.

        Messages with the same primary key values will be written to the same partition.
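
    Because the per-table streams must exist before the task runs, you could prepare them with a short script such as the following sketch (boto3, Python; the table list and shard count are hypothetical):

      import boto3

      kinesis = boto3.client("kinesis", region_name="eu-west-1")

      # Stream names must match the "<schema>.<table>" convention exactly.
      for table in ["dbo.Employees", "dbo.Orders"]:
          kinesis.create_stream(StreamName=table, ShardCount=2)
          # Streams are created asynchronously; wait until each is ACTIVE.
          kinesis.get_waiter("stream_exists").wait(StreamName=table)

      # With the By Partition Key strategy, Kinesis hashes the partition key
      # (e.g. "dbo+Employees" or the primary key values), so messages with the
      # same key always land in the same shard.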

  10. In the Metadata Message Publishing section, specify whether or where to publish the message metadata.

    From the Publish drop-down list, select one of the following options:

    • Do not publish metadata messages.

    • Publish metadata messages to a dedicated metadata stream

      If you select this option, either type the Specific stream name or use the Browse button to select the desired stream.

      Information note

      It is strongly recommended not to publish metadata messages to the same stream as data messages.
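
    As a final sketch (boto3, Python; the stream name is hypothetical), you could check that the dedicated metadata stream exists and is distinct from your data streams before starting the task:

      import boto3

      kinesis = boto3.client("kinesis", region_name="eu-west-1")
      # Raises ResourceNotFoundException if the metadata stream does not exist.
      kinesis.describe_stream(StreamName="replicate-metadata")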
