Setting general connection properties
This section describes how to configure general connection properties. For an explanation of how to configure advanced connection properties, see Setting advanced connection properties.
To define the general connection properties:
- Click the Manage Endpoint Connections toolbar button.
The Manage Endpoint Connections dialog box opens.
- Click the New Endpoint Connection toolbar button.
The Name, Description, Type and Role fields are displayed on the right.
- In the Name field, specify a display name for the endpoint.
- In the Description field, optionally type a description for the Kinesis endpoint.
- Select Target as the endpoint Role.
- Select Amazon Kinesis Data Streams as the endpoint Type.
The dialog box is divided into General and Advanced tabs.
- In the Access Details section, set the following properties:
- Region: Your Amazon Kinesis Data Streams region. If your region does not appear in the regions list, select Other and set the code using the regionCode internal parameter in the endpoint’s Advanced tab.
For a list of region codes, see the Region availability section in:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RegionsAndAvailabilityZones.html
- Access options: Choose one of the following:
  - Key pair: Choose this method to authenticate with your Access Key and Secret Key.
  - IAM Roles for EC2: Choose this method if the machine on which Qlik Replicate is installed is configured to authenticate itself using an IAM role.
    For information on IAM roles, see:
    http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html
- Access key: If you selected Key pair as your access method, enter your access key for Amazon Kinesis Data Streams.
- Secret key: If you selected Key pair as your access method, enter your secret key for Amazon Kinesis Data Streams.
- In the Message Properties section, select JSON or Avro as the message Format.
Information note: Qlik provides an Avro Message Decoder SDK for consuming Avro messages produced by Qlik Replicate. You can download the SDK as follows:
- Go to Product Downloads.
- Select Qlik Data Integration.
- Scroll down the Product list and select Replicate.
- In the Download Link column, locate the QlikReplicate_<version>_Avro_Decoder_SDK.zip file. Before starting the download, check the Version column to make sure that the version matches the Replicate version you have installed.
- Proceed to download the QlikReplicate_<version>_Avro_Decoder_SDK.zip file.
For usage instructions, see Kafka Avro consumers API.
An understanding of the Qlik envelope schema is a prerequisite for consuming Avro messages produced by Qlik Replicate. If you do not wish to use the SDK, see The Qlik Envelope for a description of the Qlik envelope schema.
- In the Data Message Publishing section, set the following properties:
- In the Publish the data to field, choose one of the following:
- Specific stream - to publish the data to a single stream. Either type a stream name or use the browse button to select the desired stream.
- Separate stream for each table - to publish the data to multiple streams corresponding to the source table names.
The target stream name consists of the source schema name and the source table name, separated by a period (e.g. "dbo.Employees"). The format of the target stream name is important as you will need to prepare these streams in advance.
- From the Partition strategy drop-down list, select either Random or By Partition Key. If you select Random, each message will be written to a randomly selected partition. If you select By Partition Key, messages will be written to partitions based on the selected Partition key (described below).
- From the Partition key drop-down list, select one of the following:
Information note: The partition key is represented as a string, regardless of the selected data message format (JSON/Avro).
- Schema and table name - For each message, the partition key will contain a combination of schema and table name (e.g. "dbo+Employees"). Messages with the same schema and table name will be written to the same partition.
- Primary key columns - For each message, the partition key will contain the value of the primary key column. Messages with the same primary key value will be written to the same partition.
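The two partition-key options above can be illustrated with a short sketch. The helper names are hypothetical; the "schema+table" string follows the "dbo+Employees" example in this section, and the way multiple primary-key values would be combined into one key string is an assumption, not documented behavior:

```python
def partition_key_by_table(schema: str, table: str) -> str:
    # "Schema and table name" strategy: the key combines schema and
    # table name (e.g. "dbo+Employees"), so all messages for one
    # table land on the same partition.
    return f"{schema}+{table}"

def partition_key_by_pk(pk_values) -> str:
    # "Primary key columns" strategy: the key carries the primary-key
    # value, rendered as a string regardless of the JSON/Avro message
    # format, so all messages for the same row land on the same
    # partition. Joining multiple PK columns with "+" is an assumption.
    return "+".join(str(v) for v in pk_values)

print(partition_key_by_table("dbo", "Employees"))  # dbo+Employees
print(partition_key_by_pk([1042]))                 # 1042
```

Either way, the key is a plain string, which is what Kinesis hashes to pick the target shard.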
- In the Metadata Message Publishing section, specify whether, and to which stream, to publish the message metadata.
From the Publish drop-down list, select one of the following options:
- Do not publish metadata messages.
- Publish metadata messages to a dedicated metadata stream
If you select this option, either type the Specific stream name or use the Browse button to select the desired stream.
Information note: It is strongly recommended not to publish metadata messages to the same stream as data messages.
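Because the Separate stream for each table option requires the target streams to exist before the task runs, it can help to enumerate the required names up front. A minimal sketch, assuming a hypothetical list of source tables (in practice, the tables selected for the replication task), using the schema.table naming pattern described above:

```python
def target_stream_name(schema: str, table: str) -> str:
    # Target stream name = source schema + "." + source table,
    # e.g. "dbo.Employees", per the naming rule in this section.
    return f"{schema}.{table}"

# Hypothetical source table list for illustration only.
source_tables = [("dbo", "Employees"), ("dbo", "Orders")]

for schema, table in source_tables:
    name = target_stream_name(schema, table)
    # Pre-create each stream before starting the task, for example:
    #   aws kinesis create-stream --stream-name dbo.Employees --shard-count 1
    print(name)
```

Periods are valid characters in Kinesis stream names, so the schema.table pattern can be used as-is when creating the streams.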