tKafkaInputAvro properties for Apache Spark Streaming
These properties are used to configure tKafkaInputAvro running in the Spark Streaming Job framework.
The Spark Streaming tKafkaInputAvro component belongs to the Messaging family.
This component is available in Talend Real-Time Big Data Platform and Talend Data Fabric.
Basic settings
Schema and Edit schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields. |
Broker list |
Enter the addresses of the broker nodes of the Kafka cluster to be used. Each address takes the form hostname:port, that is, the host name and the port of the broker's hosting node in this Kafka cluster. If you need to specify several addresses, separate them using a comma (,). |
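For illustration only, here is a minimal sketch (not code generated by Talend Studio) of how such a broker list is typically expressed as the standard Kafka bootstrap.servers setting; the host names and ports below are placeholders.

```java
import java.util.Properties;

public class BrokerListSketch {
    public static void main(String[] args) {
        // The Broker list value corresponds to the standard Kafka
        // "bootstrap.servers" setting: a comma-separated list of host:port pairs.
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka-node1:9092,kafka-node2:9092,kafka-node3:9092");
        System.out.println(props.getProperty("bootstrap.servers"));
    }
}
```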
Starting offset |
Select the starting point from which the messages of a topic are consumed. In Kafka, the sequential ID number of a message is called an offset.

From this list, select From beginning to start consumption from the oldest message of the entire topic, or select From latest to start from the latest message that has been consumed by the same consumer group and whose offset is tracked by Spark within Spark checkpoints.

Note that in order to enable the component to remember the position of a consumed message, you need to activate the Spark Streaming checkpointing in the Spark Configuration tab in the Run view of the Job.

Each consumer group has its own counter to remember the position of a message it has consumed. For this reason, once a consumer group starts to consume messages of a given topic, it recognizes the latest message only with regard to the position where this group stopped the consumption, rather than to the entire topic. Based on this principle, the following behavior can be expected: a consumer group that has already consumed part of a given topic resumes from the position where it stopped, whereas a consumer group that has never consumed this topic and uses From latest starts from the most recent messages only. |
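As a rough illustration only, the two choices map onto the standard Kafka auto.offset.reset consumer setting as sketched below; the property names are regular Kafka consumer options, the values are illustrative, and the comments restate the checkpointing requirement described above.

```java
import java.util.Properties;

public class StartingOffsetSketch {
    public static void main(String[] args) {
        Properties props = new Properties();

        // "From beginning" roughly corresponds to starting from the oldest
        // available message when no offset is tracked yet for the group:
        props.put("auto.offset.reset", "earliest");

        // "From latest" roughly corresponds to:
        // props.put("auto.offset.reset", "latest");

        // In either case, remembering the position of consumed messages requires
        // Spark Streaming checkpointing to be enabled for the Job, because the
        // offsets are kept in Spark checkpoints rather than committed to Kafka
        // or ZooKeeper.
        System.out.println(props);
    }
}
```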
Topic name |
Enter the name of the topic from which tKafkaInputAvro receives the feed of messages. |
Group ID |
Enter the name of the consumer group to which you want the current consumer (the tKafkaInputAvro component) to belong. This consumer group will be created at runtime if it does not exist at that moment. This property is available only when you are using Spark 2.0, or when the Hadoop distribution to be used is running Spark 2.0. If you do not know the Spark version you are using, ask the administrator of your cluster for details. |
Set number of records per second to read from each Kafka partition |
Enter this number within double quotation marks to limit the size of each batch to be sent for processing. For example, if you enter 100 and the batch value you define in the Spark configuration tab is 2 seconds, the size from a partition for each batch is 200 messages. If you leave this check box clear, the component tries to read all the available messages in one second into one single batch before sending it, potentially resulting in a Job that stops responding when there is a huge quantity of messages. |
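A minimal sketch of the arithmetic described above, assuming the per-partition limit is expressed through Spark's standard spark.streaming.kafka.maxRatePerPartition setting; the figures are illustrative only.

```java
import org.apache.spark.SparkConf;

public class RateLimitSketch {
    public static void main(String[] args) {
        // Cap how many records are read per Kafka partition per second.
        SparkConf conf = new SparkConf()
                .setAppName("rate-limit-sketch")
                .set("spark.streaming.kafka.maxRatePerPartition", "100");

        // With a 2-second batch interval, each batch then contains at most
        // 100 records/s * 2 s = 200 records per partition.
        int ratePerSecond = 100;
        int batchIntervalSeconds = 2;
        System.out.println("Max records per partition per batch: "
                + ratePerSecond * batchIntervalSeconds);
    }
}
```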
Use SSL/TLS |
Select this check box to enable the SSL or TLS encrypted connection. Then you need to use the tSetKeystore component in the same Job to specify the encryption information. This property is available only when you are using Spark 2.0, or when the Hadoop distribution to be used is running Spark 2.0. If you do not know the Spark version you are using, ask the administrator of your cluster for details. The TrustStore file and any used KeyStore file must be stored locally on every single Spark node that is hosting a Spark executor. |
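For reference, the information supplied through tSetKeystore typically corresponds to the standard Kafka SSL consumer settings sketched below; the file paths and passwords are placeholders, not values from this documentation.

```java
import java.util.Properties;

public class SslSketch {
    public static void main(String[] args) {
        // Standard Kafka consumer settings for an SSL/TLS connection. The
        // TrustStore (and any KeyStore) must exist locally on every node
        // hosting a Spark executor.
        Properties props = new Properties();
        props.put("security.protocol", "SSL");
        props.put("ssl.truststore.location", "/opt/security/kafka.truststore.jks");
        props.put("ssl.truststore.password", "truststore-password");
        // Needed only when the brokers require client (mutual) authentication:
        props.put("ssl.keystore.location", "/opt/security/kafka.keystore.jks");
        props.put("ssl.keystore.password", "keystore-password");
        System.out.println(props);
    }
}
```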
Use Kerberos authentication |
If the Kafka cluster to be used is secured with Kerberos, select this check box to display the related parameters to be defined:
For further information about how a Kafka cluster is secured with Kerberos, see Authenticating using SASL. This check box is available since Kafka 0.9.0.1. |
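As an indicative sketch only, a Kerberos-secured Kafka connection usually relies on the standard SASL consumer settings and JVM system properties shown below; all paths and the service name are assumptions to adapt to your cluster, not the component's parameter names.

```java
import java.util.Properties;

public class KerberosSketch {
    public static void main(String[] args) {
        // Point the JVM at a JAAS login configuration and a krb5.conf file;
        // both paths are placeholders.
        System.setProperty("java.security.auth.login.config", "/opt/security/kafka_client_jaas.conf");
        System.setProperty("java.security.krb5.conf", "/etc/krb5.conf");

        Properties props = new Properties();
        props.put("security.protocol", "SASL_PLAINTEXT"); // or "SASL_SSL" when combined with TLS
        props.put("sasl.kerberos.service.name", "kafka"); // principal name used by the Kafka brokers
        System.out.println(props);
    }
}
```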
Use Schema Registry |
Select this check box to use Confluent Schema Registry and to display the related parameters to be defined:
For more information about Schema Registry, see the Confluent documentation. This option is available when you have installed the 8.0.1-R2022-12 Talend Studio Monthly update or a later one delivered by Talend. For more information, check with your administrator. |
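For orientation, the Schema Registry parameters generally correspond to the standard Confluent client settings sketched below; the URL and the optional credentials are placeholders.

```java
import java.util.Properties;

public class SchemaRegistrySketch {
    public static void main(String[] args) {
        // Standard Confluent settings for reading Avro records whose schema is
        // held in Schema Registry.
        Properties props = new Properties();
        props.put("schema.registry.url", "http://schema-registry-host:8081");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "io.confluent.kafka.serializers.KafkaAvroDeserializer");
        // If the registry itself is secured, credentials can be passed with, for example:
        // props.put("basic.auth.credentials.source", "USER_INFO");
        // props.put("basic.auth.user.info", "user:password");
        System.out.println(props);
    }
}
```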
Advanced settings
Kafka properties |
Add the Kafka consumer properties you need to customize to this table. For example, you can set a specific zookeeper.connection.timeout.ms value to avoid ZkTimeoutException. For further information about the consumer properties you can define in this table, see the section describing the consumer configuration in Kafka's documentation at http://kafka.apache.org/documentation.html#consumerconfigs. |
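A minimal sketch of what an entry of this table amounts to on the consumer side; the property values are illustrative only.

```java
import java.util.Properties;

public class ConsumerPropertiesSketch {
    public static void main(String[] args) {
        // Each row of the Kafka properties table becomes one consumer property.
        Properties props = new Properties();
        props.put("zookeeper.connection.timeout.ms", "15000");
        props.put("session.timeout.ms", "30000");
        System.out.println(props);
    }
}
```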
Use hierarchical mode |
Select this check box to map the binary (including hierarchical) Avro schema to the flat schema defined in the schema editor of the current component. If the Avro message to be processed is flat, leave this check box clear. Once you select it, you need to set the following parameters: |
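The following sketch is not Talend's internal mapping and does not show the component's parameters; it only illustrates, with the plain Avro Java API and an invented customer/address schema, what flattening a hierarchical Avro record into flat columns means.

```java
import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;

public class FlattenAvroSketch {
    public static void main(String[] args) {
        // A hierarchical Avro schema: a "customer" record containing a nested "address" record.
        Schema addressSchema = SchemaBuilder.record("address").fields()
                .requiredString("city")
                .requiredString("country")
                .endRecord();
        Schema customerSchema = SchemaBuilder.record("customer").fields()
                .requiredString("name")
                .name("address").type(addressSchema).noDefault()
                .endRecord();

        GenericRecord address = new GenericData.Record(addressSchema);
        address.put("city", "Nantes");
        address.put("country", "France");
        GenericRecord customer = new GenericData.Record(customerSchema);
        customer.put("name", "Alice");
        customer.put("address", address);

        // Flattening: each leaf value becomes one column of the flat schema.
        String name = customer.get("name").toString();
        GenericRecord nested = (GenericRecord) customer.get("address");
        String city = nested.get("city").toString();
        String country = nested.get("country").toString();
        System.out.println(name + ", " + city + ", " + country);
    }
}
```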
Usage
Usage rule |
This component is used as a start component and requires an output link.

This component, along with the Spark Streaming component Palette it belongs to, appears only when you are creating a Spark Streaming Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

In the implementation of the current component in Spark, the Kafka offsets are automatically managed by Spark itself, that is to say, instead of being committed to ZooKeeper or Kafka, the offsets are tracked within Spark checkpoints. For more information about this implementation, see the Direct approach section in the Spark documentation. |
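For background, a bare-bones Spark Streaming direct stream reading from Kafka (written by hand, outside Talend Studio) looks roughly like the sketch below; the broker address, group ID, topic name and checkpoint directory are placeholders. It shows where the checkpoint directory and the consumer group come into play when Spark tracks the offsets itself.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka010.ConsumerStrategies;
import org.apache.spark.streaming.kafka010.KafkaUtils;
import org.apache.spark.streaming.kafka010.LocationStrategies;

public class DirectStreamSketch {
    public static void main(String[] args) throws InterruptedException {
        SparkConf conf = new SparkConf().setAppName("direct-stream-sketch").setMaster("local[2]");
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(2));
        jssc.checkpoint("/tmp/spark-checkpoints"); // offsets tracked here, not in ZooKeeper or Kafka

        Map<String, Object> kafkaParams = new HashMap<>();
        kafkaParams.put("bootstrap.servers", "kafka-node1:9092");
        kafkaParams.put("group.id", "my-consumer-group");
        kafkaParams.put("key.deserializer", org.apache.kafka.common.serialization.StringDeserializer.class);
        kafkaParams.put("value.deserializer", org.apache.kafka.common.serialization.ByteArrayDeserializer.class);
        kafkaParams.put("auto.offset.reset", "earliest");
        kafkaParams.put("enable.auto.commit", false);

        // Direct approach: Spark reads the partitions itself and keeps the offsets in its checkpoints.
        JavaInputDStream<ConsumerRecord<String, byte[]>> stream = KafkaUtils.createDirectStream(
                jssc,
                LocationStrategies.PreferConsistent(),
                ConsumerStrategies.<String, byte[]>Subscribe(Arrays.asList("my_topic"), kafkaParams));

        stream.map(record -> record.value().length).print();

        jssc.start();
        jssc.awaitTermination();
    }
}
```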