tKafkaOutput properties for Apache Spark Streaming
These properties are used to configure tKafkaOutput running in the Spark Streaming Job framework.
The Spark Streaming tKafkaOutput component belongs to the Messaging family.
This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.
Basic settings
Schema and Edit schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid naming a field line, because it is a reserved word. Note that the schema of this component is read-only: it stores the messages to be published. |
Broker list |
Enter the addresses of the broker nodes of the Kafka cluster to be used. Each address takes the form hostname:port, that is, the host name and the port of a broker node in the Kafka cluster. If you need to specify several addresses, separate them using a comma (,). For an illustration of how this and the other Basic settings map onto Kafka's producer API, see the example after this table. |
Topic name |
Enter the name of the topic you want to publish messages to. This topic must already exist. |
Partition |
Enter the partition number to be used in the topic. Note: If you leave this field blank, the partition is selected randomly. |
Key |
Enter the key to be associated with the messages written to the topic. Note: If you leave this field blank, the null key is used. |
Compress the data |
Select this check box to compress the output data. |
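To show how these Basic settings relate to Kafka's client API, the following is a minimal hand-written sketch of a Kafka new producer configured with equivalent values. It is illustration code, not the code the Job generates: the broker addresses (kafka-node1:9092, kafka-node2:9092), the topic name (my_topic), and the key and payload values are hypothetical placeholders.

import java.nio.charset.StandardCharsets;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KafkaOutputSettingsSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Broker list: comma-separated hostname:port addresses of the broker nodes.
        props.put("bootstrap.servers", "kafka-node1:9092,kafka-node2:9092"); // hypothetical brokers
        // The message body is sent as raw bytes; byte-array serializers mirror that.
        props.put("key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
        // Rough equivalent of selecting the Compress the data check box.
        props.put("compression.type", "gzip");

        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            Integer partition = 0;                                              // Partition field; null lets Kafka choose
            byte[] key = "my-key".getBytes(StandardCharsets.UTF_8);             // Key field; null means no key
            byte[] value = "message payload".getBytes(StandardCharsets.UTF_8);  // serialized message body
            // Topic name: the topic must already exist on the cluster.
            producer.send(new ProducerRecord<>("my_topic", partition, key, value));
        }
    }
}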
Advanced settings
Kafka properties |
Add to this table the new Kafka producer properties you need to customize. For further information about the properties you can define in this table, see the section describing the new producer configuration in Kafka's documentation at http://kafka.apache.org/documentation.html#newproducerconfigs. A few commonly tuned properties are illustrated in the example after this table. |
Connection pool |
In this area, you configure, for each Spark executor, the connection pool used to control the number of connections that stay open simultaneously. The default values given to the following connection pool parameters are good enough for most use cases. |
Evict connections |
Select this check box to define criteria to destroy connections in the connection pool. The following fields are displayed once you have selected it. |
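As an illustration of the kind of entries the Kafka properties table accepts, the following sketch sets a few commonly tuned new producer properties programmatically. The property names (acks, retries, linger.ms, batch.size) come from Kafka's new producer configuration reference; the values and the broker address are hypothetical.

import java.util.Properties;

public class ProducerPropertiesSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka-node1:9092"); // hypothetical broker
        // Each pair below corresponds to one row of the Kafka properties table.
        props.put("acks", "all");         // wait for acknowledgment from all in-sync replicas
        props.put("retries", "3");        // retry transient send failures
        props.put("linger.ms", "5");      // wait up to 5 ms to batch messages together
        props.put("batch.size", "32768"); // maximum batch size, in bytes
        System.out.println(props);
    }
}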
Usage
Usage rule |
This component is used as an end component and requires an input link. To send serialized data, it needs a Write component such as tWriteJSONField to define a serializedValue column in the input schema; this pattern is illustrated in the example at the end of this section. This component, along with the Spark Streaming component Palette it belongs to, appears only when you are creating a Spark Streaming Job. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs. |
Spark Connection |
In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access them. This connection is effective on a per-Job basis. |
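To make the serializedValue requirement in the Usage rule more concrete, here is a minimal sketch of the pattern the Job implements: a row is first flattened into a single serialized string (the role a Write component such as tWriteJSONField plays), and that string becomes the body of the Kafka message. This is hand-written illustration code, not the code Talend generates; the JSON payload, broker address, and topic name are hypothetical.

import java.nio.charset.StandardCharsets;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SerializedValueSketch {
    public static void main(String[] args) {
        // Step performed by tWriteJSONField: the row is flattened into one serialized column.
        String serializedValue = "{\"id\":42,\"name\":\"sample\"}"; // hypothetical payload

        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka-node1:9092"); // hypothetical broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");

        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            // Step performed by tKafkaOutput: the serialized column is published as the message body.
            producer.send(new ProducerRecord<>("my_topic",
                    serializedValue.getBytes(StandardCharsets.UTF_8)));
        }
    }
}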