tFlumeInput properties for Apache Spark Streaming
These properties are used to configure tFlumeInput running in the Spark Streaming Job framework.
The Spark Streaming tFlumeInput component belongs to the Messaging family.
The streaming version of this component is available in Talend Real-Time Big Data Platform and in Talend Data Fabric.
Basic settings
Host and Port | Enter the hostname and the port of the machine used as the sink (the data output point bound to the channel of a Flume agent) to receive data from Flume.
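For context, the following is a minimal, hypothetical Flume agent snippet showing an Avro sink bound to such a host and port for the push-based approach; the agent, channel, host name, and port are placeholder values, not values the component requires.

    # Hypothetical Flume agent configuration (push-based approach): an Avro
    # sink bound to the host/port that the Spark Streaming receiver listens on.
    agent.channels = ch1
    agent.sinks = sparkSink
    agent.sinks.sparkSink.type = avro
    agent.sinks.sparkSink.hostname = flume-sink-host
    agent.sinks.sparkSink.port = 4545
    agent.sinks.sparkSink.channel = ch1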
Type | Select the approach used to read data from Flume: the Flume-style push-based approach or the pull-based approach using a custom sink (illustrated in the sketch below). For further information about these two approaches, see https://spark.apache.org/docs/1.3.1/streaming-flume-integration.html.
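As an illustration only, the two approaches correspond to the following calls in the Spark Streaming Flume API; this is not the code the Studio generates, and the host name and port are placeholders.

    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;
    import org.apache.spark.streaming.flume.FlumeUtils;
    import org.apache.spark.streaming.flume.SparkFlumeEvent;

    public class FlumeApproachesSketch {
        public static void main(String[] args) throws Exception {
            SparkConf conf = new SparkConf()
                    .setAppName("FlumeApproachesSketch").setMaster("local[2]");
            JavaStreamingContext jssc =
                    new JavaStreamingContext(conf, Durations.seconds(5));

            // Push-based: a Flume Avro sink pushes events to this receiver.
            JavaReceiverInputDStream<SparkFlumeEvent> events =
                    FlumeUtils.createStream(jssc, "flume-sink-host", 4545);

            // Pull-based alternative: Spark polls a SparkSink in the Flume agent.
            // JavaReceiverInputDStream<SparkFlumeEvent> polled =
            //         FlumeUtils.createPollingStream(jssc, "flume-sink-host", 4545);

            events.count().print();  // print the number of events per batch
            jssc.start();
            jssc.awaitTermination();
        }
    }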
Schema and Edit Schema | A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields.
Built-In: You create and store the schema locally for this component only.
Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs.
The read-only line column is used by tFlumeInput to automatically extract the body of an input Flume event and construct an RDD along with the other columns used to store the headers of the same event.
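As a rough sketch of this mapping (not the generated code; the class and method names are made up), the body of each Flume event feeds the line column while its headers can populate the other columns:

    import java.nio.ByteBuffer;
    import java.nio.charset.StandardCharsets;
    import java.util.Map;
    import org.apache.flume.source.avro.AvroFlumeEvent;
    import org.apache.spark.streaming.flume.SparkFlumeEvent;

    public class EventToRowSketch {
        // Decode one Flume event: body -> "line" column, headers -> other columns.
        static void toRow(SparkFlumeEvent sparkEvent) {
            AvroFlumeEvent event = sparkEvent.event();
            ByteBuffer body = event.getBody();
            byte[] bytes = new byte[body.remaining()];
            body.get(bytes);
            String line = new String(bytes, StandardCharsets.UTF_8);  // "line" column
            Map<CharSequence, CharSequence> headers = event.getHeaders();
            for (Map.Entry<CharSequence, CharSequence> h : headers.entrySet()) {
                System.out.println(h.getKey() + " -> " + h.getValue());
            }
            System.out.println("line = " + line);
        }
    }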
Advanced settings
Encoding | Select the encoding from the list or select Custom and define it manually. This encoding is used by tFlumeInput to decode the byte arrays of the input events.
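A minimal sketch of that decoding step, assuming a helper name and an example charset that are not part of the component:

    import java.nio.ByteBuffer;
    import java.nio.charset.Charset;

    public class DecodeSketch {
        // Decode a raw event body with the charset chosen in the Encoding
        // setting; the method and charset names here are illustrative only.
        static String decodeBody(byte[] rawBody, String encodingName) {
            return Charset.forName(encodingName)
                    .decode(ByteBuffer.wrap(rawBody))
                    .toString();
        }

        public static void main(String[] args) {
            byte[] bytes = {72, 101, 108, 108, 111};
            System.out.println(decodeBody(bytes, "ISO-8859-1"));  // prints Hello
        }
    }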
Usage
Usage rule | This component is used as a start component and requires an output link. At runtime, tFlumeInput keeps listening to the sink and reads new events once they are buffered in it. This component, along with the Spark Streaming component Palette it belongs to, appears only when you are creating a Spark Streaming Job. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.
Spark Connection |
In the Spark
Configuration tab in the Run
view, define the connection to a given Spark cluster for the whole Job. In
addition, since the Job expects its dependent jar files for execution, you must
specify the directory in the file system to which these jar files are
transferred so that Spark can access these files:
This connection is effective on a per-Job basis. |
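For orientation only, the settings captured in that tab correspond roughly to Spark properties like the following; the master URL, directory, and property names are placeholders and vary with the Spark version and cluster manager.

    import org.apache.spark.SparkConf;

    // Hypothetical equivalent of the Spark Configuration tab: which cluster
    // the Job runs on and where its dependent JAR files are transferred.
    SparkConf conf = new SparkConf()
            .setAppName("MyStreamingJob")
            .setMaster("yarn-client")  // example master URL
            .set("spark.yarn.jars", "hdfs://namenode:8020/talend/lib/*");
            // Spark 1.x uses the single-JAR property "spark.yarn.jar" instead.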
Limitation | Due to license incompatibility, one or more JARs required to use this component are not provided. You can install the missing JARs for this particular component by clicking the Install button on the Component tab view, or find and add all missing JARs on the Modules tab in the Integration perspective of your studio. For details about how to install external modules, see Installing external modules in Talend Help Center (https://help.talend.com).