tMapRStreamsInput properties for Apache Spark Streaming
These properties are used to configure tMapRStreamsInput running in the Spark Streaming Job framework.
The Spark Streaming tMapRStreamsInput component belongs to the Messaging family.
This component is available in Talend Real-Time Big Data Platform and Talend Data Fabric.
Basic settings
Schema and Edit schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields. The schema of this component is read-only: it stores the messages sent from the message producer. |
Output type |
Select the type of the data to be sent to the next component. Typically, using String is recommended, because tMapRStreamsInput can automatically translate the MapR Streams byte[] messages into strings for the Job to process. However, if the format of the MapR Streams messages is unknown to tMapRStreamsInput, such as Protobuf, you can select byte and then use a Custom code component such as tJavaRow to deserialize the messages into strings so that the other components of the same Job can process them. |
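For instance, a minimal tJavaRow sketch for turning the byte[] messages back into strings could look like the following; the column names payload and message are hypothetical and must match your actual schema, and UTF-8 is assumed as the source encoding.

    // Hypothetical tJavaRow code: decode the incoming byte[] column into a string.
    // input_row.payload (byte[]) and output_row.message (String) are assumed column names.
    output_row.message = new String(input_row.payload, java.nio.charset.StandardCharsets.UTF_8);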
Topic name |
Enter the name of the topic from which tMapRStreamsInput receives the feed of messages. You must include the name of the stream to which this topic belongs, using the syntax path_to_the_stream:topic_name. |
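For example, assuming a stream created as /sample-stream that contains a topic named events (both names are illustrative), you would enter:

    "/sample-stream:events"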
Starting from |
Select the starting point from which the messages of a topic are consumed. In MapR Streams, the increasing ID number of a message is called an offset. When a new consumer group starts, you can select beginning from this list to start consumption from the oldest message of the entire topic, or select latest to wait for a new message. Note that the consumer group takes only the offset-committed messages into account when determining where to start. Each consumer group keeps its own counter to remember the position of the last message it has consumed. For this reason, once a consumer group starts to consume messages of a given topic, it recognizes the latest message only with regard to the position where it stopped consuming, rather than with regard to the entire topic. Based on this principle, the following behaviors can be expected: a new consumer group either starts from the oldest message of the topic or waits for a new one, depending on your selection, whereas a consumer group that has already committed offsets for a topic resumes from the position where it stopped, whichever option you select here.
|
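For reference, these two options correspond to the auto.offset.reset property of the Kafka-compatible consumer API exposed by MapR Streams; the following minimal sketch, with an assumed group ID and the illustrative topic used above, shows that mapping and is not the code the component generates.

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    Properties props = new Properties();
    props.put("group.id", "my-consumer-group"); // each group keeps its own committed offsets
    // "beginning" maps to "earliest" (oldest message); "latest" waits for new messages.
    props.put("auto.offset.reset", "earliest");
    props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
    KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props);
    consumer.subscribe(Collections.singletonList("/sample-stream:events"));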
Set number of records per second to read from each Kafka partition |
Enter this number in double quotation marks to limit the size of each batch to be sent for processing. For example, if you enter 100 and the batch interval you define in the Spark configuration tab is 2 seconds, each batch contains at most 200 messages per partition. If you leave this check box clear, the component tries to read all the available messages in one second into one single batch before sending it, which can cause the Job to stop responding when the quantity of messages is huge. |
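If you prefer to reason in native Spark terms, this limit is comparable to Spark's spark.streaming.kafka.maxRatePerPartition setting; the sketch below only illustrates the arithmetic and is not the code the component generates.

    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;

    SparkConf conf = new SparkConf()
            .setAppName("RateLimitSketch") // hypothetical application name
            // Read at most 100 records per second from each partition.
            .set("spark.streaming.kafka.maxRatePerPartition", "100");
    // With a 2-second batch interval, each batch holds at most 100 x 2 = 200 records per partition.
    JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(2));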
Advanced settings
Consumer properties |
Add to this table the MapR Streams consumer properties that you need to customize. |
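For example, standard Kafka-style consumer properties such as the following can be added as name/value pairs; the values shown are purely illustrative and should be tuned to your cluster.

    max.partition.fetch.bytes = 1048576
    session.timeout.ms = 30000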
Custom encoding |
You may encounter encoding issues when you process the stored data. In that situation, select this check box to display the Encoding list. tMapRStreamsInput uses the selected encoding to decode the input messages. |
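As an illustration, decoding with an explicit charset in plain Java looks like the following sketch; rawMessage is a hypothetical byte[] payload and ISO-8859-1 an arbitrary example encoding.

    import java.nio.charset.Charset;

    byte[] rawMessage = getMessageBytes(); // hypothetical helper returning the payload
    String decoded = new String(rawMessage, Charset.forName("ISO-8859-1"));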
Usage
Usage rule |
This component is used as a start component and requires an output link. |
Spark Connection |
In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access them. This connection is effective on a per-Job basis. |
Prerequisites |
The Hadoop distribution must be properly installed to guarantee interaction with Talend Studio. For further information about how to install a Hadoop distribution, see the manuals corresponding to the Hadoop distribution you are using. |