tKinesisOutput properties for Apache Spark Streaming
These properties are used to configure tKinesisOutput running in the Spark Streaming Job framework.
The Spark Streaming tKinesisOutput component belongs to the Messaging family.
The streaming version of this component is available in Talend Real-Time Big Data Platform and in Talend Data Fabric.
Basic settings
Schema and Edit Schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields. The schema of this component is read-only. You can click Edit schema to view the schema. The read-only serializedValue column carries the body of the message to be added to Kinesis. Note that you must use a Write component such as tWriteJSONField to define a serializedValue column of the same name in the input schema in order to send serialized data through this read-only column. The other columns are automatically retrieved from the schema of the preceding component and are added as headers to the message to be output. |
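For illustration, the sketch below shows the kind of payload the serializedValue column is expected to carry: a row serialized to a JSON string, which is what a Write component such as tWriteJSONField produces inside a Job. The Order class and its fields are hypothetical, and Jackson is only a stand-in serializer here, not necessarily what Talend uses internally.

    import com.fasterxml.jackson.databind.ObjectMapper;

    public class SerializedValueExample {

        // Hypothetical record type standing in for an input row.
        static class Order {
            public String orderId = "A-1001";
            public double amount = 42.5;
        }

        public static void main(String[] args) throws Exception {
            // In a Job, tWriteJSONField plays this role: it turns the row
            // into a JSON string that populates the serializedValue column.
            ObjectMapper mapper = new ObjectMapper();
            String serializedValue = mapper.writeValueAsString(new Order());
            System.out.println(serializedValue); // {"orderId":"A-1001","amount":42.5}
        }
    }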
Access key |
Enter the access key ID that uniquely identifies an AWS Account. For further information about how to get your Access Key and Secret Key, see Getting Your AWS Access Keys. |
Secret key |
Enter the secret access key, which together with the access key constitutes your AWS security credentials. To enter the secret key, click the [...] button next to the field, enter the key between double quotes in the pop-up dialog box, and click OK to save the setting. |
Stream name |
Enter the name of the Kinesis stream you want to add data to. |
Endpoint URL |
Enter the endpoint of the Kinesis service to be used. For example, https://kinesis.us-east-1.amazonaws.com. More valid Kinesis endpoint URLs can be found at http://docs.aws.amazon.com/general/latest/gr/rande.html#ak_region. |
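For context, the endpoint entered here is the same one an AWS SDK Kinesis client would target. A minimal sketch, assuming the AWS SDK for Java v1 (aws-java-sdk-kinesis) is available; it is only meant to show the endpoint/region pairing, not how Talend builds its client.

    import com.amazonaws.client.builder.AwsClientBuilder.EndpointConfiguration;
    import com.amazonaws.services.kinesis.AmazonKinesis;
    import com.amazonaws.services.kinesis.AmazonKinesisClientBuilder;

    public class EndpointExample {
        public static void main(String[] args) {
            // The same endpoint/region pair you would enter in the
            // Endpoint URL field of tKinesisOutput.
            AmazonKinesis client = AmazonKinesisClientBuilder.standard()
                    .withEndpointConfiguration(new EndpointConfiguration(
                            "https://kinesis.us-east-1.amazonaws.com", "us-east-1"))
                    .build();
            System.out.println("Client configured: " + (client != null));
        }
    }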
Number of shard |
Enter the number of partitions (shards in terms of Kinesis) to be created in the target Kinesis stream. |
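When sizing this number, keep in mind the standard Kinesis write limits of 1 MB/s and 1,000 records/s per shard. A minimal sketch of the sizing arithmetic; the expected throughput figures are hypothetical:

    public class ShardSizing {
        public static void main(String[] args) {
            // Hypothetical peak write load for the stream.
            double recordsPerSecond = 3500;
            double avgRecordSizeKb = 2.0;

            // Standard Kinesis write limits: 1 MB/s and 1,000 records/s per shard.
            double mbPerSecond = recordsPerSecond * avgRecordSizeKb / 1024.0;
            int byThroughput = (int) Math.ceil(mbPerSecond);               // 7 shards for ~6.8 MB/s
            int byRecordRate = (int) Math.ceil(recordsPerSecond / 1000.0); // 4 shards for 3,500 rec/s

            // Enough shards are needed to satisfy both limits.
            System.out.println("Shards needed: " + Math.max(byThroughput, byRecordRate));
        }
    }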
Advanced settings
Connection pool |
In this area, you configure, for each Spark executor, the connection pool used to control the number of connections that stay open simultaneously. The default values given to the connection pool parameters are good enough for most use cases. |
Evict connections |
Select this check box to define criteria to destroy connections in the connection pool. The fields for these criteria are displayed once you have selected the check box. |
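For intuition, the pool and eviction parameters behave like those of a generic object pool. A minimal sketch using Apache Commons Pool 2, whose configuration mirrors the kind of parameters exposed in this area; the values shown are placeholders, not recommended settings, and Commons Pool serves here only as an analogy.

    import org.apache.commons.pool2.impl.GenericObjectPoolConfig;

    public class PoolConfigExample {
        public static void main(String[] args) {
            GenericObjectPoolConfig<Object> config = new GenericObjectPoolConfig<>();

            // Cap how many connections may stay open simultaneously.
            config.setMaxTotal(8);
            config.setMaxIdle(8);
            config.setMinIdle(0);

            // Eviction criteria: periodically destroy connections that
            // have sat idle too long, as Evict connections does.
            config.setTimeBetweenEvictionRunsMillis(60_000L);
            config.setMinEvictableIdleTimeMillis(1_800_000L);

            System.out.println("maxTotal=" + config.getMaxTotal());
        }
    }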
Usage
Usage rule |
This component is used as an end component and requires an input link. It needs a Write component such as tWriteJSONField to define a serializedValue column in the input schema so that serialized data can be sent. This component, along with the Spark Streaming component Palette it belongs to, appears only when you are creating a Spark Streaming Job. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs. |
Spark Connection |
In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files. This connection is effective on a per-Job basis. |
Limitation |
Due to license incompatibility, one or more JARs required to use this component are not provided. You can install the missing JARs for this particular component by clicking the Install button on the Component tab view. You can also identify and add all the missing JARs easily on the Modules tab in the Integration perspective of Talend Studio. For details, see Installing external modules. |