tDynamoDBConfiguration properties for Apache Spark Streaming
These properties are used to configure tDynamoDBConfiguration running in the Spark Streaming Job framework.
The Spark Streaming tDynamoDBConfiguration component belongs to the Storage and the Databases families.
This component is available in Talend Real-Time Big Data Platform and Talend Data Fabric.
Basic settings
Access key
Enter the access key ID that uniquely identifies an AWS account. For further information about how to get your access key and secret key, see Getting Your AWS Access Keys.
Secret key
Enter the secret access key which, combined with the access key, makes up your security credentials. To enter the secret key, click the [...] button next to the Secret key field, enter the key between double quotes in the dialog box that opens, and click OK to save the settings.
Region
Specify the AWS region by selecting a region name from the list. For more information about AWS regions, see Regions and Endpoints.
Use End Point
Select this check box and, in the field that is displayed, specify the Web service URL of the DynamoDB database service.
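Talend generates the connection code for you from these settings. Purely as an illustration, the following minimal sketch shows how the same Basic settings (access key, secret key, region, and the optional endpoint override) map onto an AWS SDK for Java (v1) DynamoDB client. The class name, credential placeholders, region, and endpoint URL are assumptions for the example, not values taken from the component.

```java
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;

public class DynamoDBClientSketch {

    public static AmazonDynamoDB buildClient(boolean useEndPoint) {
        // Access key / Secret key: the credentials entered in Basic settings
        // (placeholder values; never hard-code real credentials).
        BasicAWSCredentials credentials =
                new BasicAWSCredentials("MY_ACCESS_KEY_ID", "MY_SECRET_ACCESS_KEY");

        AmazonDynamoDBClientBuilder builder = AmazonDynamoDBClientBuilder.standard()
                .withCredentials(new AWSStaticCredentialsProvider(credentials));

        if (useEndPoint) {
            // "Use End Point": the Web service URL replaces the region-derived endpoint.
            builder.withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
                    "https://dynamodb.us-east-1.amazonaws.com", "us-east-1"));
        } else {
            // "Region": the region name selected from the list.
            builder.withRegion(Regions.US_EAST_1);
        }
        return builder.build();
    }
}
```

Note that the endpoint override carries its own signing region, which is why the sketch treats Region and Use End Point as mutually exclusive, mirroring how the check box reveals a separate URL field.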
Advanced settings
Connection pool
In this area, you configure, for each Spark executor, the connection pool used to control the number of connections that stay open simultaneously. The default values of the connection pool parameters are sufficient for most use cases.
Evict connections
Select this check box to define criteria to destroy connections in the connection pool. The following fields are displayed once you have selected it.
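The documentation does not state which pooling library backs these parameters, so the following is only a rough sketch of the same two ideas (a bounded pool plus an evictor thread) expressed with Apache Commons Pool 2. Every name and value below is an illustrative assumption, not a Talend default.

```java
import org.apache.commons.pool2.impl.GenericObjectPoolConfig;

public class PoolConfigSketch {

    public static GenericObjectPoolConfig<Object> buildPoolConfig() {
        GenericObjectPoolConfig<Object> config = new GenericObjectPoolConfig<>();

        // Connection pool: cap the number of connections kept open at the
        // same time on one executor (illustrative values).
        config.setMaxTotal(8);
        config.setMaxIdle(8);
        config.setMinIdle(0);

        // Evict connections: an evictor thread periodically destroys
        // connections that have been idle for too long.
        config.setTimeBetweenEvictionRunsMillis(60_000L);  // interval between eviction runs
        config.setMinEvictableIdleTimeMillis(120_000L);    // idle time before a connection may be destroyed

        return config;
    }
}
```

The eviction criteria in the component play the same role as the two eviction timings above: how often the pool is scanned, and how long a connection must sit idle before it is eligible for destruction.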
Usage
Usage rule
This component is used with no need to be connected to other components. The configuration in a tDynamoDBConfiguration component applies only to the DynamoDB-related components in the same Job. In other words, DynamoDB components used in a child or parent Job called via tRunJob cannot reuse this configuration.

This component, along with the Spark Streaming component Palette it belongs to, appears only when you are creating a Spark Streaming Job. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.