tDynamoDBOutput properties for Apache Spark Batch
These properties are used to configure tDynamoDBOutput running in the Spark Batch Job framework.
The Spark Batch tDynamoDBOutput component belongs to the Databases family.
The component in this framework is available in all Talend products with Big Data and Talend Data Fabric.
Basic settings
Use an existing connection |
Select this check box and, in the Component List drop-down list, select the desired connection component to reuse the connection details you already defined. |
Inherit credentials from AWS role |
Select this check box to leverage the instance profile credentials. These credentials can be used on Amazon EC2 instances and are delivered through the Amazon EC2 metadata service. To use this option, your Job must be running within Amazon EC2 or other services that can leverage IAM roles for access to resources. For more information, see Using an IAM Role to Grant Permissions to Applications Running on Amazon EC2 Instances.

Note: This option is available when Use an existing connection is cleared.
|
Access Key |
Enter the access key ID that uniquely identifies an AWS account. For further information about how to get your Access Key and Secret Key, see Getting Your AWS Access Keys.

Note: This option is available when Use an existing connection and Inherit credentials from AWS role are cleared.
|
Secret Key |
Enter the secret access key, which, combined with the access key, constitutes your security credentials. To enter the secret key, click the [...] button next to the Secret Key field, then in the pop-up dialog box enter the secret key between double quotes and click OK to save the settings.

Note: This option is available when Use an existing connection and Inherit credentials from AWS role are cleared.
|
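As an illustration only, the following minimal sketch shows how these two credential modes typically map onto the AWS SDK for Java v1. This is an assumption about the underlying client, not a description of the component's internals; the region and key values are placeholders.

    import com.amazonaws.auth.AWSStaticCredentialsProvider;
    import com.amazonaws.auth.BasicAWSCredentials;
    import com.amazonaws.auth.InstanceProfileCredentialsProvider;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;

    public class CredentialModes {
        public static void main(String[] args) {
            // "Inherit credentials from AWS role": the keys come from the
            // EC2 instance metadata service, so none are stored in the Job.
            AmazonDynamoDB fromRole = AmazonDynamoDBClientBuilder.standard()
                    .withCredentials(InstanceProfileCredentialsProvider.getInstance())
                    .withRegion("us-east-1") // placeholder region
                    .build();

            // "Access Key" and "Secret Key": explicit static credentials.
            AmazonDynamoDB fromKeys = AmazonDynamoDBClientBuilder.standard()
                    .withCredentials(new AWSStaticCredentialsProvider(
                            new BasicAWSCredentials("MY_ACCESS_KEY_ID", "MY_SECRET_KEY")))
                    .withRegion("us-east-1") // placeholder region
                    .build();
        }
    }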
Use End Point |
Select this check box and, in the Server Url field that is displayed, specify the Web service URL of the DynamoDB database service. |
Region |
Specify the AWS region by selecting a region name from the list or by entering a region between double quotation marks (for example, "us-east-1"). For more information about AWS regions, see Regions and Endpoints. |
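For illustration, here is a hypothetical sketch of how a custom endpoint and region are typically combined in the AWS SDK for Java v1; this is an assumption, not the component's actual code, and the URL and region strings are placeholders. When a custom service URL is supplied, the SDK pairs it with a signing region instead of taking a region alone.

    import com.amazonaws.client.builder.AwsClientBuilder;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;

    public class EndpointExample {
        public static void main(String[] args) {
            AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
                    .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
                            "https://dynamodb.us-east-1.amazonaws.com", // Server Url (placeholder)
                            "us-east-1"))                               // Region (placeholder)
                    .build();
        }
    }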
Schema and Edit schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields.
Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

View schema: select this option to view the schema only.

Change to built-in property: select this option to change the schema to Built-in for local changes.

Update repository connection: select this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion.
|
Table Name |
Specify the name of the table in which you need to write data. This table must already exist. |
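To make the "table must already exist" constraint concrete, here is a hypothetical single-item write against an existing table using the AWS SDK for Java v1; the table name ("my_table") and attribute names are placeholders for illustration, not names used by the component.

    import java.util.HashMap;
    import java.util.Map;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
    import com.amazonaws.services.dynamodbv2.model.AttributeValue;

    public class PutRowExample {
        public static void main(String[] args) {
            AmazonDynamoDB client = AmazonDynamoDBClientBuilder.defaultClient();
            Map<String, AttributeValue> item = new HashMap<>();
            item.put("id", new AttributeValue().withN("1"));       // key attribute (placeholder)
            item.put("name", new AttributeValue().withS("Alice")); // schema column (placeholder)
            // putItem fails with a ResourceNotFoundException if the
            // table has not been created beforehand.
            client.putItem("my_table", item);
        }
    }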
Die on error |
Select this check box to stop the execution of the Job when an error occurs. |
Advanced settings
Throughput write percent |
Enter, without using quotation marks, the percentage (expressed as a decimal, for example 0.5 for 50%) of the write capacity pre-defined in Amazon that the component may use. For further information about this write capacity, see Provision throughput for write. |
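As a worked example of the arithmetic: a table provisioned with 100 write capacity units and a value of 0.5 in this field targets roughly 50 write units per second. The sketch below illustrates that calculation with Guava's RateLimiter, which is an assumption chosen for illustration, not necessarily the throttling mechanism the component uses.

    import com.google.common.util.concurrent.RateLimiter;

    public class WriteThrottleExample {
        public static void main(String[] args) {
            double provisionedWcu = 100.0;       // write capacity defined in Amazon (placeholder)
            double throughputWritePercent = 0.5; // value entered in this field (placeholder)
            // 100 WCU * 0.5 = at most ~50 write units consumed per second.
            RateLimiter limiter = RateLimiter.create(provisionedWcu * throughputWritePercent);
            limiter.acquire(); // block until one write permit is available
            // ... perform a write consuming about one capacity unit here ...
        }
    }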
Advanced properties |
Add properties to define extra operations you need tDynamoDBOutput to perform when writing data. This table is present for future evolution of the component, and using it requires advanced knowledge of DynamoDB development. Currently, there are no user-configurable properties of interest. |
Usage
Usage rule |
This component is used as an end component and requires an input link.

This component should use a tDynamoDBConfiguration component present in the same Job to connect to a DynamoDB database. You need to drop a tDynamoDBConfiguration component alongside this component and configure the Basic settings of this component to use that connection.

This component, along with the Spark Batch component Palette it belongs to, appears only when you are creating a Spark Batch Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs. |
Spark Connection |
In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files.

This connection is effective on a per-Job basis. |
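Conceptually, this resembles pointing Spark at a location holding the Job's dependency jars. The hypothetical illustration below uses the standard Spark property spark.jars; the path and application name are placeholders, and the actual mechanism Talend Studio uses to stage jars may differ.

    import org.apache.spark.SparkConf;

    public class SparkConnectionExample {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf()
                    .setAppName("TalendSparkBatchJob") // placeholder name
                    .setMaster("yarn")                 // placeholder cluster manager
                    // Placeholder: location from which the Job's dependent
                    // jars are made available to the Spark executors.
                    .set("spark.jars", "hdfs:///user/talend/jobjars/mydeps.jar");
            // The SparkContext built from this conf would distribute the
            // listed jars to the driver and executor classpaths.
        }
    }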