tHBaseDeleteRows properties for Apache Spark Batch
These properties are used to configure tHBaseDeleteRows running in the Spark Batch Job framework.
The Spark Batch tHBaseDeleteRows component belongs to the Databases family.
The component in this framework is available in all subscription-based Talend products with Big Data and Talend Data Fabric.
Basic settings
Storage Configuration |
Select the tHBaseConfiguration component that provides the configuration information the Spark system to be used reads to connect to HBase. |
Property type |
Either Built-In or Repository. Built-In: No property data stored centrally. Repository: Select the repository file where the properties are stored. |
Table name |
Type in the name of the HBase table to remove rows from. This table must already exist. |
Row key column |
Select, from the drop-down list, the column to be used as the row key column of the HBase table. |
Custom Row Key |
Select this check box to use customized row keys. Once it is selected, the corresponding field appears; type in the user-defined row key used to index the rows of the HBase table. For example, you can type in "France"+Numeric.sequence("s1",1,1) to produce the row key series France1, France2, France3, and so on (see the sketch after these settings for a stand-alone illustration). |
Die on HBase errors | Select this option to stop the execution of the Job when an HBase error occurs. |
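Taken together, these settings describe an HBase connection, a target table, and the row keys of the rows to remove. As a rough stand-alone illustration of the equivalent operations in the HBase Java client API, here is a minimal sketch; the ZooKeeper quorum, the table name customers, and the France-prefixed keys are placeholder assumptions, and a plain counter stands in for the Numeric.sequence routine.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseDeleteRowsSketch {
    public static void main(String[] args) throws Exception {
        // Connection details normally supplied by tHBaseConfiguration; placeholder values here.
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "zookeeper.example.com");
        conf.set("hbase.zookeeper.property.clientPort", "2181");

        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("customers"))) {
            // A plain counter stands in for Numeric.sequence("s1", 1, 1):
            // the composed keys France1, France2, France3 identify the rows to remove.
            for (int sequence = 1; sequence <= 3; sequence++) {
                String rowKey = "France" + sequence;
                table.delete(new Delete(Bytes.toBytes(rowKey)));
            }
        }
    }
}

In a Spark Batch Job you do not write this code yourself; the component performs the equivalent deletions based on the incoming records and the settings above.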
Global Variables
Global Variables |
ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on HBase errors check box is cleared. A Flow variable functions during the execution of a component while an After variable functions after the execution of the component. To fill a field or expression with a variable, press Ctrl+Space to access the variable list and choose the variable to use from it. For more information about variables, see Using contexts and variables. A stand-alone illustration of reading this variable follows this table. |
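As an illustration of how such an After variable is typically read, here is a minimal sketch of code placed in a tJava component downstream of the subJob; the instance name tHBaseDeleteRows_1 is an assumption, and globalMap is provided by the generated Job code.

// Placed in a tJava component; globalMap is supplied by the generated Job code.
// tHBaseDeleteRows_1 is an assumed instance name; adjust it to the actual component label.
String errorMessage = (String) globalMap.get("tHBaseDeleteRows_1_ERROR_MESSAGE");
if (errorMessage != null) {
    System.err.println("HBase delete reported an error: " + errorMessage);
}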
Usage
Usage rule |
This component can be used as a standalone component. |
Spark Connection |
In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files. This connection is effective on a per-Job basis. |