
tHBaseDeleteRows properties for Apache Spark Batch

These properties are used to configure tHBaseDeleteRows running in the Spark Batch Job framework.

The Spark Batch tHBaseDeleteRows component belongs to the Databases family.

The component in this framework is available in all subscription-based Talend products with Big Data and Talend Data Fabric.

Note: This component is available only when you have installed the 8.0.1-R2023-04 Talend Studio Monthly update or a later one delivered by Talend. For more information, check with your administrator.

Basic settings

Storage Configuration

Select the tHBaseConfiguration component from which the Spark system reads the configuration information used to connect to HBase.

Property type

Either Built-In or Repository.

Built-In: No property data stored centrally.

Repository: Select the repository file where the properties are stored.

Table name

Type in the name of the HBase table to remove rows from. This table must already exist.

Row key column

Select from the drop-down list the column to be used as the row key column of the HBase table. The values in this column identify the rows to be deleted.
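
Conceptually, for each incoming row the component performs an HBase delete keyed on this column. The following is a minimal sketch of the equivalent operation using the standard HBase Java client API; the table name and row key value are hypothetical, and in a real Job the connection settings come from tHBaseConfiguration:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Delete;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class DeleteRowSketch {
        public static void main(String[] args) throws Exception {
            // Reads hbase-site.xml from the classpath; in a Job this comes from tHBaseConfiguration.
            Configuration conf = HBaseConfiguration.create();
            try (Connection connection = ConnectionFactory.createConnection(conf);
                 Table table = connection.getTable(TableName.valueOf("customers"))) { // hypothetical table name
                // The value of the selected row key column identifies the row to remove.
                Delete delete = new Delete(Bytes.toBytes("France1")); // hypothetical row key value
                table.delete(delete); // removes the entire row, across all column families
            }
        }
    }
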
Custom Row Key

Select this check box to use customized row keys. Once selected, the corresponding field appears; type in the user-defined row key expression that identifies the rows of the HBase table to be deleted.

For example, you can type in "France"+Numeric.sequence("s1",1,1) to produce the row key series: France1, France2, France3 and so on.
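
As an illustration only, the series produced by that expression can be reproduced in plain Java; Numeric.sequence is a Talend system routine that returns a named counter starting at the first numeric argument and incrementing by the second:

    public class RowKeySeries {
        public static void main(String[] args) {
            int start = 1, step = 1; // the two numeric arguments passed to Numeric.sequence("s1", 1, 1)
            for (int i = 0; i < 3; i++) {
                // Each call to the routine returns the next value of the counter named "s1".
                System.out.println("France" + (start + i * step)); // France1, France2, France3
            }
        }
    }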

Die on HBase errors

Select this option to stop the execution of the Job when an HBase error occurs.

Global Variables

Global Variables

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on HBase errors check box is cleared.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To fill a field or expression with a variable, press Ctrl+Space to access the variable list and choose the variable to use from it.
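
In code contexts such as a tJava component, an After variable is typically read from the globalMap using the component's unique name as a prefix. A minimal sketch, assuming a component labelled tHBaseDeleteRows_1 (the label is hypothetical) and placed where globalMap is in scope:

    // globalMap stores values as Object, hence the cast to String.
    String error = (String) globalMap.get("tHBaseDeleteRows_1_ERROR_MESSAGE");
    if (error != null) {
        System.err.println("HBase delete failed: " + error);
    }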

For more information about variables, see Using contexts and variables.

Usage

Usage rule

This component can be used as a standalone component.

Spark Connection

In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access them:
  • Yarn mode (Yarn client or Yarn cluster):
    • When using Google Dataproc, specify a bucket in the Google Storage staging bucket field in the Spark configuration tab.

    • When using HDInsight, specify the blob to be used for Job deployment in the Windows Azure Storage configuration area in the Spark configuration tab.

    • When using Altus, specify the S3 bucket or the Azure Data Lake Storage for Job deployment in the Spark configuration tab.
    • When using on-premises distributions, use the configuration component corresponding to the file system your cluster is using. Typically, this system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the configuration component corresponding to the file system your cluster is using, such as tHDFSConfiguration or tS3Configuration.

    If you are using Databricks without any configuration component present in your Job, your business data is written directly in DBFS (Databricks Filesystem).

This connection is effective on a per-Job basis.
