tHBaseInput Standard properties
These properties are used to configure tHBaseInput running in the Standard Job framework.
The Standard tHBaseInput component belongs to the Big Data and the Databases NoSQL families.
The component in this framework is available in all Talend products with Big Data and in Talend Data Fabric.
Basic settings
Property type |
Either Built-In or Repository. Built-In: No property data stored centrally. Repository: Select the repository file where the properties are stored. |
Click this icon to open a database connection wizard and store the database connection parameters you set in the component Basic settings view. For more information about setting up and storing database connection parameters, see Talend Studio User Guide. |
Use an existing connection |
Select this check box and, from the Component List drop-down list, select the desired connection component to reuse the connection details you have already defined. |
Distribution |
Select the cluster you are using from the drop-down list. The options in the list vary depending on the component you are using. Among these options, the following ones require specific configuration: |
HBase version |
Select the version of the Hadoop distribution you are using. The available options vary depending on the component you are using. |
Hadoop version of the distribution |
This list is displayed only when you have selected Custom from the distribution list to connect to a cluster not yet officially supported by the Studio. In this situation, you need to select the Hadoop version of this custom cluster, that is to say, Hadoop 1 or Hadoop 2. |
Zookeeper quorum |
Type in the name or the URL of the Zookeeper service you use to coordinate the transaction between your Studio and your database. Note that when you configure Zookeeper, you might need to explicitly set the zookeeper.znode.parent property to define the path to the root znode that contains all the znodes created and used by your database; if so, select the Set Zookeeper znode parent check box to define this property. |
Zookeeper client port |
Type in the number of the client listening port of the Zookeeper service you are using. |
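To make these Zookeeper settings concrete, here is a minimal HBase client sketch in Java showing where the quorum, the client port, and the optional zookeeper.znode.parent property end up. The host names, port, and znode path are placeholder assumptions, not values from this documentation:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class HBaseConnectionSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Zookeeper quorum: comma-separated host names (placeholder values)
        conf.set("hbase.zookeeper.quorum", "zk-host1,zk-host2,zk-host3");
        // Zookeeper client port (2181 is the common default)
        conf.set("hbase.zookeeper.property.clientPort", "2181");
        // Optional root znode, corresponding to the Set Zookeeper znode parent
        // check box ("/hbase" is a common default)
        conf.set("zookeeper.znode.parent", "/hbase");

        try (Connection connection = ConnectionFactory.createConnection(conf)) {
            System.out.println("Connected: " + !connection.isClosed());
        }
    }
}
```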
Use Kerberos authentication |
If the database to be used is running with Kerberos security, select this check box, then enter the principal names in the displayed fields. You should be able to find this information in the hbase-site.xml file of the cluster to be used.
If you need to use a Kerberos keytab file to log in, select Use a keytab to authenticate. A keytab file contains pairs of Kerberos principals and encrypted keys. You need to enter the principal to be used in the Principal field and the access path to the keytab file itself in the Keytab field. This keytab file must be stored on the machine in which your Job actually runs, for example, on a Talend JobServer. Note that the user that executes a keytab-enabled Job is not necessarily the one a principal designates but must have the right to read the keytab file being used. For example, the username you are using to execute a Job is user1 and the principal to be used is guest; in this situation, ensure that user1 has the right to read the keytab file to be used. |
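For reference, a keytab-based login at the HBase client level typically goes through Hadoop's UserGroupInformation API, along the lines of the hedged sketch below. The principal names and paths are placeholders; the real values come from the hbase-site.xml file of your cluster:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.security.UserGroupInformation;

public class KerberosLoginSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hadoop.security.authentication", "kerberos");
        conf.set("hbase.security.authentication", "kerberos");
        // Server-side principals, normally found in hbase-site.xml (placeholders)
        conf.set("hbase.master.kerberos.principal", "hbase/_HOST@EXAMPLE.COM");
        conf.set("hbase.regionserver.kerberos.principal", "hbase/_HOST@EXAMPLE.COM");

        UserGroupInformation.setConfiguration(conf);
        // The OS user running this code (e.g. user1) must be able to read the
        // keytab file, even though the principal may designate someone else (e.g. guest).
        UserGroupInformation.loginUserFromKeytab(
                "guest@EXAMPLE.COM", "/path/to/guest.keytab");
    }
}
```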
Schema and Edit schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields. Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:
View schema: select this option to view the schema only.
Change to built-in property: select this option to change the schema to Built-In for local changes.
Update repository connection: select this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. |
Built-In: You create and store the schema locally for this component only. |
Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. |
Set table Namespace mappings |
Enter the string to be used to construct the mapping between an Apache HBase table and a MapR table. For the valid syntax you can use, see http://doc.mapr.com/display/MapR40x/Mapping+Table+Namespace+Between+Apache+HBase+Tables+and+MapR+Tables. |
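As a hypothetical illustration only (the authoritative syntax is at the MapR link above), such a mapping is ultimately carried by a client property, assumed here to be hbase.table.namespace.mappings, reusing the Configuration object from the first sketch:

```java
// Hypothetical mapping: tables whose names start with "mapr_" are resolved
// to MapR tables under /user/mapr/tables; see the MapR documentation linked
// above for the valid syntax.
conf.set("hbase.table.namespace.mappings", "mapr_*:/user/mapr/tables");
```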
Table name |
Type in the name of the table from which you need to extract columns. |
Define a row selection |
Select this check box and then, in the Start row and End row fields, enter the corresponding row keys to specify the range of rows you want the current component to extract. Unlike the filters you can set with Is by filter, which require all records to be loaded before the ones to be used are filtered out, this feature lets you directly select only the rows to be used. |
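At the HBase API level, this kind of key-range selection corresponds to a Scan bounded by start and stop rows, as in the sketch below. The row keys are placeholders, and the withStartRow/withStopRow methods assume a recent (HBase 2.x style) client; older clients use the deprecated setStartRow/setStopRow instead:

```java
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

// Only rows whose keys fall in the given range are read from the region
// servers, instead of loading all records and filtering them afterwards.
Scan scan = new Scan();
scan.withStartRow(Bytes.toBytes("row-00100"));  // start row (inclusive)
scan.withStopRow(Bytes.toBytes("row-00200"));   // end row (exclusive)
```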
Mapping |
Complete this table to map the columns of the table to be used with the schema columns you have defined for the data flow to be processed. |
Die on error |
Select the check box to stop the execution of the Job when an error occurs. Clear the check box to skip any rows on error and complete the process for error-free rows. When errors are skipped, you can collect the rows on error using a Row > Reject link. |
Advanced settings
tStatCatcher Statistics |
Select this check box to collect log data at the component level. |
Properties |
If you need to use custom configuration for your database, complete this table with the property or properties to be customized. Then at runtime, the customized property or properties override the corresponding ones used by the Studio. For example, if you need to define the value of the dfs.replication property as 1 for the database configuration, add one row to this table using the plus button and type in the name and the value of this property in that row.
Note: This table is not available when you are reusing a connection via the Use an existing connection check box in the Basic settings view. |
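For illustration, the dfs.replication override mentioned above amounts to the following client-side call (a sketch; the component performs the equivalent internally from the values you enter in the table):

```java
// Equivalent of adding a row "dfs.replication" / "1" to the Properties table:
// the custom value overrides the one the Studio would otherwise use.
conf.set("dfs.replication", "1");
```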
Is by filter |
Select this check box to use filters to perform fine-grained data selection from your database, such as selection of keys or values, based on regular expressions. Once you select it, the Filter table used to define filtering conditions becomes available. This feature leverages filters provided by HBase and is subject to constraints explained in the Apache HBase documentation. Therefore, advanced knowledge of HBase is required to make full use of these filters. |
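For instance, a regular-expression selection of keys like the one described above maps to an HBase RowFilter with a RegexStringComparator, as in this sketch applied to the Scan object from the earlier example (the pattern is a placeholder; CompareOperator is the HBase 2.x form of the comparison enum):

```java
import org.apache.hadoop.hbase.CompareOperator;
import org.apache.hadoop.hbase.filter.RegexStringComparator;
import org.apache.hadoop.hbase.filter.RowFilter;

// Keep only the rows whose key matches the regular expression.
RowFilter rowFilter = new RowFilter(
        CompareOperator.EQUAL, new RegexStringComparator("^user_.*"));
scan.setFilter(rowFilter);
```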
Logical operation |
Select the operator you need to use to define the logical relation between filters. The available operators are:
And: every defined filtering condition must be satisfied. It represents the relationship Filter1 And Filter2.
Or: at least one of the defined filtering conditions must be satisfied. It represents the relationship Filter1 Or Filter2. |
Filter |
Click the button under this table to add as many rows as required, each row representing a filter. The parameters you may need to set for a filter are: |
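When several filters are combined, the logical operation selected above corresponds to an HBase FilterList: MUST_PASS_ALL behaves like And and MUST_PASS_ONE like Or. A sketch combining two hypothetical filters on the same Scan object (column family, qualifier, and values are placeholders):

```java
import org.apache.hadoop.hbase.CompareOperator;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.RegexStringComparator;
import org.apache.hadoop.hbase.filter.RowFilter;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

// MUST_PASS_ALL = And (every condition must hold);
// MUST_PASS_ONE = Or (at least one condition must hold).
FilterList filters = new FilterList(FilterList.Operator.MUST_PASS_ALL);
filters.addFilter(new RowFilter(
        CompareOperator.EQUAL, new RegexStringComparator("^user_.*")));
filters.addFilter(new SingleColumnValueFilter(
        Bytes.toBytes("cf"), Bytes.toBytes("status"),
        CompareOperator.EQUAL, Bytes.toBytes("active")));
scan.setFilter(filters);
```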
Retrieve timestamps |
Select this check box to load the timestamps of an HBase column into the data flow. |
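At the API level, the cell timestamp that this option exposes can be read from each Cell of a scan Result, for example (given a Result obtained from the Scan in the earlier sketches):

```java
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

// For each cell of a result row, read the value together with its timestamp.
for (Cell cell : result.rawCells()) {
    String value = Bytes.toString(CellUtil.cloneValue(cell));
    long timestamp = cell.getTimestamp();  // write time of this cell version
    System.out.println(value + " @ " + timestamp);
}
```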
Global Variables
Global Variables |
NB_LINE: the number of rows read by an input component or transferred to an output component. This is an After variable and it returns an integer.
ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.
A Flow variable functions during the execution of a component while an After variable functions after the execution of the component. To fill in a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it. For further information about variables, see Talend Studio User Guide. |
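In a Job, these variables are read from the globalMap after the component has run, for example in a tJava component placed downstream. The tHBaseInput_1 label below is an assumption; use the actual name of your component:

```java
// Code of a tJava component executed after tHBaseInput_1 has finished:
Integer nbLine = (Integer) globalMap.get("tHBaseInput_1_NB_LINE");
String errorMessage = (String) globalMap.get("tHBaseInput_1_ERROR_MESSAGE");
System.out.println("Rows read: " + nbLine);
if (errorMessage != null) {
    System.out.println("Last error: " + errorMessage);
}
```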
Usage
Usage rule |
This component is a start component of a Job and always needs an output link. |
Prerequisites |
Before starting, ensure that you have met the Loopback IP prerequisites expected by your database. The Hadoop distribution must be properly installed, so as to guarantee the interaction with Talend Studio. The following list presents MapR-related information as an example.
For further information about how to install a Hadoop distribution, see the manuals corresponding to the Hadoop distribution you are using. |