tMapRDBConfiguration properties for Apache Spark Batch
These properties are used to configure tMapRDBConfiguration running in the Spark Batch Job framework.
The Spark Batch tMapRDBConfiguration component belongs to the Storage and the Databases families.
The component in this framework is available in all subscription-based Talend products with Big Data and Talend Data Fabric.
Basic settings
Property type
Either Built-In or Repository.
Built-In: No property data stored centrally.
Repository: Select the repository file where the properties are stored. The properties are stored centrally under the Hadoop Cluster node of the Repository tree.
Distribution and Version
Select the MapR distribution to be used. Only MapR V5.2 onwards is supported by the MapRDB components.
If the distribution you need to use with your MapRDB database is not officially supported by this MapRDB component, that is to say, it is a MapR distribution that is not listed in the Version drop-down list of this component, or it is not a MapR distribution at all, select Custom.
Zookeeper quorum
Type in the name or the URL of the ZooKeeper service you use to coordinate the transactions between Talend Studio and your database.
Note that when you configure ZooKeeper, you might need to explicitly set the zookeeper.znode.parent property to define the path to the root znode that contains all the znodes created and used by your database; in that case, select the Set Zookeeper znode parent check box to define this property.
Zookeeper client port
Type in the number of the client listening port of the ZooKeeper service you are using.
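For illustration, here is a minimal Java sketch of the HBase client properties that the Zookeeper quorum, Zookeeper client port, and Set Zookeeper znode parent fields map to; the host name, port, and znode path below are hypothetical placeholders and must match your own cluster:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ZookeeperSettingsSketch {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Zookeeper quorum: the name or URL of the ZooKeeper service.
        conf.set("hbase.zookeeper.quorum", "maprdemo.example.com"); // hypothetical host
        // Zookeeper client port: the client listening port.
        conf.set("hbase.zookeeper.property.clientPort", "5181");    // 5181 is the MapR default
        // Set Zookeeper znode parent: only needed when the root znode
        // containing the znodes used by your database is not the default.
        conf.set("zookeeper.znode.parent", "/hbase");               // hypothetical path
    }
}
```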
Use kerberos authentication
If the database to be used is running with Kerberos security, select this check box, then enter the principal names in the HBase Master principal and HBase Region Server principal fields. You should be able to find this information in the hbase-site.xml file of the cluster to be used.
If you need to use a Kerberos keytab file to log in, select Use a keytab to authenticate. A keytab file contains pairs of Kerberos principals and encrypted keys. Enter the principal to be used in the Principal field and the access path to the keytab file itself in the Keytab field. This keytab file must be stored on the machine where your Job actually runs, for example, on a Talend JobServer.
Note that the user who executes a keytab-enabled Job is not necessarily the one the principal designates, but that user must have the right to read the keytab file being used. For example, if the username you are using to execute a Job is user1 and the principal to be used is guest, ensure that user1 has the right to read the keytab file to be used.
For further information about how Kerberos can be configured for your database in a MapR cluster, see Configuring Kerberos Authentication.
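Behind the scenes, keytab-based login in a Hadoop/HBase client typically reduces to the following; this is a minimal Java sketch, assuming a hypothetical principal guest@EXAMPLE.COM and a keytab path readable by the executing user (here, user1):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class KeytabLoginSketch {
    public static void main(String[] args) throws Exception {
        // Switch the Hadoop client to Kerberos authentication.
        Configuration conf = new Configuration();
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);

        // Log in with a principal/keytab pair. The executing OS user
        // (for example user1) only needs read access to the keytab file;
        // it does not have to match the principal (for example guest).
        UserGroupInformation.loginUserFromKeytab(
                "guest@EXAMPLE.COM",           // hypothetical principal
                "/home/user1/guest.keytab");   // hypothetical keytab path
    }
}
```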
HBase Properties
If you need to use a custom configuration for your database, complete this table with the property or properties to be customized. At runtime, the customized properties override the corresponding ones defined earlier for your database.
For example, if you need to set the value of the dfs.replication property to 1 for the database configuration, add one row to this table using the plus button and type in the name and the value of this property in that row.
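The override behavior can be pictured with a short Java sketch against a standard Hadoop Configuration object: a property set explicitly last wins over the value loaded earlier, which is what a row in this table does for dfs.replication:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class PropertyOverrideSketch {
    public static void main(String[] args) {
        // Loads the defaults plus any cluster configuration files on the classpath.
        Configuration conf = HBaseConfiguration.create();
        System.out.println("before: " + conf.get("dfs.replication")); // cluster value, or null

        // Adding a dfs.replication = 1 row to the HBase Properties table
        // behaves like this explicit set(): it overrides the earlier value.
        conf.set("dfs.replication", "1");
        System.out.println("after: " + conf.get("dfs.replication"));  // prints 1
    }
}
```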
Usage
Usage rule
This component is used only with the other MapRDB components to provide the MapR-DB connection to Spark.
Prerequisites
Before starting, ensure that you have met the Loopback IP prerequisites expected by your database. The Hadoop distribution must be properly installed to guarantee interaction with Talend Studio.
For further information about how to install a Hadoop distribution, see the manuals corresponding to the Hadoop distribution you are using.
Spark Connection
In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access them.
This connection is effective on a per-Job basis.