
tMapRDBOutput MapReduce properties (deprecated)

These properties are used to configure tMapRDBOutput running in the MapReduce Job framework.

The MapReduce tMapRDBOutput component belongs to the MapReduce and the Databases families.

The component in this framework is available in all Talend products with Big Data and Talend Data Fabric.

Deprecated: The MapReduce framework is deprecated from Talend 7.3 onwards. Use Talend Jobs for Apache Spark to accomplish your integration tasks.

Basic settings

Property type

Either Built-In or Repository.

Built-In: No property data stored centrally.

Repository: Select the repository file where the properties are stored.

The properties are stored centrally under the Hadoop Cluster node of the Repository tree.

Distribution and Version

Select the MapR distribution to be used. Only MapR V5.2 onwards is supported by the MapRDB components.

If the distribution you need to use with your MapRDB database is not officially supported by this MapRDB component, that is to say, the distribution is MapR but is not listed in the Version drop-down list of this component, or the distribution is not MapR at all, select Custom.

  1. Select Import from existing version to import an officially supported distribution as base and then add other required jar files which the base distribution does not provide.

  2. Select Import from zip to import the configuration zip for the custom distribution to be used. This zip file should contain the libraries of the different Hadoop elements and the index file of these libraries.

    Note that custom versions are not officially supported by Talend. Talend and its community provide you with the opportunity to connect to custom versions from the Studio but cannot guarantee that the configuration of whichever version you choose will be easy, due to the wide range of different Hadoop distributions and versions that are available. As such, you should only attempt to set up such a connection if you have sufficient Hadoop experience to handle any issues on your own.

    Note:

    In this dialog box, the active check box must be kept selected so as to import the jar files pertinent to the connection to be created between the custom distribution and this component.

    For a step-by-step example about how to connect to a custom distribution and share this connection, see Hortonworks.

Zookeeper quorum

Type in the name or the URL of the Zookeeper service you use to coordinate the transaction between your Studio and your database. Note that when you configure Zookeeper, you might need to explicitly set the zookeeper.znode.parent property to define the path to the root znode that contains all the znodes created and used by your database; in that case, select the Set Zookeeper znode parent check box to define this property.

Zookeeper client port

Type in the number of the client listening port of the Zookeeper service you are using.
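
For reference only, the sketch below illustrates with the standard HBase client API what these Zookeeper settings typically correspond to in a client configuration; the quorum host, client port, and znode path are placeholder values, not defaults taken from this component.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class ZookeeperSettingsSketch {
        public static Configuration build() {
            Configuration conf = HBaseConfiguration.create();
            // Zookeeper quorum: name or URL of the Zookeeper service (placeholder host).
            conf.set("hbase.zookeeper.quorum", "maprdemo");
            // Zookeeper client port: listening port of the Zookeeper service (placeholder port).
            conf.set("hbase.zookeeper.property.clientPort", "5181");
            // Set Zookeeper znode parent: root znode that contains the znodes used by the database.
            conf.set("zookeeper.znode.parent", "/hbase");
            return conf;
        }
    }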

Use Kerberos authentication

If the database to be used is running with Kerberos security, select this check box, then enter the principal names in the displayed fields. You should be able to find this information in the hbase-site.xml file of the cluster to be used.
  • If this cluster is a MapR cluster of version 5.0.0 or later, you can set the MapR ticket authentication configuration in addition or as an alternative by following the explanation in Connecting to a security-enabled MapR.

    Keep in mind that this configuration generates a new MapR security ticket for the username defined in the Job in each execution. If you need to reuse an existing ticket issued for the same username, leave both the Force MapR ticket authentication check box and the Use Kerberos authentication check box clear, and then MapR should be able to automatically find that ticket on the fly.

If you need to use a Kerberos keytab file to log in, select Use a keytab to authenticate. A keytab file contains pairs of Kerberos principals and encrypted keys. You need to enter the principal to be used in the Principal field and the access path to the keytab file itself in the Keytab field. This keytab file must be stored on the machine where your Job actually runs, for example, on a Talend JobServer.

Note that the user that executes a keytab-enabled Job is not necessarily the one the principal designates but must have the right to read the keytab file being used. For example, if the username you are using to execute a Job is user1 and the principal to be used is guest, ensure that user1 has the right to read the keytab file to be used.
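
As a rough illustration only, and not a description of this component's internals, the sketch below shows a keytab login performed with the Hadoop UserGroupInformation API; the principal and keytab path are placeholders.

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.security.UserGroupInformation;

    public class KeytabLoginSketch {
        public static void login() throws IOException {
            Configuration conf = HBaseConfiguration.create();
            // Tell the Hadoop security layer that Kerberos is in use.
            conf.set("hadoop.security.authentication", "kerberos");
            UserGroupInformation.setConfiguration(conf);
            // Placeholder principal and keytab path; the keytab file must be readable
            // by the OS user that actually runs the Job (user1 in the example above).
            UserGroupInformation.loginUserFromKeytab("guest@EXAMPLE.COM", "/home/user1/guest.keytab");
        }
    }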

Schema and Edit schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the Repository Content window.

 

Built-In: You create and store the schema locally for this component only.

 

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs.

Table name

Type in the name of the table in which you need to write data. This table must already exist.

Table Namespace mappings

Enter the string to be used to construct the mapping between an Apache HBase table and a MapR table.

For the valid syntax you can use, see http://doc.mapr.com/display/MapR40x/Mapping+Table+Namespace+Between+Apache+HBase+Tables+and+MapR+Tables.
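
As an illustrative sketch only (the hbase.table.namespace.mappings property name and the sample path are assumptions based on the MapR documentation linked above, not taken from this page), such a mapping string could be set on an HBase client configuration like this:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class NamespaceMappingSketch {
        public static Configuration build() {
            Configuration conf = HBaseConfiguration.create();
            // Placeholder mapping: the Apache HBase table "mytable" is mapped to a
            // MapR table stored under /user/mapr/tables (property name assumed from MapR docs).
            conf.set("hbase.table.namespace.mappings", "mytable:/user/mapr/tables");
            return conf;
        }
    }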

Row key column

Select the column used as the row key column of the table.

Then, if need be, select the Store row key column to HBase column check box to make the row key column a column belonging to a specific column family.

Families

Complete this table to map the columns of the table to be used with the schema columns you have defined for the data flow to be processed.

The Column column of this table is automatically filled once you have defined the schema; in the Family name column, enter the column families you want to create or use to group the columns in the Column column. For further information about a column family, see Apache documentation at Column families.
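
To make the row key and column family mapping concrete, here is a minimal sketch written against the standard HBase client API; the table name, column family, qualifiers, and values are all placeholders and are not names produced by this component.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class FamiliesMappingSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 // "customers" stands in for the existing target table (Table name field).
                 Table table = conn.getTable(TableName.valueOf("customers"))) {
                // The schema column chosen as Row key column becomes the row key of the write.
                Put put = new Put(Bytes.toBytes("row-key-value"));
                // Each remaining schema column is written under the column family assigned
                // to it in the Families table ("info", "name" and "city" are placeholders).
                put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("Alice"));
                put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("city"), Bytes.toBytes("Paris"));
                table.put(put);
            }
        }
    }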

Advanced settings

Properties

If you need to use custom configuration for your database, complete this table with the property or properties to be customized. Then at runtime, the customized property or properties will override the corresponding ones used by the Studio.

For example, if you need to set the value of the dfs.replication property to 1 for the database configuration, add one row to this table using the plus button and type in the name and the value of this property in that row.
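
In plain code, such an override amounts to setting one extra key/value pair on the configuration, as in this minimal sketch; the dfs.replication value is the example from the text above.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CustomPropertiesSketch {
        public static Configuration withOverrides() {
            Configuration conf = HBaseConfiguration.create();
            // Each row of the Properties table becomes one key/value override;
            // here the dfs.replication example from the text above is applied.
            conf.set("dfs.replication", "1");
            return conf;
        }
    }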

Global Variables

Global Variables

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.

For further information about variables, see Talend Studio User Guide.
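
For example, assuming the component instance is labeled tMapRDBOutput_1 in your Job (a hypothetical label), the ERROR_MESSAGE variable could be read from another component's Java code as follows:

    // Hypothetical component label; the actual key depends on the component name in your Job.
    String lastError = (String) globalMap.get("tMapRDBOutput_1_ERROR_MESSAGE");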

Usage

Usage rule

In a Talend Map/Reduce Job, it is used as an end component and requires a transformation component as input link. The other components used along with it must be Map/Reduce components, too. They generate native Map/Reduce code that can be executed directly in Hadoop.

The Hadoop configuration you use for the whole Job and the Hadoop distribution you use for this component must be the same. This component requires its Hadoop distribution parameter to be defined separately so that it can load its database driver only when the component is used.

Hadoop Connection

You need to use the Hadoop Configuration tab in the Run view to define the connection to a given Hadoop distribution for the whole Job.

This connection is effective on a per-Job basis.

Prerequisites

Before starting, ensure that you have met the Loopback IP prerequisites expected by your database.

The Hadoop distribution must be properly installed, so as to guarantee the interaction with Talend Studio. The following list presents MapR-related information as an example.

  • Ensure that you have installed the MapR client on the machine where the Studio is installed, and added the MapR client library to the PATH variable of that machine. According to MapR's documentation, the library or libraries of a MapR client corresponding to each OS version can be found under MAPR_INSTALL\hadoop\hadoop-VERSION\lib\native. For example, the library for Windows is \lib\native\MapRClient.dll in the MapR client jar file. For further information, see the following link from MapR: http://www.mapr.com/blog/basic-notes-on-configuring-eclipse-as-a-hadoop-development-environment-for-mapr.

    Without adding the specified library or libraries, you may encounter the following error: no MapRClient in java.library.path.

  • Set the -Djava.library.path argument, for example, in the Job Run VM arguments area of the Run/Debug view in the Preferences dialog box of the Window menu, as shown below. This argument provides the Studio with the path to the native library of that MapR client. This allows subscription-based users to make full use of the Data viewer to view, locally in the Studio, the data stored in MapR.
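
    For instance, the VM argument could look like the following line; the installation path is a placeholder and must be replaced with the actual native library directory of your MapR client:

    -Djava.library.path="C:\opt\mapr\hadoop\hadoop-2.7.0\lib\native"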

For further information about how to install a Hadoop distribution, see the manuals corresponding to the Hadoop distribution you are using.
