tPigLoad Standard properties
These properties are used to configure tPigLoad running in the Standard Job framework.
The Standard tPigLoad component belongs to the Big Data and the Processing families.
The component in this framework is available in all Talend products with Big Data and in Talend Data Fabric.
Basic settings
|
Property type |
Either Built-In or Repository. |
|
|
Built-In: No property data stored centrally. |
|
|
Repository: Select the repository file where the properties are stored. The properties are stored centrally under the Hadoop Cluster node of the Repository tree. The fields that come after are automatically filled in with the fetched data. For further information about the Hadoop Cluster node, see the Getting Started Guide. |
|
Schema and Edit Schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields. Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available: View schema, Change to built-in property, and Update repository connection.
|
|
|
Built-In: You create and store the schema locally for this component only. |
|
|
Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. |
|
Local |
Click this radio button to run Pig scripts in Local mode. In this mode, all files are installed and run from your local host and file system. |
|
Execution engine |
Select the framework you need to use to run your Pig Job. The Tez mode is available only when you are using one of the following distributions:
Before using Tez, ensure that the Hadoop cluster you are using supports Tez. You also need to configure access to the relevant Tez libraries via the Advanced settings view of this component. For further information about Pig on Tez, see Apache's related documentation at https://cwiki.apache.org/confluence/display/PIG/Pig+on+Tez. |
|
Distribution and Version |
Select the cluster you are using from the drop-down list. The options in the list vary depending on the component you are using. Among these options, the following ones require specific configuration:
The other parameters related to the connection to your cluster could be:
|
|
WebHCat configuration |
Enter the address and the authentication information of the Microsoft HD Insight cluster to be used. For example, the address could be your_hdinsight_cluster_name.azurehdinsight.net, and the authentication information could be your Azure account name, such as ychen. The Studio uses this service to submit the Job to the HD Insight cluster. In the Job result folder field, enter the location in the Azure Storage to be used where you want to store the execution result of a Job. |
|
HDInsight configuration |
|
|
Windows Azure Storage configuration |
Enter the address and the authentication information of the Azure Storage account to be used. In this configuration, you do not define where to read or write your business data but define where to deploy your Job only. Therefore always use the Azure Storage system for this configuration. In the Container field, enter the name of the container to be used. You can find the available containers in the Blob blade of the Azure Storage account to be used. In the Deployment Blob field, enter the location in which you want to store the current Job and its dependent libraries in this Azure Storage account. |
|
Inspect the classpath for configurations |
Select this check box to allow the component to check the configuration files in the directory you have set with the $HADOOP_CONF_DIR variable and to read parameters directly from the files in this directory. This feature allows you to easily change the Hadoop configuration for the component in order to switch between different environments, for example, from a test environment to a production environment. In this situation, the fields or options used to configure the Hadoop connection and/or Kerberos security are hidden.
If you want to use certain parameters, such as the Kerberos parameters, that are not included in these Hadoop configuration files, you need to create a file called talend-site.xml and put this file into the same directory defined with $HADOOP_CONF_DIR. This talend-site.xml file should read as shown in the sketch after this entry.
The parameters read from these configuration files override the default ones used by the Studio. When a parameter does not exist in these configuration files, the default one is used. |
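The following is a minimal sketch of such a talend-site.xml file, assuming the standard Hadoop configuration XML syntax; the property name and value shown are placeholders only and must be replaced with the parameters that are missing from your configuration files:

    <?xml version="1.0" encoding="UTF-8"?>
    <configuration>
        <!-- Placeholder property: replace with the parameter (for example, a Kerberos setting)
             that is missing from the files found in $HADOOP_CONF_DIR -->
        <property>
            <name>dfs.namenode.kerberos.principal</name>
            <value>hdfs/_HOST@EXAMPLE.COM</value>
        </property>
    </configuration>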
|
Load function |
Select a load function for data to be loaded:
Note that when the file format to be used is PARQUET, you might be prompted to find the specific PARQUET jar file and install it into the Studio.
|
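For reference, when PigStorage is selected, the load statement the component builds is comparable to the following hand-written Pig sketch; the path and relation name are examples only:

    -- PigStorage reads delimited text; with no argument it uses the default tab separator
    raw_data = LOAD '/user/ychen/input/data.txt' USING PigStorage();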
|
Input file URI |
Fill in this field with the full local path to the input file.
Note: This field is not available when you select HCatLoader from the Load function list or when you are using an S3 endpoint. |
|
Use S3 endpoint |
Select this check box to read data from a given Amazon S3 bucket folder. Once this Use S3 endpoint check box is
selected, you need to enter the following parameters in the fields that appear:
|
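As an illustration only, a hand-written Pig script typically points at an S3 location through an S3 URI in the LOAD statement; the bucket name, folder, and URI scheme below are assumptions and depend on your distribution and its S3 connector:

    -- Hypothetical bucket and folder; the scheme may be s3n:// or s3a:// depending on your Hadoop version
    s3_data = LOAD 's3n://my-bucket/input/part-*' USING PigStorage(';');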
|
HCatalog Configuration |
Fill in the following fields to configure HCatalog managed tables on HDFS (Hadoop Distributed File System):
Distribution and Version: Select the cluster you are using from the drop-down list. The options in the list vary depending on the component you are using. Among these options, the following ones require specific configuration:
HCat metastore: Enter the location of the HCatalog metastore, which is actually the Hive metastore, a system catalog. For further information about Hive and HCatalog, see http://hive.apache.org/.
Database: The database in which tables are placed.
Table: The table in which data is stored.
Partition filter: Fill this field with the partition keys to list partitions by filter.
Note: The HCatalog Configuration area is enabled only when you select HCatLoader from the Load function list. For further information about the usage of HCatalog, see https://cwiki.apache.org/confluence/display/Hive/HCatalog. For further information about the usage of Partition filter, see https://cwiki.apache.org/confluence/display/HCATALOG/Design+Document+-+Java+APIs+for+HCatalog+DDL+Commands. |
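For reference, the load performed by HCatLoader is comparable to the following hand-written Pig sketch; the database, table, and partition key names are examples, and the package of the HCatLoader class depends on your Hive version:

    -- Reads the table 'sales' of the database 'mydb' through the HCatalog metastore
    sales = LOAD 'mydb.sales' USING org.apache.hive.hcatalog.pig.HCatLoader();
    -- A filter on a partition key ('ds' here) acts as a partition filter
    recent = FILTER sales BY ds == '20180101';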
|
Field separator |
Enter a character, a string, or a regular expression to separate fields in the transferred data.
Note: This field is enabled only when you select PigStorage from the Load function list. |
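For example, a semicolon entered in this field corresponds to the separator argument of PigStorage; this is a sketch and the path is an example only:

    -- ';' is the value entered in the Field separator field
    raw = LOAD '/user/ychen/input/data.csv' USING PigStorage(';');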
|
Compression |
Select the Force to compress the output data check box to compress the data when it is output by tPigStoreResult at the end of a Pig process.
Hadoop provides different compression formats that help reduce the space needed for storing files and speed up data transfer. When you write and compress data using the Pig program, by default you have to add a compression format as a suffix to the path pointing to the folder in which you want to write data, for example, /user/ychen/out.bz2. However, if you select this check box, the output data is compressed even if you do not add any compression format to that path, such as /user/ychen/out.
Note: The output path is set in the Basic settings view of tPigStoreResult. |
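In a hand-written Pig script, this behavior corresponds to the output path used in the STORE statement of the process; the paths, the relation, and the store function below are examples only:

    result = LOAD '/user/ychen/in' USING PigStorage(';');
    -- The .bz2 suffix makes Pig compress the output with the bzip2 codec
    STORE result INTO '/user/ychen/out.bz2' USING PigStorage(';');
    -- With the Force to compress the output data check box selected, a plain path is compressed as well
    STORE result INTO '/user/ychen/out' USING PigStorage(';');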
|
HBase configuration |
This area is available to the HBaseStorage function. The parameters to be set are:
Zookeeper quorum: Type in the name or the URL of the Zookeeper service you use to coordinate the transaction between your Studio and your database. Note that when you configure Zookeeper, you might need to explicitly set the zookeeper.znode.parent property to define the path to the root znode that contains all the znodes created and used by your database; then select the Set Zookeeper znode parent check box to define this property.
Zookeeper client port: Type in the number of the client listening port of the Zookeeper service you are using.
Table name: Enter the name of the HBase table you need to load data from.
Load key: Select this check box to load the row key as the first column of the result schema. In this situation, you must have created this column in the schema.
Mapping: Complete this table to map the columns of the table to be used with the schema columns you have defined for the data flow to be processed. |
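For reference, this configuration is comparable to the following hand-written HBaseStorage load; the table name, column family, and column names are examples only:

    -- '-loadKey true' corresponds to the Load key check box; 'cf:name cf:age' reflects the Mapping table
    customers = LOAD 'hbase://customer_table'
        USING org.apache.pig.backend.hadoop.hbase.HBaseStorage('cf:name cf:age', '-loadKey true')
        AS (rowkey, name, age);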
|
Sequence Loader configuration |
This area is available only to the SequenceFileLoader function. Since a SequenceFile record consists of binary key/value pairs, the parameters to be set are:
Key column: Select the Key column of a key/value record.
Value column: Select the Value column of a key/value record. |
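For reference, a hand-written equivalent typically relies on the SequenceFileLoader shipped with Piggybank; the jar registration, path, and field names below are examples only:

    REGISTER piggybank.jar;
    -- Each record is read as a key/value pair, matching the Key column and Value column settings
    seq = LOAD '/user/ychen/input/data.seq'
        USING org.apache.pig.piggybank.storage.SequenceFileLoader()
        AS (key, value);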
|
Die on subJob error |
This check box is cleared by default, meaning that on a subJob error the rows in error are skipped and the process is completed for the error-free rows. |
Advanced settings
|
Tez lib |
Select how the Tez libraries are accessed:
|
Hadoop Properties |
Talend Studio
uses a default configuration for its engine to perform
operations in a Hadoop distribution. If you need to use a custom configuration in a specific
situation, complete this table with the property or properties to be customized. Then at
runtime, the customized property or properties will override those default ones.
For further information about the properties required by Hadoop and its related systems such
as HDFS and Hive, see the documentation of the Hadoop distribution you
are using or see Apache's Hadoop documentation on http://hadoop.apache.org/docs and then select the version of the documentation you want. For demonstration purposes, the links to some properties are listed below:
|
|
Register jar |
Click the [+] button to add rows to the table and from these rows, browse to the jar files to be added. For example, in order to register a jar file called piggybank.jar, click the [+] button once to add one row, then click this row to display the [...] browse button, and click this button to browse to the piggybank.jar file following the Select Module wizard. |
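Registering a jar through this table plays the same role as a REGISTER statement in a hand-written Pig script, for example:

    -- Makes the loaders and UDFs packaged in piggybank.jar available to the generated script
    REGISTER piggybank.jar;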
|
Define functions |
Use this table to define UDFs (User-Defined Functions), especially those requiring an alias, such as Apache DataFu Pig functions, to be executed when loading data. Click the [+] button to add as many rows as you need and specify an alias and a UDF in the relevant fields for each row.
If your Job includes a tPigMap component, once you have defined UDFs for this component in tPigMap, this table is automatically filled. Likewise, once you have defined UDFs in this table, the Define functions table in the tPigMap component's Map Editor is automatically filled. For information on how to define UDFs when mapping Pig flows, see the section on mapping Big Data flows of the Talend Open Studio for Big Data Getting Started Guide. For more information on Apache DataFu Pig, see http://datafu.incubator.apache.org/. |
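An alias entered in this table corresponds to a Pig DEFINE statement; the Apache DataFu function below is only an example, and any UDF requiring an alias can be declared the same way:

    -- Assumes the DataFu jar has been registered (see the Register jar table above)
    -- 'Coalesce' becomes the alias that load or map expressions can call
    DEFINE Coalesce datafu.pig.util.Coalesce();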
|
Pig properties |
Talend Studio uses a default configuration for its Pig engine to perform operations. If you need to use a custom configuration in a specific situation, complete this table with the property or properties to be customized. Then at runtime, the customized property or properties will override those default ones. For example, the default_parallel key used in Pig could be set as 20. |
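For instance, the default_parallel value mentioned above is equivalent to the following statement in a Pig script:

    -- Sets the default number of reducers used by the Job
    SET default_parallel 20;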
|
HBaseStorage configuration |
Add and set more HBaseStorage loader options in this table. The options are:
gt: the minimum key value;
lt: the maximum key value;
gte: the minimum key value (included);
lte: the maximum key value (included);
limit: the maximum number of rows to retrieve per region;
caching: the number of rows to cache;
caster: the converter to use for reading values out of HBase, for example, HBaseBinaryConverter. |
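These options are passed to HBaseStorage as part of its option string; the following sketch combines a row-key range with a row limit, and the table, column, and key values are placeholders:

    -- '-gt' and '-lt' bound the row keys, '-limit' caps the number of rows retrieved per region
    filtered = LOAD 'hbase://customer_table'
        USING org.apache.pig.backend.hadoop.hbase.HBaseStorage(
            'cf:name', '-loadKey true -gt row_000 -lt row_999 -limit 100');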
|
Define the jars to register for HCatalog |
This check box appears when you are using HCatLoader, but you can usually leave it clear because the Studio registers the required jar files automatically. If any jar file is missing, you can select this check box to display the Register jar for HCatalog table and set the correct path to the missing jar. |
|
Path separator in server |
Leave the default value of Path separator in server as it is, unless you have changed the separator used by your Hadoop distribution's host machine for its PATH variable, that is, unless that separator is not a colon (:). In that situation, you must change this value to the separator used on that host. |
|
Mapred job map memory mb and Mapred job reduce memory mb |
You can tune the map and reduce computations by selecting the Set memory check box and setting proper memory allocations for the computations to be performed by the Hadoop system. In that situation, enter the values you need in the Mapred job map memory mb and Mapred job reduce memory mb fields, respectively. By default, both values are 1000, which is normally appropriate for running the computations. The memory parameters to be set are Map (in Mb), Reduce (in Mb) and ApplicationMaster (in Mb). These fields allow you to dynamically allocate memory to the map and the reduce computations and to the ApplicationMaster of YARN. |
|
tStatCatcher Statistics |
Select this check box to gather the Job processing metadata at the Job level as well as at each component level. |
Global Variables
|
Global Variables |
ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box. A Flow variable functions during the execution of a component, while an After variable functions after the execution of the component. To fill in a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it. For further information about variables, see the Talend Studio User Guide. |
Usage
|
Usage rule |
This component is always used to start a Pig process and needs tPigStoreResult at the end to output its data. In the Map/Reduce mode, you only need to configure the Hadoop connection for the first tPigLoad component of a Pig process (a subJob); any other tPigLoad component used in this process automatically reuses the connection created by that first tPigLoad component. |
|
Prerequisites |
The Hadoop distribution must be properly installed in order to guarantee interaction with Talend Studio. The following list presents MapR-related information as an example.
For further information about how to install a Hadoop distribution, see the manuals corresponding to the Hadoop distribution you are using. |
|
Limitation |
Knowledge of Pig scripts is required. If you select HCatLoader as the load function, knowledge of HCatalog DDL (HCatalog Data Definition Language, a subset of the Hive Data Definition Language) is required. For further information about HCatalog DDL, see https://cwiki.apache.org/confluence/display/Hive/HCatalog. |