tPigStoreResult Standard properties
These properties are used to configure tPigStoreResult running in the Standard Job framework.
The Standard tPigStoreResult component belongs to the Big Data and the Processing families.
The component in this framework is available in all Talend products with Big Data and in Talend Data Fabric.
Basic settings
|
Property type |
Either Repository or Built-in. The Repository option allows you to reuse the connection properties centrally stored under the Hadoop cluster node of the Repository tree. Once you select it, the [...] button appears; click it to display the list of the stored properties and, from that list, select the properties you need to use. Once done, the appropriate parameters are set automatically.
Otherwise, if you select Built-in, you need to set each of the parameters manually. |
|
Schema and Edit Schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields. Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

View schema: select this option to view the schema only.

Change to built-in property: select this option to change the schema to Built-in for local changes.

Update repository connection: select this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion.

Built-In: You create and store the schema locally for this component only.

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. |
|
Use S3 endpoint |
Select this check box to write data into a given Amazon S3 bucket folder. Once this Use S3 endpoint check box is selected, you need to enter the S3 connection parameters in the fields that appear. |
|
|
Result folder URI |
Select the path to the result file in which data is stored. |
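For reference, the result folder URI set here becomes the output location of the Pig STORE statement the component generates. A minimal sketch in Pig Latin, assuming a hypothetical relation named result, an HDFS path /user/talend/pig_out, and an S3 bucket named my-bucket (for when Use S3 endpoint is selected):

  -- hypothetical HDFS target
  STORE result INTO '/user/talend/pig_out' USING PigStorage();
  -- hypothetical S3 target
  STORE result INTO 's3n://my-bucket/pig_out' USING PigStorage();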
|
Remove result directory if exists |
Select this check box to remove an existing result directory.
Note:
This check box is disabled when you select HCatStorer from the Store function list. |
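In plain Pig Latin, a comparable effect is achieved with the rmf command, which removes a path without failing when it does not exist. A minimal sketch, reusing the hypothetical /user/talend/pig_out path:

  -- remove the result directory if it exists, then store
  rmf /user/talend/pig_out;
  STORE result INTO '/user/talend/pig_out' USING PigStorage();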
|
Store function |
Select the store function to be used for storing data. The functions referenced elsewhere on this page include PigStorage, HCatStorer, HBaseStorage, and SequenceFileStorage.

Note that when the file format to be used is PARQUET, you might be prompted to find the specific PARQUET jar file and install it into the Studio.
|
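As a point of reference, the store function chosen here corresponds to the USING clause of the generated Pig STORE statement; selecting a different function swaps that clause. A minimal sketch with the default PigStorage function and hypothetical names:

  -- PigStorage writes tab-delimited text files by default
  STORE result INTO '/user/talend/pig_out' USING PigStorage();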
|
HCatalog Configuration |
Fill in the following fields to configure HCatalog managed tables on HDFS (Hadoop Distributed File System):

Distribution and Version: Select the Hadoop distribution to which you have defined the connection in the tPigLoad component used in the same Pig process as the active tPigStoreResult. If that tPigLoad component connects to a custom Hadoop distribution, you must select Custom for this tPigStoreResult component too. The Custom jar table then appears, in which you need to add only the jar files required by the selected Store function.

HCat metastore: Enter the location of HCatalog's metastore, which is actually Hive's metastore.

Database: The database in which tables are placed.

Table: The table in which data is stored.

Partition filter: Fill this field with the partition keys to list partitions by filter.

Note:
The HCatalog Configuration area is enabled only when you select HCatStorer from the Store function list. For further information about the usage of HCatalog, see https://cwiki.apache.org/confluence/display/Hive/HCatalog. For further information about the usage of Partition filter, see https://cwiki.apache.org/confluence/display/HCATALOG/Design+Document+-+Java+APIs+for+HCatalog+DDL+Commands. |
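For orientation, HCatStorer is Pig's interface for writing to HCatalog-managed tables. A minimal sketch, assuming a hypothetical database mydb, table mytable, and a date partition; the fully qualified class name shown is the one shipped with recent Hive releases and may differ in your environment:

  -- store into an HCatalog-managed table, targeting the date=20150101 partition
  STORE result INTO 'mydb.mytable' USING org.apache.hive.hcatalog.pig.HCatStorer('date=20150101');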
| HBase configuration |
This area is available to the HBaseStorage function. The parameters to be set are:

Distribution and Version: Select the Hadoop distribution to which you have defined the connection in the tPigLoad component used in the same Pig process as the active tPigStoreResult. If that tPigLoad component connects to a custom Hadoop distribution, you must select Custom for this tPigStoreResult component too. The Custom jar table then appears, in which you need to add only the jar files required by the selected Store function.

Zookeeper quorum: Type in the name or the URL of the ZooKeeper service you use to coordinate the transaction between your Studio and your database. Note that when you configure ZooKeeper, you might need to explicitly set the zookeeper.znode.parent property to define the path to the root znode that contains all the znodes created and used by your database; then select the Set Zookeeper znode parent check box to define this property.

Zookeeper client port: Type in the number of the client listening port of the ZooKeeper service you are using.

Table name: Enter the name of the HBase table in which you need to store data. The table must exist in the target HBase.

Row key column: Select the column used as the row key column of the HBase table.

Store row key column to HBase column: Select this check box to make the row key column an HBase column belonging to a specific column family.

Mapping: Complete this table to map the columns of the table to be used with the schema columns you have defined for the data flow to be processed. The Column column of this table is automatically filled once you have defined the schema; in the Family name column, enter the column families you want to create or use to group the columns in the Column column. For further information about column families, see the Apache documentation at Column families. |
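As background, these settings correspond to Pig's built-in HBaseStorage function, which treats the first field of each tuple as the row key and maps the remaining fields to the listed columns. A minimal sketch, assuming a hypothetical table mytable with a column family cf:

  -- store (rowkey, name, age) tuples into HBase columns cf:name and cf:age
  STORE result INTO 'hbase://mytable' USING org.apache.pig.backend.hadoop.hbase.HBaseStorage('cf:name cf:age');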
| Field separator |
Enter a character, a string, or a regular expression to separate fields in the transferred data.

Note:
This field is enabled only when you select PigStorage from the Store function list. |
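For illustration, the field separator is passed to PigStorage as its argument in the generated STORE statement. A minimal sketch, assuming a hypothetical semicolon-separated output:

  -- write fields separated by ';' instead of the default tab
  STORE result INTO '/user/talend/pig_out' USING PigStorage(';');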
|
Sequence Storage configuration |
This area is available only to the SequenceFileStorage function. Since a SequenceFile record consists of binary key/value pairs, the parameters to be set are:

Key column: Select the Key column of a key/value record.

Value column: Select the Value column of a key/value record. |
Advanced settings
|
Register jar |
Click the [+] button to add rows to the table and, from these rows, browse to the jar files to be added. For example, to register a jar file called piggybank.jar, click the [+] button once to add one row, click that row to display the [...] browse button, then click this button to browse to the piggybank.jar file using the Select Module wizard. |
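For context, a registered jar surfaces in the generated Pig script as a REGISTER statement, which makes the functions packaged in the jar available to the script:

  -- make the functions in piggybank.jar available to this script
  REGISTER piggybank.jar;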
| HBaseStorage configuration |
Add and set more HBaseStorage storer options in this table. The options are:

loadKey: enter true to store the row key as the first column of the result schema; otherwise, enter false.

gt: the minimum key value (excluded).

lt: the maximum key value (excluded).

gte: the minimum key value (included).

lte: the maximum key value (included).

limit: the maximum number of rows to retrieve per region.

caching: the number of rows to cache.

caster: the converter to use for writing values to HBase, for example, Utf8StorageConverter. |
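To show how these options reach Pig, HBaseStorage accepts them as a space-separated string in its second argument. A minimal sketch, reusing the hypothetical mytable and cf names from above:

  -- use the UTF-8 converter when writing values to HBase
  STORE result INTO 'hbase://mytable' USING org.apache.pig.backend.hadoop.hbase.HBaseStorage('cf:name cf:age', '-caster Utf8StorageConverter');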
|
Define the jars to register |
This check box appears when you are using HCatStorer. By default, you can leave it clear, as the Studio registers the required jar files automatically. If any jar file is missing, you can select this check box to display the Register jar for HCatalog table and set the correct path to that missing jar. |
|
tStatCatcher Statistics |
Select this check box to gather the Job processing metadata at the Job level as well as at each component level. |
Global Variables
|
Global Variables |
ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.

A Flow variable functions during the execution of a component, while an After variable functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.

For further information about variables, see Talend Studio User Guide. |
Usage
|
Usage rule |
This component is always used to end a Pig process and needs tPigLoad at the beginning of that chain to provide data. This component automatically reuses the connection created by the tPigLoad component in that Pig process.

Note that if you use Hortonworks Data Platform V2.0.0, the type of the operating system for running the distribution and a Talend Job must be the same, such as Windows or Linux. Otherwise, you have to use Talend JobServer to execute the Job in the same type of operating system in which the Hortonworks Data Platform V2.0.0 distribution you are using is run. |
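To picture the tPigLoad-to-tPigStoreResult chain, here is the overall shape of the Pig script such a process boils down to. A minimal sketch with hypothetical paths and schema:

  -- tPigLoad: read the input data
  data = LOAD '/user/talend/pig_in' USING PigStorage('\t') AS (name:chararray, age:int);
  -- intermediate Pig components would transform 'data' here
  -- tPigStoreResult: write the result
  STORE data INTO '/user/talend/pig_out' USING PigStorage(';');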
|
Prerequisites |
The Hadoop distribution must be properly installed to guarantee interaction with Talend Studio.
For further information about how to install a Hadoop distribution, see the manuals corresponding to the Hadoop distribution you are using. |
|
Limitation |
Knowledge of Pig scripts is required. If you select HCatStorer as the store function, knowledge of HCatalog DDL (HCatalog Data Definition Language, a subset of Hive Data Definition Language) is required. For further information about HCatalog DDL, see https://cwiki.apache.org/confluence/display/Hive/HCatalog. |