tHiveInput properties for Apache Spark Batch
These properties are used to configure tHiveInput running in the Spark Batch Job framework.
The Spark Batch tHiveInput component belongs to the Databases family.
The component in this framework is available in all subscription-based Talend products with Big Data and Talend Data Fabric.
Basic settings
Property Type |
Select the way the connection details will be set: Built-In (no property data is stored centrally; you enter the connection details in this component) or Repository (the connection details are stored centrally in the Repository and reused). |
Hive Storage Configuration |
Select the tHiveConfiguration component from which you want Spark to use the configuration details to connect to Hive. If you are running your Spark Job on Spark Universal in Yarn cluster mode, this property is not available: the Hive storage configuration comes directly from the XML files inside the Hadoop configuration JAR file. For more information, see Defining Yarn cluster connection parameters with Spark Universal. |
HDFS Storage configuration |
Select the tHDFSConfiguration component from which you want Spark to use the configuration details to connect to a given HDFS system and transfer the dependent jar files to this HDFS system. This field is relevant only when you are using an on-premises distribution. If you are running your Spark Job on Spark Universal in Yarn cluster mode, this property is not available: the HDFS storage configuration comes directly from the XML files inside the Hadoop configuration JAR file. For more information, see Defining Yarn cluster connection parameters with Spark Universal. |
Schema and Edit Schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields. Always use lowercase when naming a field, because the processing behind the scenes could force the field names to lowercase. Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available: View schema, Change to built-in property, and Update repository connection.

Built-In: You create and store the schema locally for this component only.

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. |
Input source |
Select the type of the input data you want tHiveInput to read: a Hive table, or a Hive query that you enter in the Hive Query field. For further information, see the Hive query language manual.

Note: Compressed data in the form of Gzip or Bzip2 can be processed through the query statements. For details, see Compressed Data Storage. Hadoop provides different compression formats that help reduce the space needed for storing files and speed up data transfer. When reading a compressed file, Talend Studio needs to extract it before being able to feed it to the input flow. |
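As an illustration of the note above, Hive reads Gzip- or Bzip2-compressed files transparently when a query runs over a text-format table. The sketch below uses hypothetical table and column names; a query such as the final SELECT is the kind of statement you could enter in the Hive Query field:

```sql
-- Hypothetical example: a Hive table whose underlying files are
-- Gzip-compressed, tab-separated text. Hive decompresses Gzip/Bzip2
-- files on the fly when the query executes.
CREATE TABLE web_logs (
  log_date STRING,
  user_id  STRING,
  url      STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE;

-- Files such as logs.tsv.gz placed in the table's location are read
-- as-is; no explicit decompression step is needed in the query.
SELECT log_date, COUNT(*) AS hits
FROM web_logs
GROUP BY log_date;
```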
Advanced settings
Register Hive UDF jars |
Add the Hive user-defined function (UDF) jars you want tHiveInput to use. Note that you must define a function alias for each UDF to be used in the Temporary UDF functions table. After you add a row to this table, click it to display the [...] button, then click that button to open the jar import wizard and import the UDF jar files you want to use. A registered function is typically used in a Hive query that you edit in the Hive Query field in the Basic settings view. Note that the Hive Query field is displayed only when you select Hive query from the Input source list. |
Temporary UDF functions |
Complete this table to give each imported UDF class a temporary function name to be used in the Hive query in the current tHiveInput component. |
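For context, registering a UDF jar and giving its class a temporary function name corresponds to standard HiveQL statements. The jar path, class name, alias, and table below are hypothetical; this is a sketch of the equivalent manual steps, not what the component emits verbatim:

```sql
-- Hypothetical example: make the UDF jar available to the session,
-- then bind the UDF class to a temporary function name (the alias).
ADD JAR /tmp/my_udfs.jar;
CREATE TEMPORARY FUNCTION clean_url AS 'com.example.hive.udf.CleanUrl';

-- The alias can then be called in the Hive query that tHiveInput runs
-- (some_table is a placeholder for an existing Hive table).
SELECT clean_url(url) FROM some_table;
```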
Usage
Usage rule |
This component is used as a start component and requires an output link. This component should use a tHiveConfiguration component present in the same Job to connect to Hive. This component, along with the Spark Batch component Palette it belongs to, appears only when you are creating a Spark Batch Job. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs. |
Spark Connection |
In the Spark Configuration tab of the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access them. This connection is effective on a per-Job basis. |