tHiveInput
Extracts data from Hive and sends the data to the component that follows.
tHiveInput is the component dedicated to the Hive database (the Hive data warehouse system). It executes a given HiveQL query to extract data from Hive.
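The sketch below is not Talend-generated code; it only illustrates, in PySpark, the kind of HiveQL query such a component can run against a Hive-enabled Spark session. The database, table, and column names (sales_db.orders, order_id, amount) are placeholders.

    # Illustration only (not Talend-generated code): reading from Hive with a
    # HiveQL query, the way a Hive input step does. Names are placeholders.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("hive-input-sketch")
        .enableHiveSupport()   # gives spark.sql access to the Hive metastore
        .getOrCreate()
    )

    # The HiveQL query configured in the component plays this role:
    orders = spark.sql(
        "SELECT order_id, amount FROM sales_db.orders WHERE amount > 100"
    )
    orders.show(5)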
When ACID is enabled on the Hive side, a Spark Job cannot delete or update a table, and unless the data is compacted, the Job cannot correctly read aggregated data from a Hive table either. This is a known limitation tracked in the Spark bug tracking system: https://issues.apache.org/jira/browse/SPARK-15348.
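If you run into this limitation, the usual workaround is to compact the transactional table on the Hive side before the Spark Job reads it. The following is a minimal sketch only, assuming a Beeline client and a reachable HiveServer2; the JDBC URL and table name are placeholders, and the compaction policy should follow your Hive administrator's guidance.

    # Hedged sketch: trigger a major compaction on the Hive side so that the
    # delta files of the ACID table are merged before Spark reads it.
    # The JDBC URL and table name are placeholders.
    import subprocess

    compaction = "ALTER TABLE sales_db.orders COMPACT 'major';"

    subprocess.run(
        ["beeline", "-u", "jdbc:hive2://hive-host:10000/default", "-e", compaction],
        check=True,  # fail fast if HiveServer2 rejects the compaction request
    )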
For more technologies supported by Talend, see Talend components.
Depending on the Talend product you are using, this component can be used in one, some or all of the following Job frameworks:
-
Standard: see tHiveInput Standard properties.
The component in this framework is available in all Talend products with Big Data and in Talend Data Fabric.
-
Spark Batch: see tHiveInput properties for Apache Spark Batch.
The component in this framework is available in all Talend products with Big Data and in Talend Data Fabric.
-
Spark Streaming: see tHiveInput properties for Apache Spark Streaming.
The component in this framework is available in Talend Real Time Big Data Platform and in Talend Data Fabric.