tHiveOutput

Connects to a given Hive database and writes the data it receives into a given Hive table or a directory in HDFS.
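In Spark terms, the component roughly corresponds to writing an incoming DataFrame either into a Hive table or into a directory in HDFS. The sketch below is only an illustration of that behavior, not Talend-generated code; the session configuration, table name, and HDFS path are placeholders.

```scala
// Illustrative sketch of what a tHiveOutput step roughly maps to in Spark:
// writing incoming data into a Hive table or into an HDFS directory.
import org.apache.spark.sql.{SaveMode, SparkSession}

object HiveOutputSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("hive-output-sketch")
      .enableHiveSupport()            // requires a reachable Hive metastore
      .getOrCreate()

    // Incoming data; in a Talend Job this would come from the preceding component.
    val df = spark.sql("SELECT * FROM staging.customers")

    // Option 1: write the data into a Hive table.
    df.write.mode(SaveMode.Append).saveAsTable("analytics.customers")

    // Option 2: write the data into a directory in HDFS.
    df.write.mode(SaveMode.Overwrite).parquet("hdfs:///data/analytics/customers")

    spark.stop()
  }
}
```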

When ACID is enabled on the Hive side, a Spark Job cannot delete or update a Hive table, and unless the data has been compacted, it cannot correctly read aggregated data from such a table either. This is a known limitation described in the Spark bug tracking system: https://issues.apache.org/jira/browse/SPARK-15348.
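Because of this limitation, compaction has to be triggered on the Hive side rather than from the Spark Job. The following is a minimal sketch, assuming a reachable HiveServer2 endpoint and the Hive JDBC driver on the classpath; the host, port, database, and table names are placeholders.

```scala
// Sketch: trigger a major compaction of an ACID table from the Hive side
// so that a Spark Job can subsequently read consistent data from it.
import java.sql.DriverManager

object CompactAcidTable {
  def main(args: Array[String]): Unit = {
    // Placeholder HiveServer2 URL and database.
    val conn = DriverManager.getConnection("jdbc:hive2://hive-host:10000/analytics")
    try {
      val stmt = conn.createStatement()
      // Major compaction rewrites the base and delta files into a new base file.
      stmt.execute("ALTER TABLE customers COMPACT 'major'")
      stmt.close()
    } finally {
      conn.close()
    }
  }
}
```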

Depending on the Talend product you are using, this component can be used in one, some or all of the following Job frameworks:
