tBigQueryOutput properties for Apache Spark Batch
These properties are used to configure tBigQueryOutput running in the Spark Batch Job framework.
The Spark Batch tBigQueryOutput component belongs to the Databases family.
The component in this framework is available in all subscription-based Talend products with Big Data and Talend Data Fabric.
Basic settings
Properties | Description |
---|---|
Dataset | Enter the name of the dataset to which the table to be created or updated belongs. When you use Google BigQuery with Dataproc, make sure that your BigQuery dataset and the Dataproc cluster to be run are in the same Google Cloud Platform region. |
Table | Enter the name of the table to be created or updated. |
Schema and Edit Schema | |
Table operations | Select the operation to be performed on the defined table: |
Data operation | Select the operation to be performed on the incoming data: |
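As an illustration only (not from the Talend documentation), the Dataset and Table properties above together identify the fully qualified `dataset.table` target that a Spark writer addresses. The sketch below shows one hypothetical way to assemble such an option map for the open-source spark-bigquery connector; the option keys `table` and `temporaryGcsBucket` follow that connector's conventions, and `my_dataset`, `my_table`, and the bucket name are made-up placeholders.

```python
def bigquery_write_options(dataset: str, table: str, gcs_bucket: str) -> dict:
    """Build the options a Spark writer would need to target dataset.table."""
    return {
        # Fully qualified target table, combining the Dataset and Table properties.
        "table": f"{dataset}.{table}",
        # Staging bucket used by the connector for the intermediate load job.
        "temporaryGcsBucket": gcs_bucket,
    }

opts = bigquery_write_options("my_dataset", "my_table", "my-staging-bucket")

# In a running Spark job, these options would be applied roughly like:
#   df.write.format("bigquery") \
#       .options(**opts) \
#       .mode("append") \
#       .save()
print(opts["table"])  # my_dataset.my_table
```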
Usage
Usage guidance | Description |
---|---|
Usage rule | This is an output component. It receives data from the component that precedes it and writes that data to BigQuery. Place a tBigQueryConfiguration component in the same Job, because tBigQueryOutput needs the BigQuery configuration information provided by tBigQueryConfiguration. |
Spark Connection | In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files: This connection is effective on a per-Job basis. |