Defining Dataproc connection parameters with Spark Universal
- Big Data
- Big Data Platform
- Cloud Big Data
- Cloud Big Data Platform
- Cloud Data Fabric
- Data Fabric
- Qlik Talend Cloud Enterprise Edition
- Qlik Talend Cloud Premium Edition
- Real-Time Big Data Platform
About this task
Talend Studio connects to a Dataproc cluster and runs the Job from that cluster. Talend Studio is compatible with Dataproc versions 2.0.x and 2.1.
Procedure
- Click the Run view beneath the design workspace, then click the Spark configuration view.
- Select Built-in from the Property type drop-down list.
  If you have already set up the connection parameters in the Repository, as explained in Centralizing a Hadoop connection, you can easily reuse them. To do this, select Repository from the Property type drop-down list, then click the […] button to open the Repository Content dialog box and select the Hadoop connection to be used.
  Tip: Setting up the connection in the Repository allows you to avoid configuring that connection each time you need it in the Spark configuration view of your Jobs. The fields are filled in automatically.
- Select Universal from the Distribution drop-down list, the Spark version from the Version drop-down list, and Dataproc from the Runtime mode/environment drop-down list.
- Enter the basic configuration information:
  - Use local timezone: Select this check box to let Spark use the local time zone provided by the system (a sketch of the equivalent Spark setting follows this list).
    Note:
    - If you clear this check box, Spark uses the UTC time zone.
    - Some components also have the Use local timezone for date check box. If you clear that check box in the component, it inherits the time zone from the Spark configuration.
  - Use dataset API in migrated components: Select this check box to let the components use the Dataset (DS) API instead of the Resilient Distributed Dataset (RDD) API:
    - If you select the check box, the components inside the Job run with DS, which improves performance.
    - If you clear the check box, the components inside the Job run with RDD, which means the Job remains unchanged. This ensures backward compatibility.
    This check box is selected by default, but if you import a Job from a release earlier than 7.3, the check box is cleared, as those Jobs run with RDD.
    Important: If your Job contains tDeltaLakeInput and tDeltaLakeOutput components, you must select this check box.
  - Use timestamp for dataset components: Select this check box to use java.sql.Timestamp for dates.
    Note: If you leave this check box cleared, either java.sql.Timestamp or java.sql.Date can be used, depending on the pattern.
  - Parallelize output files writing: Select this check box to enable the Spark Batch Job to run multiple threads in parallel when writing output files. This option improves the execution time. When you leave this check box cleared, the output files are written sequentially within one thread.
    At the subJob level, each subJob is treated sequentially; only the output files inside a subJob are parallelized.
    This option is only available for Spark Batch Jobs containing the following output components:
    - tAvroOutput
    - tFileOutputDelimited (only when the Use dataset API in migrated components check box is selected)
    - tFileOutputParquet
    Important: To avoid memory problems during the execution of the Job, take into account the size of the files being written and the capacity of the execution environment before using this parameter.
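  For context only, the minimal Scala sketch below pictures the time-zone behavior described above in plain Spark terms. The mapping of the Use local timezone check box to the spark.sql.session.timeZone property is an assumption made for this illustration; Talend Studio sets the value for you through the check box.

  ```scala
  // Illustration only: a plain Spark application, not Talend-generated code.
  import org.apache.spark.sql.SparkSession

  object TimezoneSketch {
    def main(args: Array[String]): Unit = {
      val spark = SparkSession.builder()
        .appName("timezone-sketch")
        // Check box cleared: date/time values are evaluated in UTC.
        .config("spark.sql.session.timeZone", "UTC")
        .getOrCreate()

      // Check box selected: the local time zone provided by the system is used instead.
      // spark.conf.set("spark.sql.session.timeZone", java.util.TimeZone.getDefault.getID)

      spark.stop()
    }
  }
  ```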
- Complete the Dataproc parameters:
  - Project ID: Enter the ID of your Google Cloud Platform project.
  - Cluster ID: Enter the ID of the Dataproc cluster to be used.
  - Region: Enter the name of the Google Cloud region to be used.
  - Google Storage staging bucket: As a Talend Job expects its dependent JAR files for execution, specify the Google Storage directory to which these JAR files are transferred so that your Job can access them at execution time.
  - Provide Google Credentials: Leave this check box cleared when you launch your Job from a machine on which the Google Cloud SDK has been installed and authorized to use your user account credentials to access Google Cloud Platform. In this situation, that machine is often your local machine.
  - Credential type: Select the mode to be used to authenticate to your project (illustrated in the sketch after this list):
    - Service account: authenticate using a Google account that is associated with your Google Cloud Platform project. When you select this mode, the parameter to be defined is Path to Google Credentials file.
    - OAuth2 Access Token: authenticate the access using OAuth credentials. When you select this mode, the parameter to be defined is OAuth2 Access Token.
  - Service account: Enter the path to the credentials file associated with the user account to be used. This file must be stored on the machine on which your Talend Job is actually launched and executed.
  - OAuth2 Access Token: Enter an access token.
    Important: The token is only valid for one hour. Talend Studio does not refresh the token, so you must generate a new one once the one-hour limit is exceeded. You can generate an OAuth Access Token on the Google Developers OAuth Playground by going to BigQuery API v2 and choosing all the needed permissions (bigquery, devstorage.full_control, and cloud-platform).
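  For context only, the sketch below shows roughly how these credential modes map onto the Google auth library (google-auth-library-java): Application Default Credentials when Provide Google Credentials is cleared, a service account JSON key file, and a short-lived OAuth2 access token. The file path and scope are placeholders; Talend Studio performs the equivalent steps for you.

  ```scala
  // Illustration only: not Talend-generated code; the key file path is a placeholder.
  import java.io.FileInputStream
  import java.util.Collections
  import com.google.auth.oauth2.GoogleCredentials

  object DataprocAuthSketch {
    def main(args: Array[String]): Unit = {
      // "Provide Google Credentials" cleared: rely on Application Default Credentials,
      // for example the user account authorized through the Google Cloud SDK (gcloud).
      val adc = GoogleCredentials.getApplicationDefault()

      // "Credential type" = Service account: load the JSON key file that the
      // Service account field points to.
      val serviceAccount = GoogleCredentials
        .fromStream(new FileInputStream("/path/to/credentials.json")) // placeholder path
        .createScoped(Collections.singletonList("https://www.googleapis.com/auth/cloud-platform"))

      // "Credential type" = OAuth2 Access Token: a short-lived token such as the one
      // obtained below (or from the OAuth Playground); it expires after about one hour.
      val token = serviceAccount.refreshAccessToken().getTokenValue
      println(s"Access token (truncated): ${token.take(12)}...")
    }
  }
  ```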
- In the Spark "scratch" directory field, enter the directory in which Talend Studio stores, on the local system, temporary files such as the JAR files to be transferred. If you launch the Job on Windows, the default disk is C:; so, if you leave /tmp in this field, this directory is C:/tmp.
- If you need the Job to be resilient to failure, select the Activate checkpointing check box to enable the Spark checkpointing operation. In the Checkpoint directory field, enter the directory in which Spark stores, in the file system of the cluster, the context data of the computation, such as the metadata and the generated RDDs of that computation.
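  As a point of reference, the sketch below shows what the Activate checkpointing option amounts to in plain Spark terms; the checkpoint directory is a placeholder and Talend Studio wires this up for you from the Checkpoint directory field.

  ```scala
  // Illustration only: a plain Spark application, not Talend-generated code.
  import org.apache.spark.sql.SparkSession

  object CheckpointSketch {
    def main(args: Array[String]): Unit = {
      val spark = SparkSession.builder().appName("checkpoint-sketch").getOrCreate()
      val sc = spark.sparkContext

      // Equivalent of the "Checkpoint directory" field: where Spark persists
      // computation context data on the cluster file system.
      sc.setCheckpointDir("hdfs:///tmp/checkpoints") // placeholder directory

      val rdd = sc.parallelize(1 to 1000).map(_ * 2)
      rdd.checkpoint()      // truncate the lineage so the Job can recover from failure
      println(rdd.count())  // the checkpoint is materialized when an action runs

      spark.stop()
    }
  }
  ```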
- In the Advanced properties table, add any Spark properties you need to use to override their default counterparts used by Talend Studio.
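  The Advanced properties table takes raw Spark property key/value pairs. The entries below are common, hypothetical examples of such overrides, shown here as they would look on a plain SparkConf; the values are illustrative only.

  ```scala
  // Illustration only: example property overrides, not Talend defaults.
  import org.apache.spark.SparkConf

  object AdvancedPropsSketch {
    val conf: SparkConf = new SparkConf()
      .set("spark.executor.memory", "4g")   // example value
      .set("spark.executor.cores", "2")     // example value
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  }
  ```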
Results