
tGoogleCloudConfiguration properties for Apache Spark Streaming

These properties are used to configure tGoogleCloudConfiguration running in the Spark Streaming Job framework.

The Spark Streaming tGoogleCloudConfiguration component belongs to the Storage family.

This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.

Basic settings

Project identifier

Enter the ID of your Google Cloud Platform project.

If you are not certain about your project ID, check it on the Manage Resources page of your Google Cloud Platform services.
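
If you prefer to verify the project ID programmatically, the google-cloud-core Java client library (not part of this component) can resolve the default project from your environment. A minimal sketch, assuming that library is on the classpath and your environment is already authorized:

    import com.google.cloud.ServiceOptions;

    public class CheckProjectId {
        public static void main(String[] args) {
            // Resolves the default project ID from the environment, for example
            // the GOOGLE_CLOUD_PROJECT variable, the active gcloud configuration,
            // or the credentials file, in the library's lookup order.
            String projectId = ServiceOptions.getDefaultProjectId();
            System.out.println("Default project ID: " + projectId);
        }
    }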

Use Google Cloud Platform credentials file

Leave this check box clear when you launch your Job from a machine on which the Google Cloud SDK has been installed and authorized to use your user account credentials to access Google Cloud Platform. In this situation, this machine is often your local machine.

When you launch your Job from a remote machine, such as a Jobserver, select this check box, select the format of your credentials file, and then, in the Path to Google Credentials file field that is displayed, enter the directory in which this credentials file is stored on the Jobserver machine.

If the file is in P12 format, also enter, in the Service account Id field that is displayed, the ID of the service account for which this P12 credentials file has been created.

For further information about this Google Credentials file, contact the administrator of your Google Cloud Platform or see the Google Cloud Platform Auth Guide.
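
For illustration only, below is a minimal sketch of how a JSON service account key is typically loaded with the google-auth-library Java client. The file path is hypothetical, and this is not the code the component itself generates:

    import com.google.auth.oauth2.GoogleCredentials;

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Collections;

    public class LoadServiceAccountKey {
        public static void main(String[] args) throws IOException {
            // Hypothetical location; on a Jobserver this corresponds to the value
            // you enter in the Path to Google Credentials file field.
            String keyPath = "/opt/keys/service-account.json";

            try (FileInputStream in = new FileInputStream(keyPath)) {
                GoogleCredentials credentials = GoogleCredentials.fromStream(in)
                        .createScoped(Collections.singletonList(
                                "https://www.googleapis.com/auth/cloud-platform"));
                // Fails here if the key file is invalid or revoked.
                credentials.refreshIfExpired();
                System.out.println("Service account key accepted: " + keyPath);
            }
        }
    }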

Usage

Usage rule

This component is used only when your Job needs to connect to Google Cloud Platform and the cluster you use to run Spark is not Dataproc.

It works standalone in a subJob to provide the connection configuration for the whole Job.

Spark Connection

In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, because the Job requires its dependent JAR files for execution, you must specify the directory in the file system to which these JAR files are transferred so that Spark can access them:
  • Yarn mode (Yarn client or Yarn cluster):
    • When using Google Dataproc, specify a bucket in the Google Storage staging bucket field in the Spark configuration tab.
    • When using HDInsight, specify the blob to be used for Job deployment in the Windows Azure Storage configuration area in the Spark configuration tab.
    • When using Altus, specify the S3 bucket or the Azure Data Lake Storage for Job deployment in the Spark configuration tab.
    • When using Qubole, add a tS3Configuration to your Job to write your actual business data in the S3 system with Qubole. Without tS3Configuration, this business data is written in the Qubole HDFS system and destroyed once you shut down your cluster.
    • When using on-premises distributions, use the configuration component corresponding to the file system your cluster is using. Typically, this system is HDFS, so use tHDFSConfiguration.

  • Standalone mode: use the configuration component corresponding to the file system your cluster is using, such as tHDFSConfiguration or tS3Configuration.

    If you are using Databricks without any configuration component present in your Job, your business data is written directly in DBFS (Databricks Filesystem).

This connection is effective on a per-Job basis.
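
Conceptually, the values you set in this component end up as Spark and Hadoop properties for the Google Cloud Storage connector. Below is a minimal sketch of an equivalent manual configuration, with hypothetical project, key file, and bucket names; the actual property wiring is handled by the component and may differ:

    import org.apache.spark.SparkConf;

    public class GcsConnectionSketch {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf()
                    .setAppName("gcs_connection_sketch")
                    // GCS connector (Hadoop) properties, passed through the
                    // spark.hadoop.* prefix; values are illustrative.
                    .set("spark.hadoop.fs.gs.project.id", "my-gcp-project")
                    .set("spark.hadoop.google.cloud.auth.service.account.enable", "true")
                    .set("spark.hadoop.google.cloud.auth.service.account.json.keyfile",
                            "/opt/keys/service-account.json")
                    // Staging directory for Job dependencies in Yarn mode,
                    // comparable to the Google Storage staging bucket field.
                    .set("spark.yarn.stagingDir", "gs://my-staging-bucket/staging");
            System.out.println(conf.toDebugString());
        }
    }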
