tPubSubInput properties for Apache Spark Streaming
These properties are used to configure tPubSubInput running in the Spark Streaming Job framework.
The Spark Streaming tPubSubInput component belongs to the Messaging family.
If you are using Dataproc 1.4 or later as your Spark cluster, make sure to select the Allow API access to all Google Cloud services in the same project check box at cluster creation on Google Cloud Platform so that the component can access the Pub/Sub service.
This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.
Basic settings
Define a Google Cloud configuration component |
If you are using Dataproc as your Spark cluster, clear this check box. Otherwise, select this check box to allow the Pub/Sub component to use the Google Cloud configuration information provided by a tGoogleCloudConfiguration component. |
Schema and Edit schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields. Note that the schema of this component is read-only. It stores the message body sent from the message producer. |
Output type |
Select the type of the data to be sent to the next component. Typically, using String is recommended, because tPubSubInput can automatically translate the Pub/Sub byte[] messages into strings to be processed by the Job. However, if the format of the messages is not known to tPubSubInput, such as Protobuf, you can select byte[] and then use a Custom code component such as tJavaRow to deserialize the messages into strings so that the other components of the same Job can process them, as illustrated in the sketch after this table. |
Topic name |
Enter the name of the topic from which you want to consume messages. |
Subscription name |
Enter the name of the subscription that consumes the specified topic. If the subscription exists, it must be connected to the given topic; if it does not exist, it is created and connected to the given topic at runtime. |
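For illustration, here is a minimal tJavaRow sketch for the byte[] case. It is not part of the component itself; the column name payload and the UTF-8 decoding are assumptions to adapt to your actual schema and message format:

// Hypothetical tJavaRow code: decode the Pub/Sub byte[] payload into a String.
// "payload" is an assumed column name; replace it with the byte[] column of your
// input schema and the String column of your output schema. For formats such as
// Protobuf, replace this decoding with the corresponding generated parser.
output_row.payload = new String(input_row.payload, java.nio.charset.StandardCharsets.UTF_8);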
Advanced settings
Storage level |
From the Storage level drop-down list, select how the cached RDDs are stored, such as in memory only or in memory and on disk. For further information about each storage level, see https://spark.apache.org/docs/latest/programming-guide.html#rdd-persistence. |
Usage
Usage rule |
This component is used as a start component and requires an output link. |
Spark Connection |
In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent JAR files for execution, you must specify the directory in the file system to which these JAR files are transferred so that Spark can access them.
This connection is effective on a per-Job basis. |
PubSub access permissions |
When you use Pub/Sub with a Dataproc cluster, ensure that this cluster has the appropriate permissions to access the Pub/Sub service. To do this, you can create the Dataproc cluster by selecting Allow API access to all Google Cloud services in the same project in the advanced options on Google Cloud Platform, or via the command line, assigning the scopes explicitly (the following example is for a low-resource test cluster):
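A hedged sketch of such a command follows; the cluster name, region, and machine settings are placeholder values to adapt to your project:

# Hypothetical example; adjust the name, region, and machine settings.
# To grant only Pub/Sub access instead of all Google Cloud services,
# use the scope https://www.googleapis.com/auth/pubsub.
gcloud dataproc clusters create my-test-cluster \
  --region europe-west1 \
  --single-node \
  --master-machine-type n1-standard-2 \
  --master-boot-disk-size 50GB \
  --scopes https://www.googleapis.com/auth/cloud-platform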
|