tPubSubOutput properties for Apache Spark Streaming
These properties are used to configure tPubSubOutput running in the Spark Streaming Job framework.
The Spark Streaming tPubSubOutput component belongs to the Messaging family.
If you are using Dataproc 1.4 or later as your Spark cluster, make sure to select the Allow API access to all Google Cloud services in the same project check box when creating the cluster on Google Cloud Platform, so that the component can access the Pub/Sub service.
This component is available in Talend Real-Time Big Data Platform and Talend Data Fabric.
Basic settings
Schema and Edit schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word "line" when naming the fields. Note that the schema of this component is read-only: it stores the messages to be published. |
Define a Google Cloud configuration component |
If you are using Dataproc as your Spark cluster, clear this check box. Otherwise, select this check box to allow the Pub/Sub component to use the Google Cloud configuration information provided by a tGoogleCloudConfiguration component. |
Topic name |
Enter the name of the topic you want to publish messages to. This topic must already exist; see the example after this table for one way to create it. |
Topic operation |
Select the operation to be performed on the specified topic:
|
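As a minimal sketch, the topic can be created beforehand with the gcloud command line, assuming the gcloud CLI is installed and authenticated (my-topic is a placeholder name to replace with your own):

    # my-topic is a placeholder topic name
    gcloud pubsub topics create my-topic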
Advanced settings
Connection pool |
In this area, you configure, for each Spark executor, the connection pool used to control the number of connections that stay open simultaneously. The default values given to the following connection pool parameters are good enough for most use cases.
|
Evict connections |
Select this check box to define criteria to destroy connections in the connection pool. The following fields are displayed once you have selected it.
|
Usage
Usage rule |
This component is used as an end component and requires an input link. It needs a Write component such as tWriteJSONField to define a serializedValue column in the input schema in order to send serialized data. |
Spark Connection |
In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent JAR files for execution, you must specify the directory in the file system to which these JAR files are transferred so that Spark can access them. This connection is effective on a per-Job basis. |
PubSub access permissions |
When you use Pub/Sub with a Dataproc cluster, ensure that this cluster has the appropriate permissions to access the Pub/Sub service. To do this, you can create the Dataproc cluster with the Allow API access to all Google Cloud services in the same project check box selected in the advanced options on Google Cloud Platform, or via the command line, assigning the scopes explicitly (the following example is for a low-resource test cluster):
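A minimal sketch using the gcloud command line, where my-test-cluster, europe-west1-b, and my-project are placeholder values to replace with your own:

    # Placeholder values: my-test-cluster, europe-west1-b, my-project.
    # Machine types and worker count are sized for a small test cluster.
    gcloud dataproc clusters create my-test-cluster \
        --zone europe-west1-b \
        --num-workers 2 \
        --master-machine-type n1-standard-2 \
        --worker-machine-type n1-standard-2 \
        --scopes 'https://www.googleapis.com/auth/cloud-platform' \
        --project my-project

Assigning the cloud-platform scope gives the cluster access to all Google Cloud APIs in the project, including Pub/Sub.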
|