Selecting the Spark mode
Depending on the Spark cluster to be used, select a Spark mode for your Job.
The Spark documentation provides an exhaustive list of Spark properties and their default values at Spark Configuration. A Spark Job designed in the Studio uses this default configuration, except for the properties you explicitly define in the Spark Configuration tab or in the components used in your Job.
Procedure
- Click Run to open its view, then click the Spark Configuration tab to display the view in which you configure the Spark connection.
- Select the Use local mode check box to test your Job locally.
In local mode, the Studio builds the Spark environment within itself on the fly in order to run the Job. Each processor of the local machine is used as a Spark worker to perform the computations.
In this mode, your local file system is used; therefore, deactivate configuration components such as tS3Configuration or tHDFSConfiguration, which provide connection information to a remote file system, if you have placed any of these components in your Job.
You can launch your Job without any further configuration.
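For reference, this local mode corresponds to running Spark with a local master URL. The sketch below uses the plain Spark Java API outside the Studio, with a trivial hypothetical computation; it only illustrates the kind of environment the Studio sets up for you and is not the code the Studio generates.

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class LocalModeSketch {
    public static void main(String[] args) {
        // "local[*]" runs Spark inside the current JVM and uses every
        // available processor of the local machine as a worker thread.
        SparkConf conf = new SparkConf()
                .setAppName("LocalModeSketch")
                .setMaster("local[*]");

        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            // Any data read or written here goes through the local file system.
            long count = sc.parallelize(Arrays.asList(1, 2, 3, 4)).count();
            System.out.println("count = " + count);
        }
    }
}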
- Clear the Use local mode check box to display the list of available Hadoop distributions and, from this list, select the distribution corresponding to the Spark cluster to be used.
This distribution could be:
- Amazon EMR
  For this distribution, Talend supports:
  - Yarn client
  - Yarn cluster
  Important: Delta Lake is not supported on Amazon EMR.
- For this distribution, Talend supports:
  - Standalone
  - Yarn client
  - Yarn cluster
- For this distribution, Talend supports:
  - Yarn client
- For this distribution, Talend supports:
  - Yarn client
  - Yarn cluster
- For this distribution, Talend supports:
  - Standalone
  - Yarn client
  - Yarn cluster
- For this distribution, Talend supports:
  - Yarn cluster
- Cloudera Altus (see Defining the Cloudera Altus connection parameters)
  For this distribution, Talend supports:
  - Yarn cluster
  Your Altus cluster should run on one of the following Cloud providers:
  - Azure
    The support for Altus on Azure is a technical preview feature.
  - AWS
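The Standalone, Yarn client and Yarn cluster entries above map onto standard Spark master and deploy-mode settings. The Studio generates this configuration for you, but as a rough point of reference, a hand-written equivalent could look like the sketch below; the host name is a placeholder, and Yarn cluster mode is normally selected at submission time, for example with spark-submit --deploy-mode cluster.

import org.apache.spark.SparkConf;

public class ClusterModeSketch {
    // Returns a SparkConf illustrating what each supported mode means in plain Spark terms.
    public static SparkConf buildConf(String mode) {
        SparkConf conf = new SparkConf().setAppName("ClusterModeSketch");
        switch (mode) {
            case "yarn-client":
                // Yarn client: the Spark driver runs on the machine that submits the Job.
                conf.setMaster("yarn").set("spark.submit.deployMode", "client");
                break;
            case "yarn-cluster":
                // Yarn cluster: the Spark driver runs inside the Yarn cluster itself.
                conf.setMaster("yarn").set("spark.submit.deployMode", "cluster");
                break;
            case "standalone":
                // Standalone: point Spark at the cluster's own master URL
                // ("spark-master.example.com" is a placeholder host name).
                conf.setMaster("spark://spark-master.example.com:7077");
                break;
            default:
                throw new IllegalArgumentException("Unknown mode: " + mode);
        }
        return conf;
    }
}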
As a Job relies on Avro to move data among its components, it is recommended to set your cluster to use Kryo to handle the Avro types. This not only helps avoid a known Avro issue but also brings inherent performance gains. The Spark property to be set in your cluster is:
spark.serializer org.apache.spark.serializer.KryoSerializer
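The cluster-wide setting shown above (typically placed in spark-defaults.conf) is the recommended approach. As a minimal per-application alternative, the same property can also be set through the plain Spark Java API, for example:

import org.apache.spark.SparkConf;

public class KryoSerializerSketch {
    public static SparkConf buildConf() {
        return new SparkConf()
                .setAppName("KryoSerializerSketch")
                // Same property as the cluster-wide setting above,
                // applied to a single application only:
                .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer");
    }
}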
If you cannot find the distribution corresponding to yours in this drop-down list, the distribution you want to connect to is not officially supported by Talend. In this situation, you can select Custom, then select the Spark version of the cluster to be connected to, and click the [+] button to display the dialog box in which you can alternatively:
- Select Import from existing version to import an officially supported distribution as a base and then add the required jar files that the base distribution does not provide.
- Select Import from zip to import the configuration zip for the custom distribution to be used. This zip file should contain the libraries of the different Hadoop/Spark elements and the index file of these libraries.
Note that custom versions are not officially supported by Talend. Talend and its community provide you with the opportunity to connect to custom versions from the Studio but cannot guarantee that the configuration of whichever version you choose will be easy. As such, you should only attempt to set up such a connection if you have sufficient Hadoop and Spark experience to handle any issues on your own.
For a step-by-step example of how to connect to a custom distribution and share this connection, see Hortonworks.