
Defining Spark-submit scripts connection parameters with Spark Universal

The Spark-submit scripts mode allows you to use an HPE Ezmeral Data Fabric v9.1.x cluster to run your Spark Batch Jobs.

For further information about HPE Ezmeral Data Fabric, see its documentation.

You can also use this mode with clusters other than HPE Ezmeral Data Fabric, because spark-submit scripts are designed to work with all of Spark’s supported cluster managers, as described in the cluster managers section of the Spark documentation.
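
As an illustration only (this code is not generated by Talend Studio), the following PySpark sketch shows that targeting a different cluster manager comes down to the master URL, which is the value that a spark-submit script passes through its --master option; the master URLs listed in the comments are hypothetical placeholders.

    # Minimal sketch: the same PySpark job can target any of Spark's supported
    # cluster managers by changing the master URL, the value that spark-submit
    # passes through its --master option.
    from pyspark.sql import SparkSession

    # Hypothetical master URLs -- replace with the one your cluster exposes:
    #   "local[*]"                     local mode, for quick tests
    #   "yarn"                         Hadoop YARN (for example, HPE Ezmeral Data Fabric)
    #   "spark://master-host:7077"     Spark standalone
    #   "k8s://https://api-host:6443"  Kubernetes
    spark = (
        SparkSession.builder
        .appName("cluster_manager_demo")
        .master("local[*]")
        .getOrCreate()
    )

    print(spark.range(5).count())
    spark.stop()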

Procedure

  1. Click the Run view beneath the design workspace, then click the Spark configuration view.
  2. Select Built-in from the Property type drop-down list.
    If you have already set up the connection parameters in the Repository as explained in Centralizing a Hadoop connection, you can easily reuse them. To do this, select Repository from the Property type drop-down list, then click the […] button to open the Repository Content dialog box and select the Hadoop connection to be used.
    Tip: Setting up the connection in the Repository allows you to avoid configuring that connection each time you need it in the Spark configuration view of your Jobs. The fields are filled in automatically.
  3. Select Universal from the Distribution drop-down list, the Spark version from the Version drop-down list, and Spark-submit scripts from the Runtime mode/environment drop-down list.
  4. Specify the path to the directory on the cluster where the spark-submit script is stored, for example, /opt/mapr/spark/spark-3.3.2.
  5. If you need to launch your Spark Job from Windows, specify where the winutils.exe program to be used is stored:
    • If you know where to find your winutils.exe file and you want to use it, select the Define the Hadoop home directory check box and enter the directory where your winutils.exe is stored.

    • Otherwise, leave the Define the Hadoop home directory check box clear; Talend Studio generates a winutils.exe file itself and automatically uses it for this Job.

  6. Enter the basic configuration information:

    Use local timezone: Select this check box to let Spark use the local time zone provided by the system.
    • If you clear this check box, Spark uses the UTC time zone.
    • Some components also have a Use local timezone for date check box. If you clear that check box in the component, the component inherits the time zone from the Spark configuration.

    Use dataset API in migrated components: Select this check box to let the components use the Dataset (DS) API instead of the Resilient Distributed Dataset (RDD) API:
    • If you select the check box, the components inside the Job run with DS, which improves performance.
    • If you clear the check box, the components inside the Job run with RDD, which means the Job remains unchanged. This ensures backward compatibility.

    This check box is selected by default, but if you import a Job created in release 7.3 or earlier, the check box is cleared, as those Jobs run with RDD.

    Important: If your Job contains tDeltaLakeInput and tDeltaLakeOutput components, you must select this check box.

    Use timestamp for dataset components: Select this check box to use java.sql.Timestamp for dates.

    If you leave this check box clear, java.sql.Timestamp or java.sql.Date can be used depending on the pattern.

  7. In the Spark "scratch" directory field, enter the local directory in which Talend Studio stores the temporary files.

    For example, the jar files to be transferred are stored here.

    If you launch the Job on Windows, the default disk is C:. As a result, if you leave /tmp in this field, this directory is C:/tmp.
  8. If you need the Job to be resilient to failure, select the Activate checkpointing check box to enable the Spark checkpointing operation.
  9. In the Checkpoint directory field, enter the directory, in the file system of the cluster, in which Spark stores the context data of the computation, such as the metadata and the generated RDDs. A minimal sketch of the equivalent Spark API call follows this procedure.
  10. In the Advanced properties table, add any Spark properties you need in order to override the default values used by Talend Studio. A sketch of equivalent property overrides also follows this procedure.
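
The following minimal sketch, which assumes a plain PySpark session rather than a Talend-generated Job, shows the Spark API that corresponds to the Activate checkpointing and Checkpoint directory settings; the directory path is a hypothetical example.

    # Minimal sketch of Spark checkpointing in a plain PySpark session; in
    # Talend Studio the equivalent is the Activate checkpointing check box
    # and the Checkpoint directory field.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("checkpoint_demo").getOrCreate()

    # Hypothetical directory in the cluster file system where Spark keeps
    # checkpoint data (metadata and generated RDDs).
    spark.sparkContext.setCheckpointDir("/user/talend/checkpoints")

    rdd = spark.sparkContext.parallelize(range(10))
    rdd.checkpoint()   # mark the RDD for checkpointing
    print(rdd.sum())   # the action triggers the computation and the checkpoint
    spark.stop()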
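
Similarly, the following sketch illustrates what overriding Spark properties amounts to outside Talend Studio. The property names are standard Spark settings, but the values are hypothetical examples, and the exact defaults applied by Talend Studio may differ.

    # Minimal sketch of overriding Spark properties in a plain PySpark
    # session; in Talend Studio the same key/value pairs would go into the
    # Advanced properties table of the Spark configuration view.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("advanced_properties_demo")
        # Standard Spark properties with hypothetical values:
        .config("spark.executor.memory", "4g")
        .config("spark.sql.shuffle.partitions", "64")
        .config("spark.sql.session.timeZone", "UTC")  # session time zone used by Spark SQL
        .getOrCreate()
    )

    print(spark.conf.get("spark.sql.session.timeZone"))
    spark.stop()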

Results

The connection details are complete; you are ready to schedule executions of your Spark Job or to run it immediately.
