
tJDBCConfiguration properties for Apache Spark Batch

These properties are used to configure tJDBCConfiguration running in the Spark Batch Job framework.

The Spark Batch tJDBCConfiguration component belongs to the Storage and the Databases families.

The component in this framework is available in all subscription-based Talend products with Big Data and Talend Data Fabric.

Basic settings

Property type

Either Built-In or Repository.

Built-In: No property data stored centrally.

Repository: Select the repository file where the properties are stored.

JDBC URL

The JDBC URL of the database to be used. For example, the JDBC URL for the Amazon Redshift database is jdbc:redshift://endpoint:port/database.

  • If you are using Spark V1.3, this URL should contain the authentication information, such as:
    jdbc:mysql://XX.XX.XX.XX:3306/Talend?user=ychen&password=talend
  • If you are using Databricks, this JDBC URL value can be found on the JDBC/ODBC tab of the Web UI of your Databricks cluster. To access this tab, on the Configuration tab of your Databricks cluster page, scroll down to the bottom of the page and click the JDBC/ODBC tab.
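
For reference, the sketch below shows how these two URL styles behave with plain JDBC. It is a minimal illustration with placeholder host, database, and credentials, not code generated by Talend:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class JdbcUrlStyles {
        public static void main(String[] args) throws Exception {
            // Spark V1.3: authentication is embedded in the JDBC URL itself
            // (host, database, and credentials are placeholders).
            String v13Url =
                "jdbc:mysql://XX.XX.XX.XX:3306/Talend?user=ychen&password=talend";

            // Spark V1.4 and onwards: the URL carries no credentials; they
            // are supplied separately, as in the Username and Password fields.
            String v14Url = "jdbc:mysql://XX.XX.XX.XX:3306/Talend";
            try (Connection conn =
                    DriverManager.getConnection(v14Url, "ychen", "talend")) {
                System.out.println("Connected to " + conn.getMetaData().getURL());
            }
        }
    }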

Driver JAR

Complete this table to load the driver JARs needed. To do this, click the [+] button under the table to add as many rows as needed, one row per driver JAR. Then select a cell and click the [...] button on its right side to open the Module dialog box, from which you can select the driver JAR to be used: for example, RedshiftJDBC41-1.1.13.1013.jar for the Redshift database.

For more information, see Importing a database driver.

Driver Class

Enter the class name for the specified driver between double quotation marks. For example, for the RedshiftJDBC41-1.1.13.1013.jar driver, the name to be entered is com.amazon.redshift.jdbc41.Driver.
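
If you are unsure whether the class name matches the driver JAR you imported, you can check it outside Talend by loading the class reflectively. This is a minimal sketch using the Redshift example above; the driver JAR must be on the classpath when you run it:

    public class DriverClassCheck {
        public static void main(String[] args) throws ClassNotFoundException {
            // Throws ClassNotFoundException if the name or the JAR is wrong.
            Class.forName("com.amazon.redshift.jdbc41.Driver");
            System.out.println("Driver class found on the classpath.");
        }
    }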

Username and Password

Enter the authentication information to the database you need to connect to.

To enter the password, click the [...] button next to the password field, enter the password in double quotes in the pop-up dialog box, and click OK to save the settings.

If you are using Databricks, enter token in the Username field and your Databricks token in the Password field. This token is the authentication token generated for your Databricks user account. You can generate or find this token on the User settings page of your Databricks workspace. For more information, see Manage personal access tokens from the Azure documentation.

Available only for Spark V1.4 and onwards.

Additional JDBC parameters

Specify additional connection properties for the database connection you are creating. The properties are separated by semicolons, and each property is a key-value pair, for example, encryption=1;clientname=Talend.

This field is not available if the Use an existing connection check box is selected.
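
To make the expected format concrete, the hypothetical helper below splits such a string into key-value pairs the way a JDBC driver would consume them. It illustrates the format only; it is not Talend's parsing code:

    import java.util.Properties;

    public class AdditionalParamsSketch {
        // Splits "encryption=1;clientname=Talend" into key-value pairs.
        static Properties parse(String params) {
            Properties props = new Properties();
            for (String pair : params.split(";")) {
                String[] kv = pair.split("=", 2);
                if (kv.length == 2) {
                    props.setProperty(kv[0].trim(), kv[1].trim());
                }
            }
            return props;
        }

        public static void main(String[] args) {
            // Prints {encryption=1, clientname=Talend}
            System.out.println(parse("encryption=1;clientname=Talend"));
        }
    }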

Advanced settings

Connection pool

In this area, you configure, for each Spark executor, the connection pool used to control the number of connections that stay open simultaneously. The default values given to the following connection pool parameters are good enough for most use cases.

  • Max total number of connections: enter the maximum number of connections (idle or active) that are allowed to stay open simultaneously.

    The default number is 8. If you enter -1, you allow an unlimited number of connections to stay open at the same time.

  • Max waiting time (ms): enter the maximum amount of time the connection pool waits before it must return a response to a request for a connection. By default, it is -1, that is to say, infinite.

  • Min number of idle connections: enter the minimum number of idle connections (connections not used) maintained in the connection pool.

  • Max number of idle connections: enter the maximum number of idle connections (connections not used) maintained in the connection pool.
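
These four parameters correspond to standard connection-pool settings. The sketch below shows equivalent configuration on an Apache Commons DBCP2 BasicDataSource, chosen here for illustration; it is an assumption, not the code Talend generates:

    import org.apache.commons.dbcp2.BasicDataSource;

    public class PoolSettingsSketch {
        public static void main(String[] args) {
            BasicDataSource ds = new BasicDataSource();
            ds.setUrl("jdbc:mysql://XX.XX.XX.XX:3306/Talend"); // placeholder URL
            ds.setUsername("ychen");                           // placeholder credentials
            ds.setPassword("talend");

            ds.setMaxTotal(8);       // Max total number of connections (-1 = unlimited)
            ds.setMaxWaitMillis(-1); // Max waiting time (ms); -1 waits indefinitely
            ds.setMinIdle(0);        // Min number of idle connections
            ds.setMaxIdle(8);        // Max number of idle connections
        }
    }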

Evict connections

Select this check box to define criteria to destroy connections in the connection pool. The following fields are displayed once you have selected it.

  • Time between two eviction runs: enter the time interval (in milliseconds) at the end of which the component checks the status of the connections and destroys the idle ones.

  • Min idle time for a connection to be eligible for eviction: enter the amount of time (in milliseconds) a connection must remain idle before it can be destroyed.

  • Soft min idle time for a connection to be eligible for eviction: this parameter works the same way as Min idle time for a connection to be eligible for eviction, except that it never destroys connections below the minimum you define in the Min number of idle connections field.
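
In the same illustrative DBCP2 terms as the sketch above (again an assumption, not Talend's generated code), the three eviction criteria map to the following pool setters:

    import org.apache.commons.dbcp2.BasicDataSource;

    public class EvictionSettingsSketch {
        public static void main(String[] args) {
            BasicDataSource ds = new BasicDataSource();
            ds.setMinIdle(2); // floor preserved by the "soft" rule below

            ds.setTimeBetweenEvictionRunsMillis(60000); // Time between two eviction runs
            ds.setMinEvictableIdleTimeMillis(120000);   // Min idle time before a
                                                        // connection can be evicted
            // Same idle-time rule, but never evicts below the minimum number
            // of idle connections set via setMinIdle above.
            ds.setSoftMinEvictableIdleTimeMillis(120000);
        }
    }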

Usage

Usage rule

This component is used standalone, with no need to be connected to other components.

The configuration in a tJDBCConfiguration component applies only to the JDBC-related components in the same Job. In other words, JDBC components used in a child or parent Job called via tRunJob cannot reuse this configuration.

Spark Connection

In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent JAR files for execution, you must specify the directory in the file system to which these JAR files are transferred so that Spark can access them:
  • Yarn mode (Yarn client or Yarn cluster):
    • When using Google Dataproc, specify a bucket in the Google Storage staging bucket field in the Spark configuration tab.

    • When using HDInsight, specify the blob to be used for Job deployment in the Windows Azure Storage configuration area in the Spark configuration tab.

    • When using Altus, specify the S3 bucket or the Azure Data Lake Storage for Job deployment in the Spark configuration tab.
    • When using on-premises distributions, use the configuration component corresponding to the file system your cluster is using. Typically, this system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the configuration component corresponding to the file system your cluster is using, such as tHDFSConfiguration or tS3Configuration.

    If you are using Databricks without any configuration component present in your Job, your business data is written directly in DBFS (Databricks Filesystem).

This connection is effective on a per-Job basis.
