
tOracleInput properties for Apache Spark Batch

These properties are used to configure tOracleInput running in the Spark Batch Job framework.

The Spark Batch tOracleInput component belongs to the Databases family.

This component also allows you to connect to and read data from an RDS Oracle database.

The component in this framework is available in all subscription-based Talend products with Big Data and Talend Data Fabric.

Basic settings

Property type

Either Built-In or Repository.

Built-In: No property data stored centrally.

Repository: Select the repository file where the properties are stored.

Click this icon to open a database connection wizard and store the database connection parameters you set in the component Basic settings view.

For more information about setting up and storing database connection parameters, see Centralizing database metadata.

Use an existing connection

Select this check box and, in the Component List drop-down list, select the desired connection component to reuse the connection details you already defined.

Note: When a Job contains a parent Job and a child Job, do the following to share an existing connection between the two levels (for example, to share the connection created by the parent Job with the child Job).
  1. In the parent level, register the database connection to be shared in the Basic settings view of the connection component which creates that very database connection.
  2. In the child level, use a dedicated connection component to read that registered database connection.

For an example about how to share a database connection across Job levels, see Sharing a database connection.

Connection type

The available drivers are:

  • Oracle OCI: Select this connection type to use Oracle Call Interface with a set of C-language software APIs that provide an interface to the Oracle database.

  • Oracle Custom: Select this connection type to access a clustered database. With this type of connection, the Username and the Password fields are deactivated and you need to enter the connection URL in the URL field that is displayed.

    For further information about the valid form of this URL, see JDBC Connection strings in the Oracle documentation. A sample URL is shown after this list.

  • Oracle Service Name: Select this connection type to use the TNS alias that you give when you connect to the remote database.

  • WALLET: Select this connection type to store credentials in an Oracle wallet.

  • Oracle SID: Select this connection type to uniquely identify a particular database on a system.
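
For illustration, a minimal Oracle Custom URL for a two-node clustered database might look as follows; the host names, port, and service name are placeholders, not values from this documentation:

  jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=node1.example.com)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=node2.example.com)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=myservice)))

Refer to the Oracle documentation linked above for the full syntax and the available failover options.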

DB Version

Select the Oracle version in use.

Host

Database server IP address.

Port

Listening port number of the DB server.

Database

Name of the database.

Oracle schema

Oracle schema name.

Username and Password

DB user authentication data.

To enter the password, click the [...] button next to the password field, enter the password in double quotes in the pop-up dialog box, and click OK to save the settings.

Schema and Edit Schema

A schema is a row description; it defines the number of fields to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion.

    If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the Repository Content window.

 

Built-In: You create and store the schema locally for this component only.

 

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs.

Table Name

Type in the name of the table from which you need to read data.

Query type and Query

Specify the database query statement, paying particular attention to the proper sequence of the fields, which must correspond to the schema definition.

From Spark V2.0 onwards, Spark SQL no longer recognizes the prefix of a database table. This means that you must enter only the table name, without any prefix indicating, for example, the schema this table belongs to.

For example, to query a table system.mytable, where the system prefix indicates the schema that the mytable table belongs to, you must enter mytable only, as illustrated below.
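
For illustration, assuming a hypothetical table system.mytable with columns id and name, the value of the Query field would change as follows (in the component, the query is entered as a double-quoted string):

  "SELECT id, name FROM system.mytable"   (recognized before Spark V2.0 only)
  "SELECT id, name FROM mytable"          (Spark V2.0 onwards: table name without the schema prefix)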

Advanced settings

Additional JDBC parameters

Specify additional connection properties for the database connection you are creating. The properties are separated by semicolons and each property is a key-value pair, for example, encryption=1;clientname=Talend.

This field is not available if the Use an existing connection check box is selected.

Spark SQL JDBC parameters

Add the JDBC properties supported by Spark SQL to this table. For a list of the user-configurable properties, see JDBC to other databases in the Spark documentation.

This component automatically sets the url, dbtable and driver properties by using the configuration from the Basic settings tab.
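
As an illustration only, the following Java sketch approximates the Spark read that the component performs; all connection values are placeholders, and fetchsize stands in for any Spark SQL JDBC parameter you might add to this table:

  import org.apache.spark.sql.Dataset;
  import org.apache.spark.sql.Row;
  import org.apache.spark.sql.SparkSession;

  public class OracleJdbcReadSketch {
      public static void main(String[] args) {
          SparkSession spark = SparkSession.builder()
                  .appName("tOracleInput sketch")
                  .getOrCreate();

          // url, dbtable and driver mirror what the component derives from
          // the Basic settings tab; all values below are placeholders.
          Dataset<Row> oracleRows = spark.read()
                  .format("jdbc")
                  .option("url", "jdbc:oracle:thin:@//db.example.com:1521/ORCL")
                  .option("dbtable", "mytable")
                  .option("driver", "oracle.jdbc.OracleDriver")
                  .option("user", "talend_user")
                  .option("password", "talend_password")
                  // Example of a user-configurable Spark SQL JDBC parameter:
                  .option("fetchsize", "1000")
                  .load();

          oracleRows.show();
      }
  }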

Trim all the String/Char columns

Select this check box to remove leading and trailing whitespace from all the String/Char columns.

Trim column

Remove leading and trailing whitespace from defined columns.

Enable partitioning

Select this check box to read data in partitions.

Define, in double quotation marks, the following parameters to configure the partitioning:
  • Partition column: the numeric column used as partition key.

  • Lower bound of the partition stride and Upper bound of the partition stride: enter the lower bound and the upper bound to determine the partition stride. These bounds do not filter the table rows; all rows in the table are partitioned and returned.

  • Number of partitions: the number of partitions into which the table rows are split. Each Spark worker handles only one of the partitions at a time.

The average partition size is the difference between the upper bound and the lower bound divided by the number of partitions, that is, (upperBound - lowerBound)/partitionNumber. The first and the last partitions also include all the rows that fall outside these bounds.

For example, to partition 1000 rows into 4 partitions, if you enter 0 for the lower bound and 1000 for the upper bound, each partition contains 250 rows, so the partitioning is even. If you enter 250 for the lower bound and 750 for the upper bound, the second and the third partitions each contain 125 rows, while the first and the last partitions each contain 375 rows. With this configuration, the partitioning is skewed.
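
A minimal Java sketch of the equivalent partitioned Spark JDBC read, using the skewed example above; all connection values are placeholders:

  import org.apache.spark.sql.Dataset;
  import org.apache.spark.sql.Row;
  import org.apache.spark.sql.SparkSession;

  public class PartitionedReadSketch {
      public static void main(String[] args) {
          SparkSession spark = SparkSession.builder()
                  .appName("tOracleInput partitioning sketch")
                  .getOrCreate();

          // partitionColumn, lowerBound, upperBound and numPartitions map to
          // the four parameters described above; stride = (750 - 250) / 4 = 125.
          Dataset<Row> partitioned = spark.read()
                  .format("jdbc")
                  .option("url", "jdbc:oracle:thin:@//db.example.com:1521/ORCL")
                  .option("dbtable", "mytable")
                  .option("driver", "oracle.jdbc.OracleDriver")
                  .option("user", "talend_user")
                  .option("password", "talend_password")
                  .option("partitionColumn", "id")  // numeric partition key
                  .option("lowerBound", "250")
                  .option("upperBound", "750")
                  .option("numPartitions", "4")
                  .load();

          // Spark generates roughly these WHERE clauses, giving the skewed
          // 375/125/125/375 split described in the example above:
          //   id < 375
          //   id >= 375 AND id < 500
          //   id >= 500 AND id < 625
          //   id >= 625
          partitioned.show();
      }
  }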

Usage

Usage rule

This component is used as a start component and requires an output link.

This component should use a tOracleConfiguration component present in the same Job to connect to Oracle. You need to select the Use an existing connection check box and then select the tOracleConfiguration component to be used.

This component, along with the Spark Batch component Palette it belongs to, appears only when you are creating a Spark Batch Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Spark Connection

In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access them:
  • Yarn mode (Yarn client or Yarn cluster):
    • When using Google Dataproc, specify a bucket in the Google Storage staging bucket field in the Spark configuration tab.

    • When using HDInsight, specify the blob to be used for Job deployment in the Windows Azure Storage configuration area in the Spark configuration tab.

    • When using Altus, specify the S3 bucket or the Azure Data Lake Storage for Job deployment in the Spark configuration tab.
    • When using on-premises distributions, use the configuration component corresponding to the file system your cluster is using. Typically, this system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the configuration component corresponding to the file system your cluster is using, such as tHDFSConfiguration Apache Spark Batch or tS3Configuration Apache Spark Batch.

    If you are using Databricks without any configuration component present in your Job, your business data is written directly in DBFS (Databricks Filesystem).

This connection is effective on a per-Job basis.
