tDeltaLakeRow Standard properties
These properties are used to configure tDeltaLakeRow running in the Standard Job framework.
The Standard tDeltaLakeRow component belongs to the Databases family.
The component in this framework is available in all subscription-based Talend products.
Basic settings
Database |
Select the desired database type from the list and click Apply. |
Property Type |
Select the way the connection details will be set.
This property is not available when a connection component is selected from the Connection Component drop-down list. |
Connection Component |
Select the component whose connection you want to reuse from the drop-down list.
Warning: If this component is configured to perform operations on a table, it is strongly recommended that you use an existing connection with the auto-commit function enabled. You can establish a connection of this type using the tDeltaLakeConnection component, with the Auto Commit option selected in its Advanced settings view.
|
JDBC URL |
The JDBC URL of the Delta Lake database to be used, which begins with jdbc:spark:// (prefilled by default). If you have installed the 8.0.1-R2023-05 Talend Studio Monthly update or a later one delivered by Talend, the JDBC URL of the Delta Lake database begins with jdbc:databricks:// instead. See the section Configure JDBC URL in JDBC and ODBC drivers and configuration parameters for related information.
Note: No migration operation is performed for Delta Lake components when the 8.0.1-R2023-05 Talend Studio Monthly update or a later one delivered by Talend is installed. In this case, you may need to update the JDBC URL and other related settings manually for existing Jobs to make sure the JDBC URL begins with jdbc:databricks://.
|
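For illustration only, a Delta Lake JDBC URL for the databricks-jdbc driver generally takes a form like the minimal sketch below. The host name, port, and httpPath values are placeholders rather than values taken from this documentation; your Databricks workspace provides the actual ones.
public class DeltaLakeJdbcUrlSketch {
    // Placeholder host, port, and httpPath: replace them with the values of your
    // own Databricks SQL warehouse or cluster. AuthMech=3 selects user/password
    // (token) authentication.
    static final String JDBC_URL =
            "jdbc:databricks://adb-1234567890123456.7.azuredatabricks.net:443/default;"
            + "transportMode=http;ssl=1;"
            + "httpPath=/sql/1.0/warehouses/abcdef1234567890;"
            + "AuthMech=3";

    public static void main(String[] args) {
        System.out.println(JDBC_URL);
    }
}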
Drivers |
Complete this table to load the driver JARs needed. Click the [+] button under the table to add as many rows as needed, one row per driver JAR, then select the cell and click the [...] button at its right side to open the Module dialog box from which you can select the driver JAR to be used. The SparkJDBC42-2.6.14.1018.jar driver is used for Delta Lake databases (already listed by default). If you have installed the 8.0.1-R2023-05 Talend Studio Monthly update or a later one delivered by Talend, the databricks-jdbc-{version_number}.jar driver is used instead (already listed by default). For more information, see Importing a database driver.
Note: No migration operation is performed for Delta Lake components when the 8.0.1-R2023-05 Talend Studio Monthly update or a later one delivered by Talend is installed. In this case, you may need to update the driver and other related settings manually for existing Jobs to make sure databricks-jdbc-{version_number}.jar is used.
|
Driver Class |
Enter the class name for the specified driver between double quotation marks. For the SparkJDBC42-2.6.14.1018.jar driver, the name to be entered is com.simba.spark.jdbc.Driver (prefilled by default). If you have installed the 8.0.1-R2023-05 Talend Studio Monthly update or a later one delivered by Talend, the databricks-jdbc-{version_number}.jar driver is used and the driver class to be entered is com.databricks.client.jdbc.Driver (prefilled by default).
Note: No migration operation is performed for Delta Lake components when the 8.0.1-R2023-05 Talend Studio Monthly update or a later one delivered by Talend is installed. In this case, you may need to update the driver class and other related settings manually for existing Jobs to make sure the driver class com.databricks.client.jdbc.Driver is used.
|
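As a minimal sketch of how the driver JAR, driver class, and JDBC URL fit together outside Talend Studio, the plain-JDBC code below assumes databricks-jdbc-{version_number}.jar is on the classpath; the URL and credentials are placeholders, not working values.
import java.sql.Connection;
import java.sql.DriverManager;

public class DeltaLakeDriverClassSketch {
    public static void main(String[] args) throws Exception {
        // The driver class described above, provided by databricks-jdbc-{version_number}.jar.
        Class.forName("com.databricks.client.jdbc.Driver");

        // Placeholder URL and credentials: replace with your own connection details.
        String url = "jdbc:databricks://<server-hostname>:443/default;"
                + "transportMode=http;ssl=1;httpPath=<http-path>;AuthMech=3";
        try (Connection conn = DriverManager.getConnection(url, "token", "<personal-access-token>")) {
            System.out.println("Connected: " + !conn.isClosed());
        }
    }
}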
User Id and Password |
The database user authentication data. See section Username and password authentication at JDBC and ODBC drivers and configuration parameters for related information. To enter the password, click the [...] button next to the password field, enter the password in double quotes in the pop-up dialog box, and click OK to save the settings. |
Schema and Edit schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields.
Click Edit schema to make changes to the schema. If you make changes, the schema automatically becomes built-in.
|
Table Name |
The name of the table to be processed. |
Query Type and Query |
Specify the database query statement, paying particular attention to the proper sequencing of the fields, which must correspond to the schema definition.
|
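For instance, assuming a hypothetical three-column schema defined in the order id, name, amount, the Query field could contain a statement such as the following sketch; the table and column names are placeholders, and the SELECT list follows the same order as the schema:
"SELECT id, name, amount FROM sales_delta WHERE amount > 0"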
Guess Query |
Click this button to generate the query in the Query field based on the table name and schema you defined. |
Specify a data source alias |
Select this check box and, in the Data source alias field displayed, specify the alias of a data source created on the Talend Runtime side to use the shared connection pool defined in the data source configuration. This option works only when you deploy and run your Job in Talend Runtime. If you use the component's own DB configuration, the data source connection is closed at the end of the component. To prevent this from happening, use a shared DB connection with the data source alias specified. This property is not available when a connection component is selected from the Connection Component drop-down list. |
Die on error |
Select the check box to stop the execution of the Job when an error occurs. Clear the check box to skip any rows on error and complete the process for error-free rows. When errors are skipped, you can collect the rows on error using a connection. |
Advanced settings
Propagate QUERY's recordset |
Select this check box to propagate the result of the query to the output flow. From the use column list displayed, you need to select a column into which the query result will be inserted. This option allows the component to have a different schema from that of the preceding component. Moreover, the column that holds the query's recordset should be set to the Object type and this component is usually followed by a tParseRecordSet component. |
Use PreparedStatement |
Select this check box if you want to query the database using a prepared statement. In the Set PreparedStatement Parameters table displayed, specify the value for each parameter represented by a question mark ? in the SQL statement defined in the Query field.
For a related use case of this property, see Using PreparedStatement objects to query data. |
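To illustrate how the ? markers map to parameter values, the hedged plain-JDBC sketch below is roughly equivalent to defining two rows in the Set PreparedStatement Parameters table; the table name, column names, and values are placeholders.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PreparedStatementSketch {
    // conn is assumed to be an open connection to the Delta Lake database.
    static void query(Connection conn) throws Exception {
        String sql = "SELECT id, name, amount FROM sales_delta WHERE amount > ? AND name = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setDouble(1, 100.0);   // parameter index 1, type Double
            ps.setString(2, "ACME");  // parameter index 2, type String
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getLong("id") + " | " + rs.getString("name"));
                }
            }
        }
    }
}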
Commit every |
Specify the number of rows to be processed before committing batches of rows together into the database. This option ensures transaction quality (but not rollback) and, above all, better performance at execution. |
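As a rough sketch of the behavior this option describes, the plain-JDBC code below commits once every commitEvery statements instead of once per row; the connection, statements, and batch size are assumptions supplied by the caller, not values from this documentation.
import java.sql.Connection;
import java.sql.Statement;
import java.util.List;

public class CommitEverySketch {
    // Commits the pending rows every `commitEvery` statements, then once more at the end.
    static void run(Connection conn, List<String> sqlStatements, int commitEvery) throws Exception {
        conn.setAutoCommit(false);
        int count = 0;
        try (Statement stmt = conn.createStatement()) {
            for (String sql : sqlStatements) {
                stmt.executeUpdate(sql);
                if (++count % commitEvery == 0) {
                    conn.commit();  // flush this batch of rows to the database
                }
            }
        }
        conn.commit();              // commit any remaining rows
    }
}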
tStatCatcher Statistics |
Select this check box to gather the Job processing metadata at the Job level as well as at each component level. |
Global Variables
ERROR_MESSAGE |
The error message generated by the component when an error occurs. This is an After variable and it returns a string. |
QUERY |
The query statement being processed. This is a Flow variable and it returns a string. |
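In a Job, these variables are typically read from the globalMap, for example in a tJava component placed after this component; the instance name tDeltaLakeRow_1 below is a placeholder for your actual component name.
// Code for a tJava component; globalMap is provided by the generated Job code.
String errorMessage = (String) globalMap.get("tDeltaLakeRow_1_ERROR_MESSAGE"); // After variable
String lastQuery    = (String) globalMap.get("tDeltaLakeRow_1_QUERY");         // Flow variable
System.out.println("Last executed query: " + lastQuery);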
Usage
Usage rule |
This component offers the flexibility of the database query over a Delta Lake connection and covers all possible SQL queries. |
Dynamic settings |
Click the [+] button to add a row in the table and fill the Code field with a context variable to choose your database connection dynamically from multiple connections planned in your Job. This feature is useful when you need to access database tables having the same data structure but in different databases, especially when you are working in an environment where you cannot change your Job settings, for example when your Job has to be deployed and executed independently of Talend Studio. For examples of using dynamic parameters, see Reading data from databases through context-based dynamic connections and Reading data from different MySQL databases using dynamically loaded connection parameters. For more information on Dynamic settings and context variables, see Dynamic schema and Creating a context group and define context variables in it. A short sketch of the idea follows this table. |
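As an illustrative sketch only (all names below are hypothetical), the Code field could reference a context variable whose value is the name of the connection component to reuse, so that switching environments only changes the context value rather than the Job design:
// Hypothetical value typed in the Code field of the Dynamic settings table:
//   context.deltaLakeConnection
// Possible values of that context variable per environment:
//   deltaLakeConnection = "tDeltaLakeConnection_1"   // development context
//   deltaLakeConnection = "tDeltaLakeConnection_2"   // production context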