tDeltaLakeInput properties for Apache Spark Batch
These properties are used to configure tDeltaLakeInput running in the Spark Batch Job framework.
The Spark Batch tDeltaLakeInput component belongs to the Technical family.
The component in this framework is available in all Talend products with Big Data and Talend Data Fabric.
Basic settings
Define the source of the dataset |
Select the source of the dataset you want to use from the following options:
Metastore: Retrieves data in table format from a metastore.
Files: Retrieves data in delta format from files.
Query: Retrieves data using an SQL query. |
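For reference, here is a minimal PySpark sketch of what each option corresponds to in plain Spark code, assuming a SparkSession named spark with the Delta Lake libraries available; the Studio generates its own code, and the database, table, path, and column names below are placeholders.

    df_metastore = spark.read.table("my_database.my_table")          # Metastore: table from a metastore
    df_files = spark.read.format("delta").load("/user/talend/in")    # Files: delta-format files
    df_query = spark.sql("SELECT id FROM my_database.my_table")      # Query: an SQL query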
Define a storage configuration component |
Select the configuration component to be used to provide the configuration information for the connection to the target file system such as HDFS. If you leave this check box clear, the target file system is the local system. The configuration component to be used must be present in the same Job. For example, if you have dropped a tHDFSConfiguration component in the Job, you can select it to read data from a given HDFS system. This field is available only when you select Files from the Define the source of the dataset drop-down list in the Basic settings view. |
Property type |
Either Built-In or Repository.
Built-In: No property data stored centrally.
Repository: Select the repository file where the properties are stored. The properties are stored centrally under the Hadoop Cluster node of the Repository tree. The fields that follow are pre-filled using the fetched data. For further information about the Hadoop Cluster node, see the Getting Started Guide. |
Schema and Edit Schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields.
Built-In: You create and store the schema locally for this component only.
Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs.
Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:
View schema: select this option to view the schema only.
Change to built-in property: select this option to change the schema to Built-in for local changes.
Update repository connection: select this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion.
Note that Spark automatically infers data types for the columns in a PARQUET schema. In a Talend Job for Apache Spark, the Date type is inferred and stored as int96. |
Database |
Enter, in double quotation marks, the name of the Delta Lake database to be used. This field is available only when you select Metastore from the Define the source of the dataset drop-down list in the Basic settings view. |
Table |
Enter, in double quotation marks, the name of the table to be used. This field is available only when you select Metastore from the Define the source of the dataset drop-down list in the Basic settings view. |
Folder/File |
Browse to, or enter the path pointing to the data to be used in the file system. If the path you set points to a folder, this component reads all of the files stored in that folder, for example, /user/talend/in; if sub-folders exist, they are automatically ignored unless you set the property spark.hadoop.mapreduce.input.fileinputformat.input.dir.recursive to true in the Advanced properties table in the Spark configuration tab.
If you want to specify more than one file or directory in this field, separate the paths using a comma (,).
The button for browsing does not work with the Spark Local mode; if you are using the Spark Yarn modes that the Studio supports with your distribution, ensure that you have correctly configured the connection in a configuration component in the same Job, such as tHDFSConfiguration. Use the configuration component that corresponds to the file system to be used.
This field is available only when you select Files from the Define the source of the dataset drop-down list in the Basic settings view. |
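As an illustration, the recursive-read property mentioned above is a standard Spark/Hadoop setting; the sketch below shows it in plain PySpark, assuming the Delta Lake libraries are on the classpath (the application name and path are placeholders). In the Studio, you would instead add the same key/value pair in the Advanced properties table of the Spark configuration tab.

    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("delta_recursive_read")  # placeholder application name
             .config("spark.hadoop.mapreduce.input.fileinputformat.input.dir.recursive", "true")
             .getOrCreate())

    df = spark.read.format("delta").load("/user/talend/in")  # sub-folders are now read as well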
SQL Query | Enter the SQL query you want to use to retrieve data. This field is available only when you select Query from the Define the source of the dataset drop-down list in the Basic settings view. |
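For example, a query of the following shape could be entered in this field; Delta Lake also accepts path-based table references of the form delta.`/path`. The path and column names below are placeholders.

    # Placeholder query; adapt the table or path reference to your data.
    df = spark.sql("SELECT id, name FROM delta.`/user/talend/in` WHERE id > 10")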
Specify Time Travel timestamp |
Select this check box to read a given timestamp-defined snapshot of the datasets to be used. The timestamp format used by Delta Lake is yyyy-MM-dd HH:mm:ss. A slight difference systematically exists between the upload time of a file and the metadata timestamp of that file; bear this difference in mind when you need to filter data. |
Specify Time Travel version | Select this check box to read a versioned snapshot of the datasets to be used. |
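For reference, these two check boxes correspond to Delta Lake's standard time-travel read options; a minimal PySpark sketch follows, assuming a SparkSession named spark (the path, timestamp, and version number are placeholders).

    df_at_time = (spark.read.format("delta")
                  .option("timestampAsOf", "2021-06-01 00:00:00")  # yyyy-MM-dd HH:mm:ss
                  .load("/user/talend/in"))

    df_at_version = (spark.read.format("delta")
                     .option("versionAsOf", 5)
                     .load("/user/talend/in"))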
Usage
Usage rule |
This component is used as a start component and requires an output link. The Delta Lake layer is built on top of your Data Lake system; connect this component to your Data Lake system using the configuration component corresponding to that system, for example, tAzureFSConfiguration. |
Spark Connection |
In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files.
This connection is effective on a per-Job basis. |