
tFileInputParquet Standard properties

These properties are used to configure tFileInputParquet running in the Standard Job framework.

The Standard tFileInputParquet component belongs to the File family.

Note: If you are using a Windows platform, make sure Hadoop Winutils and the Microsoft Visual C++ 2010 Service Pack 1 Redistributable Package MFC Security Update are installed before using this component.

The component in this framework is available in all subscription-based Talend products.

Basic settings

Schema and Edit schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields.

  • Built-In: You create and store the schema locally for this component only.

  • Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs.

Click Edit schema to make changes to the schema. If you make changes, the schema automatically becomes built-in.

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion.

    If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the Repository Content window.

This component offers the advantage of the dynamic schema feature. This allows you to retrieve unknown columns from source files or to copy batches of columns from a source without mapping each column individually. For further information about dynamic schemas, see Dynamic schema.

The dynamic schema feature is designed for retrieving the unknown columns of a table and should be used for that purpose only; it is not recommended for creating tables.

File name

Name of, or path to, the input file and/or the variable to be used.

For further information about how to define and use a variable in a Job, see Using contexts and variables.
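For illustration, the File name field accepts a Java expression, so a context variable can be concatenated into the path. The sketch below simulates how such an expression resolves at run time; the variable name parquetDir and the stand-in Context class are hypothetical (in a real Job, Talend generates the context object from your context group):

```java
// Minimal sketch, assuming a hypothetical context variable "parquetDir".
// In a generated Talend Job, context variables are fields on a generated
// context object; the inner class below is only a stand-in for it.
public class FileNameExample {
    static class Context {
        // Value that would be set in the Run view or a context group.
        String parquetDir = "/data/in";
    }

    public static void main(String[] args) {
        Context context = new Context();
        // The kind of expression you could type in the File name field:
        String fileName = context.parquetDir + "/input.parquet";
        System.out.println(fileName); // an absolute path, as the warning advises
    }
}
```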

Warning: Use an absolute path (rather than a relative path) for this field to avoid possible errors.

Use external Hadoop dependencies

Select this check box to use external Hadoop dependencies, and enter the path in the File name field in the following format: "file:///path/input.parquet".

Advanced settings

tStatCatcher Statistics

Select this check box to gather the Job processing metadata at a Job level as well as at each component level.

Global Variables


NB_LINE: the number of rows read by an input component or transferred to an output component. This is an After variable and it returns an integer.

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.

FILE_PATH: the path pointing to the folder or the file being processed. This is a Flow variable and it returns a string.

A Flow variable functions during the execution of a component, while an After variable functions after the execution of the component.

To fill a field or expression with a variable, press Ctrl+Space to access the variable list and choose the variable to use from it.
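In the Java code that Talend generates, these variables are read from a shared globalMap keyed as "componentName_VARIABLE". The sketch below simulates that retrieval pattern; the component name tFileInputParquet_1 is a hypothetical example, and the HashMap here stands in for the map the Job maintains at run time:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch, assuming the retrieval pattern used in generated Talend Jobs:
// component variables live in a shared globalMap keyed as
// "<componentName>_<VARIABLE>". The map below is only a stand-in.
public class GlobalVarExample {
    public static void main(String[] args) {
        Map<String, Object> globalMap = new HashMap<>();
        // Value the Job would set after the component finishes reading.
        globalMap.put("tFileInputParquet_1_NB_LINE", 42);

        // Typical retrieval, e.g. in a tJava component placed downstream:
        Integer nbLine = (Integer) globalMap.get("tFileInputParquet_1_NB_LINE");
        System.out.println("Rows read: " + nbLine);
    }
}
```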

For more information about variables, see Using contexts and variables.

Usage

Usage rule

Use this component to retrieve data from Parquet files. This component also passes the retrieved data to the subsequent component.
