tFileInputDelimited Standard properties
These properties are used to configure tFileInputDelimited running in the Standard Job framework.
The Standard tFileInputDelimited component belongs to the File family.
The component in this framework is available in all Talend products.
Basic settings
Property type |
Either Built-In or Repository. |
|
Built-In: No property data stored centrally. |
|
Repository: Select the repository file where the properties are stored. |
Use existing dynamic |
Select this check box if you want to use an existing dynamic schema set in a tSetDynamicSchema component. |
File Name/Stream |
File name: the name and path of the file to be processed.

Stream: the data flow to be processed. The data must be added to the flow so that tFileInputDelimited can fetch it via the corresponding variable. This variable may already be pre-defined in Talend Studio, or provided by the context or by the components you are using along with this component; otherwise, you can define it manually and use it according to the design of your Job, for example using tJava or tJavaFlex. To avoid typing the variable by hand, you can select it from the auto-completion list (Ctrl+Space) to fill the current field, provided that the variable has been properly defined.

Warning: Use an absolute path (instead of a relative path) for this field to avoid possible errors.
|
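As a minimal sketch of the stream pattern described above: a tJava component placed before tFileInputDelimited can store an InputStream in globalMap, and the File Name/Stream field then reads it back with a cast. The key name "myStream" and the stand-in globalMap below are assumptions for illustration; in a real Job, globalMap is provided by the generated code.

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.util.HashMap;
import java.util.Map;

public class StreamDemo {
    // Stand-in for Talend's globalMap (a real Job provides this object).
    static Map<String, Object> globalMap = new HashMap<>();

    public static void main(String[] args) throws Exception {
        // In a tJava component, these two statements prepare the stream:
        String data = "id;name\n1;Alice\n2;Bob\n";
        globalMap.put("myStream",
                new ByteArrayInputStream(data.getBytes("UTF-8")));

        // tFileInputDelimited would then consume it via the expression
        // (java.io.InputStream) globalMap.get("myStream")
        // typed into the File Name/Stream field:
        InputStream in = (InputStream) globalMap.get("myStream");
        System.out.println((char) in.read()); // first byte of the header row
    }
}
```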
Row separator |
The separator used to identify the end of a row. |
Field separator |
Enter a character, a string, or a regular expression to separate fields in the transferred data.

Note: With CSV options selected, the field separator can only be a single character. In this case, if you enter multiple characters in this field, only the first character acts as the field separator.
|
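To illustrate what a regular-expression field separator means in practice, the sketch below splits one row on a pattern matching one or more semicolons or tabs. This is plain Java for illustration only; in the component you simply type the pattern into the Field separator field.

```java
public class SeparatorDemo {
    public static void main(String[] args) {
        // Regex separator: one or more semicolons or tab characters.
        // Consecutive separators are collapsed by the "+" quantifier.
        String row = "1;;Alice\t2000";
        String[] fields = row.split("[;\t]+");
        for (String f : fields) {
            System.out.println(f); // prints 1, Alice, 2000 on separate lines
        }
    }
}
```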
CSV options |
Select this check box to specify the following CSV parameters:
It is recommended to use the standard escape character, that is "\"; in that case, the text enclosure can be set to any other character. If you set the escape character to a character other than "\", set the same character for both Escape char and Text enclosure, because the escape character is then forced to match the text enclosure. For instance, if the escape character is set to "#" and the text enclosure to "@", the escape character actually becomes "@", not "#". |
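The recommended combination above ("\" as Escape char, with a distinct Text enclosure) can be sketched with a hand-rolled unquoting routine. This is illustrative only and is not the component's internal CSV parser; it shows how the escape character lets an enclosed field contain both the enclosure character and the field separator.

```java
public class CsvEscapeDemo {
    // Strip the surrounding text enclosure ('"') and resolve "\" escapes.
    static String unquote(String field) {
        StringBuilder out = new StringBuilder();
        String body = field.substring(1, field.length() - 1);
        for (int i = 0; i < body.length(); i++) {
            char c = body.charAt(i);
            if (c == '\\' && i + 1 < body.length()) {
                out.append(body.charAt(++i)); // keep the escaped character literally
            } else {
                out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // A field containing an escaped enclosure and a ';' field separator:
        System.out.println(unquote("\"he said \\\"hi; bye\\\"\""));
        // prints: he said "hi; bye"
    }
}
```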
Header |
Enter the number of rows to be skipped at the beginning of the file. Note that when a dynamic schema is used, the first row of the input data is always treated as the header row, whether or not the Header field value is set. For more information about dynamic schema, see Dynamic schema. |
Footer |
Number of rows to be skipped at the end of the file. |
Limit |
Maximum number of rows to be processed. If Limit = 0, no rows are read or processed. |
Schema and Edit Schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields. Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:
Note that if the input value of any non-nullable primitive field is null, the row of data including that field will be rejected.

This component offers the advantage of the dynamic schema feature. This allows you to retrieve unknown columns from source files or to copy batches of columns from a source without mapping each column individually. For further information about dynamic schemas, see Dynamic schema.

The dynamic schema feature is designed for the purpose of retrieving unknown columns of a table and is recommended to be used for this purpose only; it is not recommended for creating tables.

When using the dynamic schema feature, the dynamic column does not contain the actual column names of the input file. If you want your output flow to include the actual column names, make sure that your input file has a header row and that the Header value is set properly. |
|
Built-In: You create and store the schema locally for this component only. |
|
Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. |
Skip empty rows |
Select this check box to skip the empty rows. |
Uncompress as zip file |
Select this check box to uncompress the input file. |
Die on error |
Select this check box to stop the execution of the Job when an error occurs. Clear it to skip the rows on error and complete the process for error-free rows. When errors are skipped, you can collect the rows on error using a Row > Reject link. To catch a FileNotFoundException, you also need to select this check box. |
Advanced settings
Advanced separator (for numbers) |
Select this check box to change the separator used for numbers. By default, the thousands separator is a comma (,) and the decimal separator is a period (.). |
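To show why this option matters, the sketch below parses a European-style number whose separators are the inverse of the component's defaults ("." for thousands, "," for decimal). This uses the standard java.text API for illustration; the component applies the separators you configure in this option.

```java
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.text.ParseException;

public class NumberSeparatorDemo {
    public static void main(String[] args) throws ParseException {
        // Swap the defaults: '.' as thousands separator, ',' as decimal separator.
        DecimalFormatSymbols symbols = new DecimalFormatSymbols();
        symbols.setGroupingSeparator('.');
        symbols.setDecimalSeparator(',');
        DecimalFormat fmt = new DecimalFormat("#,##0.##", symbols);

        // "1.234,56" now parses as one thousand two hundred thirty-four point five six.
        System.out.println(fmt.parse("1.234,56").doubleValue()); // prints 1234.56
    }
}
```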
Extract lines at random |
Select this check box to set the number of lines to be extracted randomly. |
Encoding |
Select the encoding from the list or select Custom and define it manually. This field is compulsory for database data handling. The supported encodings depend on the JVM that you are using. For more information, see https://docs.oracle.com. |
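A short sketch of why the encoding setting matters: the same bytes decode to different characters under different charsets. The example below reads ISO-8859-1 bytes with the matching charset; decoding them as UTF-8 instead would corrupt the accented character.

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.InputStreamReader;
import java.nio.charset.Charset;

public class EncodingDemo {
    public static void main(String[] args) throws Exception {
        // "café" encoded as ISO-8859-1: the 'é' is a single byte (0xE9),
        // which is not a valid UTF-8 sequence on its own.
        byte[] bytes = "café".getBytes("ISO-8859-1");

        // Reading back with the correct charset restores the text intact.
        BufferedReader reader = new BufferedReader(new InputStreamReader(
                new ByteArrayInputStream(bytes), Charset.forName("ISO-8859-1")));
        System.out.println(reader.readLine()); // prints: café
    }
}
```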
Trim all column |
Select this check box to remove the leading and trailing whitespaces from all columns. When this check box is cleared, the Check column to trim table is displayed, which lets you select particular columns to trim. |
Check each row structure against schema |
Select this check box to check whether the total number of columns in each row is consistent with the schema. If not consistent, an error message will be displayed on the console. |
Check date |
Select this check box to check the date format strictly against the input schema. |
Check columns to trim |
This table is filled automatically with the schema being used. Select the check box(es) corresponding to the column(s) to be trimmed. |
Split row before field |
Select this check box to split rows before splitting fields. |
Permit hexadecimal (0xNNN) or octal (0NNNN) for numeric types - it will act the opposite for Byte |
Select this check box if any of your numeric types (long, integer, short, or byte) will be parsed from a hexadecimal or octal string. In the table that appears (it is displayed only when this check box is selected), select the check box next to the column or columns of interest to transform the input string of each selected column to the type defined in the schema. Select the Permit hexadecimal or octal check box to select all the columns. |
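The parsing behavior this option enables corresponds to Java's standard prefix-based number decoding, where "0x" marks hexadecimal and a leading "0" marks octal. A minimal sketch using the standard java.lang decode methods:

```java
public class HexOctalDemo {
    public static void main(String[] args) {
        // Integer.decode / Long.decode understand 0x... (hex) and 0... (octal) prefixes.
        System.out.println(Integer.decode("0x1A")); // hex 1A -> prints 26
        System.out.println(Integer.decode("017"));  // octal 17 -> prints 15
        System.out.println(Long.decode("0xFF"));    // hex FF -> prints 255
    }
}
```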
tStatCatcher Statistics |
Select this check box to gather the processing metadata at the Job level as well as at each component level. |
Global Variables
Global Variables |
NB_LINE: the number of rows processed. This is an After variable and it returns an integer.

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl+Space to access the variable list and choose the variable to use from it. For more information about variables, see Using contexts and variables. |
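As a sketch of how an After variable is read in practice: in generated Talend code, these values live in globalMap under keys of the form componentName_VARIABLE. The component name tFileInputDelimited_1 and the stand-in globalMap below are assumptions for illustration; a tJava placed after the component would use only the cast shown.

```java
import java.util.HashMap;
import java.util.Map;

public class GlobalVarDemo {
    // Stand-in for Talend's globalMap (a real Job provides this object).
    static Map<String, Object> globalMap = new HashMap<>();

    public static void main(String[] args) {
        // Simulate the component having run and stored its After variable:
        globalMap.put("tFileInputDelimited_1_NB_LINE", 42);

        // In a tJava placed after the component, you would write this cast:
        Integer nbLine = (Integer) globalMap.get("tFileInputDelimited_1_NB_LINE");
        System.out.println("Rows read: " + nbLine); // prints: Rows read: 42
    }
}
```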
Usage
Usage rule |
Use this component to read a file and separate the fields it contains using a defined separator. It allows you to create a data flow using a Row > Main link, or via a Row > Reject link, in which case the data that does not correspond to the defined type is filtered out. For further information, see Procedure. |
Limitation |
Due to license incompatibility, one or more JARs required to use this component are not provided. You can install the missing JARs for this particular component by clicking the Install button on the Component tab view. You can also find out and add all missing JARs easily on the Modules tab in the Integration perspective of Talend Studio. For details, see Installing external modules. |