
tFileOutputLDIF Standard properties

These properties are used to configure tFileOutputLDIF running in the Standard Job framework.

The Standard tFileOutputLDIF component belongs to the File family.

The component in this framework is available in all Talend products.

Basic settings

File Name

Specify the path to the LDIF output file.

Warning: Use an absolute path (instead of a relative path) in this field to avoid possible errors.

Wrap

Specify the maximum number of characters per output line; longer lines are wrapped onto continuation lines.
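For instance, assuming Wrap is set to 16, a line longer than 16 characters would be folded onto continuation lines, each beginning with a single space, as defined by the LDIF specification (the DN below is invented for the example):

```ldif
dn: cn=John Doe,
 dc=example,dc=c
 om
```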

Change type

Select a changetype that defines the operation you want to perform on the entries in the output LDIF file.
  • Add: the LDAP operation for adding the entry.

  • Modify: the LDAP operation for modifying the entry.

  • Delete: the LDAP operation for deleting the entry.

  • Modrdn: the LDAP operation for modifying an entry's RDN (Relative Distinguished Name).

  • Default: the default LDAP operation.
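As an illustration, the Add and Delete change types produce records along the lines of the following sketch (the DN and attributes are invented for the example; a Delete record needs only the DN):

```ldif
dn: cn=John Doe,ou=people,dc=example,dc=com
changetype: add
objectClass: inetOrgPerson
cn: John Doe
sn: Doe

dn: cn=John Doe,ou=people,dc=example,dc=com
changetype: delete
```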

Multi-Values / Modify Detail

Specify the attributes for multi-value fields when Add or Default is selected from the Change type list, or provide the detailed modification information when Modify is selected.

  • Column: The Column cells are automatically filled with the defined schema column names.

  • Operation: Select an operation to be performed on the corresponding field. This column is available only when Modify is selected from the Change type list.

  • MultiValue: Select the check box if the corresponding field is a multi-value field.

  • Separator: Specify the value separator in the corresponding multi-value field.

  • Binary: Select the check box if the corresponding field represents binary data.

  • Base64: Select the check box if the corresponding field should be base-64 encoded. In the LDIF file, base-64 encoded data is indicated by a double colon (::) between the attribute name and the value.

This table is available only when Add, Modify, or Default is selected from the Change type list.
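With Modify selected, the operations chosen in this table correspond to LDIF modify directives. A record produced this way might look like the following sketch (the DN, attributes, and values are invented; the description line shows the :: marker for a base-64 encoded value):

```ldif
dn: cn=John Doe,ou=people,dc=example,dc=com
changetype: modify
replace: telephoneNumber
telephoneNumber: +33 1 23 45 67 89
-
add: description
description:: Y2Fmw6kgbWFuYWdlcg==
-
```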

Schema and Edit schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the Repository Content window.

 

Built-In: You create and store the schema locally for this component only.

 

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs.

Sync columns

Click to synchronize the output file schema with the input file schema. This function is available only once a Row connection is linked with the output component.

Append

Select this check box to add the new rows at the end of the file.

Advanced settings

Enforce safe base 64 conversion

Select this check box to enable the safe base-64 encoding. For more detailed information about the safe base-64 encoding, see https://www.ietf.org/rfc/rfc2849.txt.
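To make the rule concrete, here is a minimal Python sketch (not Talend's implementation) of the RFC 2849 SAFE-STRING test that decides when an LDIF value must be base-64 encoded; the helper name and behavior are assumptions for illustration:

```python
# Illustrative helper: render one LDIF attribute line, base-64 encoding
# the value when RFC 2849 does not allow it as a plain SAFE-STRING.
import base64

def ldif_value(attr: str, value: str) -> str:
    data = value.encode("utf-8")
    # SAFE-INIT-CHAR: the first byte must not be NUL, LF, CR, space, ':' or '<'.
    unsafe_first = data[:1] in (b"\x00", b"\n", b"\r", b" ", b":", b"<")
    # SAFE-CHAR: no NUL, LF, CR or non-ASCII byte anywhere in the value.
    unsafe_any = any(b in (0x00, 0x0A, 0x0D) or b > 0x7F for b in data)
    if data and (unsafe_first or unsafe_any or data.endswith(b" ")):
        return f"{attr}:: {base64.b64encode(data).decode('ascii')}"
    return f"{attr}: {value}"

print(ldif_value("cn", "John Doe"))       # -> cn: John Doe
print(ldif_value("description", "café"))  # -> description:: Y2Fmw6k=
```

The non-ASCII byte in "café" forces the double-colon base-64 form, while a plain ASCII value passes through unchanged.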

Create directory if not exists

This check box is selected by default. It creates the directory that holds the output LDIF file, if it does not already exist.

Custom the flush buffer size

Select this check box to specify the number of lines to write before emptying the buffer.

Row number

Type in the number of lines to write before emptying the buffer.

This field is available only when the Custom the flush buffer size check box is selected.

Encoding

Select the encoding from the list or select Custom and define it manually. This field is compulsory for DB data handling.

Don't generate empty file

Select this check box if you do not want to generate empty files.

tStatCatcher Statistics

Select this check box to gather the Job processing metadata at a Job level as well as at each component level.

Global Variables

NB_LINE: the number of rows read by an input component or transferred to an output component. This is an After variable and it returns an integer.

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To fill a field or expression with a variable, press Ctrl + Space to open the variable list and select the variable to use.

For further information about variables, see Talend Studio User Guide.

Usage

Usage rule

This component is used to write an LDIF file with data passed on from an input component using a Row > Main connection.

Limitation

Due to license incompatibility, one or more JARs required to use this component are not provided. You can install the missing JARs for this particular component by clicking the Install button on the Component tab view. You can also find and add all missing JARs easily on the Modules tab in the Integration perspective of your studio. For more information about how to install external modules, see Installing external modules in Talend Help Center (https://help.talend.com).
