
tFileOutputXML properties for Apache Spark Streaming

These properties are used to configure tFileOutputXML running in the Spark Streaming Job framework.

The Spark Streaming tFileOutputXML component belongs to the File and the XML families.

This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.

Basic settings

Define a storage configuration component

Select the configuration component to be used to provide the configuration information for the connection to the target file system, such as HDFS.

If you leave this check box clear, the target file system is the local system.

The configuration component to be used must be present in the same Job. For example, if you have dropped a tHDFSConfiguration component in the Job, you can select it to write the result in a given HDFS system.

Property type

Either Built-In or Repository.

 

Built-In: No property data stored centrally.

Click this icon to open a connection wizard and store the connection parameters you set in the component Basic settings view.

For more information about setting up and storing database connection parameters, see Talend Studio User Guide.

 

Repository: Select the repository file where the properties are stored.

The properties are stored centrally under the Hadoop Cluster node of the Repository tree.

The fields that follow are pre-filled with the fetched data.

For further information about the Hadoop Cluster node, see the Getting Started Guide.

Row tag

Specify the tag that wraps the data and structure of each output row.
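
For example, a minimal sketch of the output structure, assuming the Row tag is set to row, the default root tag is kept, and the input schema has two hypothetical columns, id and name:

    <root>
      <row>
        <id>1</id>
        <name>Alice</name>
      </row>
    </root>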

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the Repository Content window.

 

Built-In: You create and store the schema locally for this component only.

 

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs.

Folder

Browse to, or enter, the path in the file system where the output data is to be written.

This path must point to a folder rather than a file.

The button for browsing does not work in the Spark Local mode; if you are using one of the Spark Yarn modes that the Studio supports with your distribution, ensure that you have correctly configured the connection in a configuration component in the same Job, such as tHDFSConfiguration, and use the configuration component that corresponds to the file system to be used.

Action

Select an operation for writing data:

Create: Creates the file and writes data to it.

Overwrite: Overwrites the file existing in the directory specified in the Folder field.

Compress the data

Select the Compress the data check box to compress the output data.

Advanced settings

Root tags

Specify one or more root tags to wrap the whole output file structure and data. The default root tag is root.
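
For example, a sketch of the wrapping that two root tags produce, assuming hypothetical tags catalog and items and a Row tag of row:

    <catalog>
      <items>
        <row>...</row>
        <row>...</row>
      </items>
    </catalog>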

Output format

Define the output format.

  • Column: The columns retrieved from the input schema.

  • As attribute: select the check box for the column(s) you want to use as attribute(s) of the parent element in the XML output.

Note:

If the same column is selected in both the Output format table as an attribute and in the Use dynamic grouping setting as the criterion for dynamic grouping, only the dynamic group setting will take effect for that column.

Use schema column name: By default, this check box is selected for all columns so that the column labels from the input schema are used as data wrapping tags. If you want to use a different tag from the input schema column label for any column, clear this check box for that column and specify a tag label between quotation marks in the Label field.
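
For example, a sketch of a single row, assuming a hypothetical id column with its As attribute check box selected and a name column relabeled "fullName" in the Label field:

    <row id="1">
      <fullName>Alice</fullName>
    </row>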

Use dynamic grouping

Select this check box if you want to dynamically group the output columns. Click the plus button to add one or more grouping criteria in the Group by table.

Column: Select a column you want to use as a wrapping element for the grouped output rows.

Attribute label: Enter an attribute label for the group wrapping element, between quotation marks.
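
As a rough sketch only, assuming the output rows are grouped by a hypothetical category column with the attribute label "type" (the exact element names depend on your schema and settings), the grouped output can look like this:

    <category type="fiction">
      <row>...</row>
      <row>...</row>
    </category>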

Custom encoding

Select the encoding from the list or select Custom and define it manually. This field is compulsory for database data handling. The supported encodings depend on the JVM that you are using. For more information, see https://docs.oracle.com.

Advanced separator (for numbers)

Select this check box to modify the separators used for numbers:

Thousands separator: define the separator for thousands.

Decimal separator: define the separator for decimals.
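
For example, assuming "." is set as the thousands separator, "," as the decimal separator, and the schema has a hypothetical amount column, the value 1234567.89 would be written as:

    <amount>1.234.567,89</amount>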

Write empty batches

Select this check box to allow your Spark Job to create an empty batch when the incoming batch is empty.

For further information about when this is desirable behavior, see this discussion.

Use local timezone for date

Select this check box to use the local date of the machine on which your Job is executed. If you leave this check box cleared, UTC is automatically used to format the Date-type data.

Usage

Usage rule

This component is used as an end component and requires an input link.

This component, along with the Spark Streaming component Palette it belongs to, appears only when you are creating a Spark Streaming Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Spark Connection

In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files:
  • Yarn mode (Yarn client or Yarn cluster):
    • When using Google Dataproc, specify a bucket in the Google Storage staging bucket field in the Spark configuration tab.

    • When using HDInsight, specify the blob to be used for Job deployment in the Windows Azure Storage configuration area in the Spark configuration tab.

    • When using Altus, specify the S3 bucket or the Azure Data Lake Storage for Job deployment in the Spark configuration tab.
    • When using Qubole, add a tS3Configuration to your Job to write your actual business data in the S3 system with Qubole. Without tS3Configuration, this business data is written in the Qubole HDFS system and destroyed once you shut down your cluster.
    • When using on-premises distributions, use the configuration component corresponding to the file system your cluster is using. Typically, this system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the configuration component corresponding to the file system your cluster is using, such as tHDFSConfiguration or tS3Configuration.

    If you are using Databricks without any configuration component present in your Job, your business data is written directly in DBFS (Databricks Filesystem).

This connection is effective on a per-Job basis.
