tLogRow properties for Apache Spark Streaming
These properties are used to configure tLogRow running in the Spark Streaming Job framework.
The Spark Streaming tLogRow component belongs to the Misc family.
This component is available in Talend Real-Time Big Data Platform and Talend Data Fabric.
Basic settings
Properties | Description |
---|---|
Schema and Edit schema | A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Click Edit schema to make changes to the schema. |
Sync columns | Click to synchronize the output file schema with the input file schema. The Sync function is available only when the component is linked with the preceding component using a Row connection. |
Basic | Displays the output flow in basic mode. |
Table | Displays the output flow in table cells. |
Vertical | Displays each row of the output flow as a key-value list. With this mode selected, you can choose to show the unique name of the component, its label, or both, for each output row. |
Separator (For Basic mode only) | Enter the separator that will delimit data in the log display. |
Print header (For Basic mode only) | Select this check box to include the header of the input flow in the output display. |
Print component unique name in front of each output row (For Basic mode only) | Select this check box to show the unique name of the component in front of each output row, to differentiate outputs when several tLogRow components are used. |
Print schema column name in front of each value (For Basic mode only) | Select this check box to retrieve column labels from the output schema. |
Use fixed length for values (For Basic mode only) | Select this check box to set a fixed width for the value display. |
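The Basic-mode options above only control how each row is rendered in the execution log. As a rough illustration (not the code Talend generates for tLogRow), the following standalone Java sketch shows how a separator, a printed header, a component unique name prefix, and a fixed value width could combine into one printed line; all names and values in it are hypothetical.

```java
// Hypothetical illustration only: mimics how the Basic-mode options could shape the log output.
public class LogRowSketch {

    public static void main(String[] args) {
        String[] header = {"id", "name", "amount"};
        String[] row = {"42", "Acme", "1250.5"};

        String separator = "|";          // "Separator" option
        boolean printHeader = true;      // "Print header" option
        boolean printUniqueName = true;  // "Print component unique name..." option
        int fixedLength = 10;            // "Use fixed length for values" option

        String uniqueName = "tLogRow_1"; // example component unique name

        if (printHeader) {
            System.out.println(join(header, separator, fixedLength));
        }
        String line = join(row, separator, fixedLength);
        System.out.println(printUniqueName ? uniqueName + " " + line : line);
    }

    // Pads each value to the fixed width (when > 0) and joins values with the separator.
    private static String join(String[] values, String separator, int fixedLength) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < values.length; i++) {
            if (i > 0) {
                sb.append(separator);
            }
            sb.append(fixedLength > 0
                    ? String.format("%-" + fixedLength + "s", values[i])
                    : values[i]);
        }
        return sb.toString();
    }
}
```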
Advanced settings
Properties | Description |
---|---|
Use local timezone for date | Select this check box to use the local date of the machine on which your Job is executed. If you leave this check box clear, UTC is automatically used to format Date-type data. |
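As a minimal illustration of what this check box toggles, the following Java sketch (not Talend-generated code) formats the same Date value once with the machine's local time zone and once with UTC; the output pattern is an assumption chosen for readability.

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

// Illustration only: contrasts local-time-zone formatting with UTC formatting of a Date.
public class DateFormatSketch {
    public static void main(String[] args) {
        Date now = new Date();
        String pattern = "yyyy-MM-dd HH:mm:ss"; // assumed pattern for the example

        // Uses the default time zone of the machine running the Job.
        SimpleDateFormat localFormat = new SimpleDateFormat(pattern);

        // Forces UTC, as when the check box is left clear.
        SimpleDateFormat utcFormat = new SimpleDateFormat(pattern);
        utcFormat.setTimeZone(TimeZone.getTimeZone("UTC"));

        System.out.println("Local time zone: " + localFormat.format(now));
        System.out.println("UTC:             " + utcFormat.format(now));
    }
}
```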
Usage
Usage guidance | Description |
---|---|
Usage rule | This component is used as an intermediate or an end step. This component, along with the Spark Streaming component Palette it belongs to, appears only when you are creating a Spark Streaming Job. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs. |
Spark Connection | In the Spark Configuration tab of the Run view, define the connection to a given Spark cluster for the whole Job. In addition, because the Job expects its dependent JAR files for execution, you must specify the directory in the file system to which these JAR files are transferred so that Spark can access them. This connection is effective on a per-Job basis. |
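The Spark Configuration tab handles all of this inside the Studio, so no hand-written code is required. Purely as a hedged sketch of the kind of information that tab supplies, the snippet below sets a master URL and a staged dependency JAR location on a SparkConf for a Java Spark Streaming context; the application name, master value, and JAR path are placeholders, not values produced by Talend.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

// Hypothetical sketch of a Spark Streaming connection defined outside the Studio.
// Property values below are placeholders, not Talend-generated configuration.
public class SparkConnectionSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("tLogRow_sample_job")   // placeholder application name
                .setMaster("yarn")                  // cluster chosen in the Spark Configuration tab
                .set("spark.jars", "hdfs:///user/talend/lib/job-dependencies.jar"); // staged dependency JARs

        // One streaming context per Job, matching the per-Job scope of the connection.
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(5));

        // ... define the streaming flow here, then start and await termination:
        // jssc.start();
        // jssc.awaitTermination();
    }
}
```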