In the Integration perspective of the Studio, create an empty
Spark Batch Job from the Job Designs node in
the Repository tree view.
For further information about how to create a Spark Batch Job, see the Talend for Big Data Getting Started Guide.
In the workspace, enter the name of each component to be used and select it from the list that appears. In this scenario, the components are tHDFSConfiguration, tKuduConfiguration, tFixedFlowInput, tKuduOutput, tKuduInput, and tLogRow.
The tFixedFlowInput component is used to load the sample data into the data flow. In real-world practice, you can use other components, such as tFileInputDelimited, alone or combined with a tMap, in place of tFixedFlowInput to design a more sophisticated process for preparing your data.
Connect tFixedFlowInput to tKuduOutput using the Row > Main link.
Connect tKuduInput to tLogRow using the Row > Main link.
Connect tFixedFlowInput to tKuduInput using the Trigger > OnSubjobOk link.
Leave tHDFSConfiguration and tKuduConfiguration alone without any
connection.
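Conceptually, this Job writes the fixed sample rows to Kudu and then reads them back for logging. For readers who want to see the equivalent logic outside the Studio, the following is a minimal sketch using the kudu-spark connector in Scala; it is not the code the Studio generates. The master address (kudu-master:7051), table name (users), and column names (id, name) are placeholder assumptions, the target table is assumed to already exist, and the short "kudu" format name assumes kudu-spark 1.9 or later.

    // Minimal sketch of the Job's logic with kudu-spark; all names are placeholders.
    import org.apache.kudu.spark.kudu.KuduContext
    import org.apache.spark.sql.SparkSession

    object KuduRoundTrip {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("KuduRoundTrip").getOrCreate()
        import spark.implicits._

        val kuduMaster = "kudu-master:7051"     // assumption: your Kudu master address
        val kuduContext = new KuduContext(kuduMaster, spark.sparkContext)

        // Stand-in for tFixedFlowInput: a small fixed dataset.
        val sample = Seq((1, "Griffith"), (2, "Wong")).toDF("id", "name")

        // Stand-in for tKuduOutput: insert the rows into an existing table.
        kuduContext.insertRows(sample, "users") // assumption: table name "users"

        // Stand-in for tKuduInput + tLogRow: read the table back and print it.
        val readBack = spark.read
          .format("kudu")                       // short name needs kudu-spark 1.9+
          .option("kudu.master", kuduMaster)
          .option("kudu.table", "users")
          .load()
        readBack.show()
      }
    }

In the Job itself, the connection details in this sketch are not hard-coded: tKuduConfiguration and tHDFSConfiguration supply them to the other components at runtime, which is why those two components need no connections.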