Design the data flow of the Job working with S3 and Databricks on AWS
Procedure
In the Integration perspective of the Studio, create an empty
Spark Batch Job from the Job Designs node in
the Repository tree view.
In the workspace, enter the name of each component to be used and select it from the list that appears. In this scenario, the components are tS3Configuration, tFixedFlowInput, tFileOutputParquet, tFileInputParquet, and tLogRow.
The tFixedFlowInput component is used to load the sample data into the data flow. In real-world practice, you could use the File input components, along with the processing components, to design a more sophisticated process that prepares your data for processing.
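For reference, the sample data that tFixedFlowInput injects plays the same role as an inline dataset created directly in Spark. The following PySpark sketch is only an illustration; the column names and values are hypothetical and not part of this scenario.

# Minimal sketch of inline sample data, standing in for tFixedFlowInput.
# The schema (id, name) and the rows are invented for illustration only.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sample_data_sketch").getOrCreate()

sample = spark.createDataFrame(
    [(1, "Alice"), (2, "Bob"), (3, "Chen")],
    ["id", "name"],
)
sample.show()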
Connect tFixedFlowInput to tFileOutputParquet using the Row > Main link.
Connect tFileInputParquet to tLogRow using the Row > Main link.
Connect tFixedFlowInput to tFileInputParquet using the Trigger > OnSubjobOk link.
Leave tS3Configuration unconnected to any other component.
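To make the overall data flow concrete, the following PySpark sketch mirrors what the finished Job expresses: provide S3 access details (the role of tS3Configuration), write the sample data as a Parquet file on S3 (tFixedFlowInput to tFileOutputParquet), then read the file back and print it (tFileInputParquet to tLogRow). The bucket name, credentials, and path are placeholders, not values from this scenario, and this is not the code the Studio generates.

# Rough PySpark equivalent of the Job's data flow; all S3 values are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("s3_parquet_flow_sketch")
    # Role of tS3Configuration: S3 credentials for the s3a filesystem.
    # On Databricks, access to S3 is often configured at the cluster level instead.
    .config("spark.hadoop.fs.s3a.access.key", "<your-access-key>")
    .config("spark.hadoop.fs.s3a.secret.key", "<your-secret-key>")
    .getOrCreate()
)

s3_path = "s3a://my-bucket/sample_users.parquet"  # hypothetical target path

# Role of tFixedFlowInput -> tFileOutputParquet: write the sample rows as Parquet.
sample = spark.createDataFrame([(1, "Alice"), (2, "Bob")], ["id", "name"])
sample.write.mode("overwrite").parquet(s3_path)

# Role of tFileInputParquet -> tLogRow: read the Parquet file back and print it.
spark.read.parquet(s3_path).show()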