Reading the sample data from Azure Data Lake Storage
Procedure
1. Double-click tFileInputParquet to open its Component view.
2. Select the Define a storage configuration component check box, then select the tAzureFSConfiguration component you configured in the previous steps.
3. Click the [...] button next to Edit schema to open the schema editor.
4. Click the [+] button to add the schema columns to be sent to the output flow.
5. Click OK to validate these changes and accept the propagation prompted by the pop-up dialog box.
6. In the Folder/File field, enter the name of the folder from which you need to read data. In this scenario, it is sample_user.
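A note on the path: since a tAzureFSConfiguration component is selected, the folder name is resolved against the ADLS Gen2 file system configured there, so a short name like sample_user is sufficient. For reference, the fully qualified location follows the standard ABFS(S) URI scheme; the container and storage account below are placeholders, not values from this scenario:

```text
abfss://<container>@<storage_account>.dfs.core.windows.net/sample_user
```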
7. Double-click tLogRow to open its Component view and select the Table radio button to present the result in a table.
8. Press F6 to run this Job.
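The flow configured above reads records against a defined schema and prints them as a table via tLogRow. As a rough stand-in for that last step (no ADLS or Parquet access here; the schema columns and sample rows are made up for illustration), a tLogRow-style table print can be sketched in plain Python:

```python
# Hypothetical schema columns and rows standing in for the sample_user data;
# a real Job reads them from Parquet files on ADLS via tAzureFSConfiguration.
schema = ["id", "name", "city"]
rows = [
    (1, "Ayumi", "Osaka"),
    (2, "Pedro", "Lisbon"),
]

def log_row_table(schema, rows):
    """Render rows as a fixed-width table, similar to tLogRow's Table mode."""
    cells = [schema] + [[str(v) for v in r] for r in rows]
    # Width of each column = longest cell in that column (header included).
    widths = [max(len(row[i]) for row in cells) for i in range(len(schema))]
    return "\n".join(
        "|".join(cell.ljust(w) for cell, w in zip(row, widths)) for row in cells
    )

print(log_row_table(schema, rows))
```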
Results
Once done, you can find your Job on the Job page of the Web UI of your Databricks cluster and then check its execution log.