Writing and reading data from S3 (Databricks on AWS)
In this scenario, you create a Spark Batch Job using tS3Configuration and the Parquet components to write data on S3 and then read the data from S3.
This scenario applies only to subscription-based Talend products with Big Data.
The sample data reads as follows:
01;ychen
This data contains an ID number and the user name to which that ID is assigned.
Note that the sample data is created for demonstration purposes only.
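Before building the Job, it can help to see the record structure the components will handle. The sketch below (plain Python, not the Talend Job itself) parses the semicolon-delimited sample record into its two fields; the field names `id` and `name` are illustrative assumptions, not names mandated by the scenario.

```python
# Sketch: split the sample 'ID;user name' record into its two fields.
# Field names "id" and "name" are assumptions for illustration.
sample = "01;ychen"

def parse_record(line: str) -> dict:
    """Split an 'id;name' record on the semicolon delimiter."""
    user_id, user_name = line.split(";")
    return {"id": user_id, "name": user_name}

record = parse_record(sample)
print(record)  # {'id': '01', 'name': 'ychen'}
```

In the scenario, the equivalent schema (an ID column and a name column) is defined on the components that write the data to S3 in Parquet format and read it back.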