Configuring the connection to the S3 service to be used by Spark
Procedure
Double-click tS3Configuration to open its Component view.
Spark uses this component to connect to the S3 system in which your Job writes
the actual business data. If you place neither
tS3Configuration nor any other configuration
component that supports Databricks on AWS, this business data is written to the
Databricks Filesystem (DBFS).
In the Access key and the Secret
key fields, enter the keys to be used to authenticate to
S3.
In the Bucket name field, enter the name of the bucket
and the folder in this bucket to be used to store the business data, for
example, mybucket/myfolder. This folder is created on the
fly if it does not exist, but the bucket itself must already exist at runtime.
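Under the hood, Spark reads and writes S3 through the Hadoop S3A connector, so the field values above end up as S3A properties and an s3a:// path. The sketch below illustrates that mapping; the property names are the standard Hadoop S3A keys, but the placeholder credential values and the exact wiring performed by tS3Configuration are assumptions for illustration:

```python
# Sketch of how the component's fields map to Hadoop S3A settings.
# The placeholder values and the exact mapping done by tS3Configuration
# are assumptions; the property names are standard Hadoop S3A keys.
access_key = "MY_ACCESS_KEY"        # value of the Access key field (placeholder)
secret_key = "MY_SECRET_KEY"        # value of the Secret key field (placeholder)
bucket_name = "mybucket/myfolder"   # value of the Bucket name field, per the example

s3a_properties = {
    "fs.s3a.access.key": access_key,
    "fs.s3a.secret.key": secret_key,
}

# The business data is stored under an s3a:// URI built from the field value:
# the bucket part (mybucket) must already exist, while the folder part
# (myfolder) is created on the fly.
target_uri = f"s3a://{bucket_name}"
print(target_uri)  # s3a://mybucket/myfolder
```

In a real Job these properties are applied to the Spark session's Hadoop configuration rather than kept in a plain dictionary; the dictionary here only shows which key receives which field value.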