Landing data in a data lake with a Standard, Premium, or Enterprise subscription
You can set up a Land data in data lake task to land data to the following targets:
- Amazon S3: For information on configuring a connection to Amazon S3, see Amazon S3.
- Azure Data Lake Storage: For information on configuring a connection to Azure Data Lake Storage, see Azure Data Lake Storage.
- Google Cloud Storage: For information on configuring a connection to Google Cloud Storage, see Google Cloud Storage.
For information on configuring connections to your data sources, see Setting up connections to data sources.
To set up a data lake landing task:
- In Data Integration > Projects, click Create project.
- In the New project dialog, do the following:
  - Provide a Name for your project.
  - Select the Space in which you want the project to be created.
  - Optionally, provide a Description.
  - Select Replication as the Use case.
  - Optionally, clear the Open check box if you want to create an empty project without configuring any settings.
- Click Create.
One of the following will occur:
- If the Open check box in the New project dialog was selected (the default), the project will open.
- If you cleared the Open check box in the New project dialog, the project will be added to your list of projects. You can open the project later by selecting Open from the project's menu.
- After the project opens, click Land data in data lake.
The Land data in data lake wizard opens.
- In the General tab, specify a name and description for the data lake landing task. Then click Next.
  Information note: Names containing slash (/) or backslash (\) characters are not supported.
- In the Select source connection tab, select a connection to the source data. You can optionally edit the connection settings by selecting Edit from the menu in the Actions column.
If you don't have a connection to the source data yet, create one first by clicking Create connection in the top right of the tab.
You can filter the list of connections using the filters on the left. Connections can be filtered according to source type, gateway, space, and owner. The All filters button above the connections list shows the number of current filters. You can use this button to close or open the Filters panel on the left. Currently active filters are also shown above the list of available connections.
You can also sort the list by selecting Last modified, Last created, or Alphabetical from the drop-down list on the right. Click the arrow to the right of the list to change the sorting order.
After you have selected a data source connection, optionally click Test connection in the top right of the tab (recommended), and then click Next.
- In the Select datasets tab, select tables and/or views to include in the data lake landing task. You can also use wildcards and create selection rules as described in Selecting data from a database.
  Information note: Schema names or table names containing slash (/) or backslash (\) characters are not supported.
- In the Select target connection tab, select a target from the list of available connections and then click Next. In terms of functionality, this tab is the same as the Select source connection tab described earlier.
- In the Settings tab, optionally change the following settings and then click Next.
Update method:
- Change data capture (CDC): The data lake landing task starts with a full load (during which all of the selected tables are landed). The landed data is then kept up to date using CDC (Change Data Capture) technology.
  Information note: CDC (Change Data Capture) of DDL operations is not supported. When working with Data Movement gateway, changes are captured from the source in near real-time. When working without Data Movement gateway, changes are captured according to the scheduler settings. For more information, see Scheduling tasks when working without Data Movement gateway.
- Reload: Performs a full load of the data from the selected source tables to the target platform and creates the target tables if necessary. The full load occurs automatically when the task is started, but can also be performed manually or scheduled to occur periodically as needed.
If you select Change data capture (CDC) and your data also contains tables that do not support CDC, or views, two data pipelines are created: one with all tables that support CDC, and another with all other tables and views, which uses Reload.
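The split described above can be illustrated with a small sketch. This is a conceptual illustration only, with hypothetical helper and field names, not the product's implementation: datasets that support CDC go into one pipeline, and all other tables and views fall back to a Reload pipeline.

```python
# Illustrative sketch only: partition selected datasets into the two
# pipelines described above. Function and field names are hypothetical.

def split_pipelines(datasets):
    """Return (cdc_pipeline, reload_pipeline) for the selected datasets.

    Each dataset is a dict such as:
        {"name": "orders", "type": "table", "supports_cdc": True}
    Views, and tables that do not support CDC, are landed with Reload.
    """
    cdc_pipeline = [
        d for d in datasets
        if d["type"] == "table" and d.get("supports_cdc", False)
    ]
    reload_pipeline = [d for d in datasets if d not in cdc_pipeline]
    return cdc_pipeline, reload_pipeline

datasets = [
    {"name": "orders", "type": "table", "supports_cdc": True},
    {"name": "legacy_log", "type": "table", "supports_cdc": False},
    {"name": "daily_sales", "type": "view"},
]
cdc, reload_ = split_pipelines(datasets)
print([d["name"] for d in cdc])      # ['orders']
print([d["name"] for d in reload_])  # ['legacy_log', 'daily_sales']
```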
Folder to use:
Select one of the following, according to which bucket folder you want the files to be written to:
- Default folder: The default folder format is <your-project-name>/<your-task-name>.
- Root folder: The files will be written to the bucket directly.
- Folder: Enter the folder name. The folder will be created during the data lake landing task if it does not exist.
  Information note: The folder name cannot include special characters (for example, @, #, !, and so on).
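As a rough illustration of the naming rules above, a folder name can be checked before creating the task, and the default folder path assembled from the project and task names. The allowed character set here (letters, digits, hyphen, underscore) is an assumption for the sketch; the product's exact validation may differ.

```python
import re

# Sketch of the naming rules described above. The allowed character set
# is an assumption (letters, digits, hyphen, underscore); the product's
# exact validation may differ.
FOLDER_NAME = re.compile(r"^[A-Za-z0-9_-]+$")

def default_folder(project_name: str, task_name: str) -> str:
    # Default folder format: <your-project-name>/<your-task-name>
    return f"{project_name}/{task_name}"

def is_valid_folder_name(name: str) -> bool:
    # Reject names containing special characters such as @, #, !
    return bool(FOLDER_NAME.match(name))

print(default_folder("sales-project", "land-orders"))  # sales-project/land-orders
print(is_valid_folder_name("landing_2024"))  # True
print(is_valid_folder_name("data@lake!"))    # False
```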
- In the Summary tab, a visual of the data pipeline is displayed. Choose whether to Open the <name> task or Do nothing. Then click Create.
Depending on your choice, either the task will be opened or a list of projects will be displayed.
- If you chose to open the task, the Datasets tab will show the structure and metadata of the selected data asset tables. This includes all explicitly listed tables as well as tables that match the selection rules.
If you want to add more tables from the data source, click Select source data.
- Optionally, change the task settings as described in Settings for cloud storage targets.
- You can perform transformations on the datasets, filter data, or add columns.
For more information, see Managing datasets.
- When you have added the transformations that you want, you can validate the datasets by clicking Validate datasets. If the validation fails, resolve the errors before proceeding.
For more information, see Validating and adjusting the datasets.
- When you are ready, click Prepare to catalog the landing task and prepare it for execution.
- When the data task has been prepared, click Run.
- The data lake landing task should now start. You can monitor its progress in Monitor view. For more information, see Monitoring an individual data task.
Setting load priority for datasets
You can control the load order of datasets in your data task by assigning a load priority to each dataset. This can be useful, for example, if you want to load smaller datasets before large datasets.
- Click Load priority.
- Select a load priority for each dataset.
The default load priority is Normal. Datasets will be loaded in the following order of priority:
  - Highest
  - Higher
  - High
  - Normal
  - Low
  - Lower
  - Lowest
Datasets with the same priority are loaded in no particular order.
- Click OK.
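The ordering rules above amount to sorting datasets by priority level, highest first, with no guaranteed order among datasets that share a level. The following is a minimal sketch of that ordering, not the product's scheduler; the data shapes are hypothetical.

```python
# Illustrative sketch of the load order described above.
# Priority levels, highest first; ties have no guaranteed relative order.
PRIORITY_ORDER = ["Highest", "Higher", "High", "Normal", "Low", "Lower", "Lowest"]
RANK = {level: i for i, level in enumerate(PRIORITY_ORDER)}

def load_order(datasets):
    """Sort (name, priority) pairs by priority; the default is Normal."""
    return sorted(datasets, key=lambda d: RANK[d[1] or "Normal"])

datasets = [
    ("big_fact_table", "Low"),
    ("small_lookup", "Highest"),
    ("orders", None),  # no priority set -> Normal
]
print([name for name, _ in load_order(datasets)])
# ['small_lookup', 'orders', 'big_fact_table']
```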
Refreshing metadata
In the Design view of a task, you can refresh the metadata to align with changes in the source metadata. For SaaS applications using Metadata manager, Metadata manager must be refreshed before you can refresh metadata in the data task.
- You can either:
  - Click ..., and then Refresh metadata to refresh metadata for all datasets in the task.
  - Click ... on a dataset in Datasets, and then Refresh metadata to refresh metadata for a single dataset.
You can view the status of the metadata refresh under Refresh metadata in the lower part of the screen. You can see when metadata was last refreshed by hovering the cursor over the status indicator.
- Prepare the data task to apply the changes.
When you have prepared the data task and the changes are applied, the changes are removed from Refresh metadata.
You must prepare storage tasks that consume this task to propagate the changes.
If a column is removed, a transformation with Null values is added to ensure that storage will not lose historical data.
Limitations
- If a column is renamed and a column before it is dropped in the same time slot, the change is interpreted as a rename of the dropped column, provided the two columns have the same data type and data length.
Example:
Before: a b c d
After: a c1 d
In this example, b was dropped and c was renamed to c1, and b and c have the same data type and data length.
This will be identified as a rename of b to c1 and a drop of c.
- A rename of the last column is not recognized, even if the last column was dropped and the one before it was renamed.
Example:
Before: a b c d
After: a b c1
In this example, d was dropped and c was renamed to c1.
This will be identified as a drop of c and d, and an add of c1.
- New columns are assumed to be added at the end. If columns are added in the middle with the same data type as the next column, they may be interpreted as a drop and rename.
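The limitations above can be made concrete with a small sketch. This is a conceptual reconstruction of the described behavior, not the product's actual algorithm: columns are matched by name first, unmatched old and new columns are then paired in order, a pair with identical data type and length counts as a rename, and a rename into the last column position is not recognized.

```python
# Conceptual reconstruction of the schema-change detection described
# above; NOT the product's actual algorithm. Columns are
# (name, data_type, length) tuples.

def diff_columns(before, after):
    """Classify changes between two column lists.

    Columns are matched by name first. Unmatched old and new columns
    are then paired in order: a pair with the same data type and
    length is treated as a rename, except that a rename into the
    last column position is not recognized (see the limitation above).
    """
    after_names = {name for name, *_ in after}
    before_names = {name for name, *_ in before}
    old_only = [c for c in before if c[0] not in after_names]
    new_only = [c for c in after if c[0] not in before_names]
    changes = []
    for old, new in zip(old_only, new_only):
        if old[1:] == new[1:] and new != after[-1]:
            changes.append(("rename", old[0], new[0]))
        else:
            changes.append(("drop", old[0]))
            changes.append(("add", new[0]))
    changes += [("drop", c[0]) for c in old_only[len(new_only):]]
    changes += [("add", c[0]) for c in new_only[len(old_only):]]
    return changes

before = [("a", "int", 4), ("b", "text", 20), ("c", "text", 20), ("d", "int", 4)]
# b dropped, c renamed to c1 -> misread as rename b->c1 and drop c
print(diff_columns(before, [("a", "int", 4), ("c1", "text", 20), ("d", "int", 4)]))
# d dropped, c renamed to c1 -> last-column rename not recognized:
# reported as drop c, add c1, drop d
print(diff_columns(before, [("a", "int", 4), ("b", "text", 20), ("c1", "text", 20)]))
```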
Limitations and considerations when landing data in a data lake
Transformations are subject to the following limitations:
- Transformations are not supported for columns with right-to-left languages.
- Transformations cannot be performed on columns that contain special characters (for example, #, \, /, -) in their name.
- The only supported transformation for LOB/CLOB data types is to drop the column on the target.
- Using a transformation to rename a column and then add a new column with the same name is not supported.
- Changing nullability is not supported on columns that are moved, either by changing it directly or by using a transformation rule. However, new columns created in the task are nullable by default.