
What's new?

This section describes the new and enhanced features introduced from the Replicate May 2023 initial release up to and including Replicate May 2023 SR01.

Information note: In addition to these release notes, customers who are not upgrading from the latest GA version (Replicate May 2023) are advised to review the release notes for all versions released since their current version.

Customers should also review the Replicate release notes in Qlik Community for information about the following:

  • Migration and upgrade
  • End of life/support features
  • Newly supported versions and third-party software
  • Resolved issues
  • Known issues

New endpoints and endpoint enhancements in Replicate May 2023 service release 01

This section describes the new and enhanced endpoint features.

New Confluent Cloud target endpoint

This version introduces support for replicating data from any supported source to Confluent Cloud.

Using Confluent Cloud as a target

New authentication option for Kafka target

When publishing to the Confluent Schema Registry, you can now choose to authenticate using both a certificate and a user name and password.

Schema Registry connection properties
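
For reference, the sketch below shows how the Confluent Schema Registry itself accepts a TLS client certificate together with basic (user name and password) authentication, using the confluent-kafka Python client. The URL, credentials, and certificate paths are placeholders; in Replicate, these values are supplied as Schema Registry connection properties rather than in code.

```python
from confluent_kafka.schema_registry import SchemaRegistryClient

# Placeholder URL, credentials, and certificate paths (adjust for your environment).
schema_registry = SchemaRegistryClient({
    "url": "https://psrc-example.us-east-2.aws.confluent.cloud",
    # Basic authentication: user name (or API key) and password (or secret)
    "basic.auth.user.info": "SR_USER:SR_PASSWORD",
    # Certificate authentication: client certificate and private key
    "ssl.certificate.location": "/etc/replicate/certs/client.pem",
    "ssl.key.location": "/etc/replicate/certs/client.key",
    "ssl.ca.location": "/etc/replicate/certs/ca.pem",
})

# Lists the registered subjects if both authentication methods are accepted.
print(schema_registry.get_subjects())
```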

Endpoints and endpoint enhancements first introduced in Replicate May 2023 initial release

New Google Cloud Pub/Sub target endpoint

This version introduces support for replicating data from any supported source to Google Cloud Pub/Sub.

Using Google Cloud Pub/Sub as a target

New IBM DB2 for z/OS target endpoint

This version introduces support for replicating data from any supported source to IBM DB2 for z/OS.

Using IBM DB2 for z/OS as a target

New Google Cloud AlloyDB for PostgreSQL endpoints

Google Cloud AlloyDB for PostgreSQL is now supported as a source endpoint and as a target endpoint.

Using Google Cloud AlloyDB for PostgreSQL as a source

Using Google Cloud AlloyDB for PostgreSQL as a target

SAP ODP source endpoint enhancements

SLT support

This version introduces support for replicating data from an SAP Landscape Transformation Replication Server. To facilitate this functionality, two new options have been added to the Advanced tab of the SAP ODP endpoint settings:

  • A new ODP context has been added: SAP LT Replication (SLT)
  • SLT alias: This field is shown when SAP LT Replication (SLT) is selected and must match the alias defined in the SAP configuration created in SAP LT Replication Server Cockpit (transaction LTRC).

Delta processing enhancements

This version introduces greater control over delta processing. To facilitate this, the following options have been added to the Advanced tab of the SAP ODP endpoint settings:

  • History data - This is the default mode. When this mode is selected, all data will be applied as INSERTS, thereby preserving previous record versions. When working in this mode, the following options are also available:

    • Apply original primary key - Retrieve the primary key settings from the ODP metadata.
    • Reverse summable fields - In the delta stream, before-image values of summable fields arrive with reversed signs. Enabling this setting restores the original sign and recalculates the original value, as sketched below.
  • Current data - When this mode is selected, the actual change operation (INSERT, UPDATE, or DELETE) will be performed on the target.
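
A minimal numeric sketch (not Replicate's implementation) of what reversing the sign means for a summable field such as an amount:

```python
# Hypothetical summable field "amount": the record previously held 250.00
# and is updated to 300.00.
before_image_raw = -250.00   # before-image as delivered in the ODP delta (sign reversed)
after_image      = 300.00    # after-image (new value)

# With the reversed sign, summing the images yields only the additive delta:
net_change = before_image_raw + after_image          # 50.00

# With "Reverse summable fields" enabled, the sign is restored so the
# before-image reflects the original value of the field:
before_image_restored = -before_image_raw            # 250.00
```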

Using SAP ODP as a source

IBM DB2 for z/OS source endpoint - data server client support

In previous versions, customers running Replicate on Linux needed to install the full ODBC client package to work with the IBM DB2 for z/OS source endpoint. From this version, installing just the data server client (which requires less disk space) is also supported.

Using IBM DB2 for z/OS as a source

AWS Aurora Cloud for PostgreSQL source endpoint: non-superuser support

It is now possible to work with the AWS Aurora Cloud for PostgreSQL source endpoint without the superuser role.

Using an account without the "superuser" role

Log Stream tasks

Log Stream tasks now support the Source change position (for example, SCN or LSN) Advanced Run option when working with the Oracle source endpoint.

Using the Log Stream

Endpoint proxy server enhancements

From this version, you can choose whether to use the default proxy server configured in the server settings or to provide proxy settings specific to the endpoint. This is supported with the following endpoints only:

  • Databricks Lakehouse (Delta)
  • Databricks Cloud Storage
  • Google Cloud BigQuery
  • Amazon Redshift
  • Microsoft Azure Synapse Analytics

See also: Default proxy server

Microsoft Azure Data Lake Storage (ADLS) Gen2

For target endpoints that offer the option of using Microsoft Azure Data Lake Storage (ADLS) Gen2 storage, it is now possible to choose whether the proxy server should be used to access the Staging storage, Azure Active Directory, or both.

Amazon Redshift target endpoint - AWS PrivateLink support

Customers using the Amazon Redshift target endpoint can now use AWS PrivateLink to connect to a virtual private cloud (VPC). To facilitate this functionality, a Use AWS PrivateLink option and a VPC Endpoint URL field have been added to the General tab of the Amazon Redshift target endpoint settings.

Using Amazon Redshift as a target

Support for Unity Catalog with Databricks endpoints

This version introduces support for Unity Catalog when using the Databricks Cloud Storage or Databricks Lakehouse (Delta) target endpoints.

Using Databricks Lakehouse (Delta) as a target

Using Databricks (Cloud Storage) as a target

Data type enhancements

Change to Parquet data type mapping

The data type mapping for Parquet format has been changed for the following endpoints:

  • Amazon S3
  • Microsoft Azure ADLS
  • Google Cloud Storage

The BYTES data type, which was previously mapped to FIXED_LEN_BYTE_ARRAY ($LENGTH), is now mapped to BYTE_ARRAY.

Information note: This change only affects newly created endpoints. Existing endpoints will continue to use the old mapping.
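
The difference is visible at the Parquet file level. The sketch below uses pyarrow (assumed installed, and used here purely for illustration): a variable-length binary column is written with the BYTE_ARRAY physical type, whereas a fixed-width binary column is written as FIXED_LEN_BYTE_ARRAY.

```python
import pyarrow as pa
import pyarrow.parquet as pq

# Two binary columns: variable-length (new mapping) and fixed-length (old mapping).
table = pa.table({
    "payload_new": pa.array([b"ab", b"cdef"], type=pa.binary()),     # BYTE_ARRAY
    "payload_old": pa.array([b"abcd", b"wxyz"], type=pa.binary(4)),  # FIXED_LEN_BYTE_ARRAY(4)
})
pq.write_table(table, "demo.parquet")

# The physical types appear in the file schema.
print(pq.ParquetFile("demo.parquet").schema)
```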

Multirange data type support with PostgreSQL-based source endpoints

The following multirange data types are now supported with all PostgreSQL-based source endpoints:

  • INT4MULTIRANGE
  • INT8MULTIRANGE
  • NUMMULTIRANGE
  • TSMULTIRANGE
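
For reference, the sketch below creates a source table that exercises two of these types on PostgreSQL 14 or later, using psycopg2. The connection details, table, and values are illustrative assumptions only.

```python
import psycopg2  # assumes a PostgreSQL 14+ source and the psycopg2 driver

conn = psycopg2.connect("host=pg-source dbname=sales user=replicate password=secret")  # placeholder DSN
cur = conn.cursor()

cur.execute("""
    CREATE TABLE booking_slots (
        id       integer PRIMARY KEY,
        seats    int4multirange,
        windows  tsmultirange
    )
""")
cur.execute("""
    INSERT INTO booking_slots VALUES
    (1, '{[1,5),[10,20)}', '{["2023-05-01 09:00","2023-05-01 12:00")}')
""")
conn.commit()

# Without a dedicated adapter, the driver returns multirange values as text literals.
cur.execute("SELECT seats, windows FROM booking_slots")
print(cur.fetchall())
```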

Extended Parallel Load support

Data can now be loaded to the Kafka and Amazon MSK target endpoints using Parallel Load. In Full Load replication mode, you can use Parallel Load to accelerate the replication of large tables by splitting the table into segments and loading the segments in parallel. Tables can be segmented by data ranges, by partitions, or by sub-partitions.

Parallel Load
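
Conceptually, segmenting a table by data ranges works as in the sketch below. This is an illustrative Python sketch of range-based parallel loading, not Replicate's implementation; the table name, column, and boundary values are assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical boundaries splitting the ORDERS table on ORDER_ID into three segments.
boundaries = [(None, 1_000_000), (1_000_000, 2_000_000), (2_000_000, None)]

def segment_query(bounds):
    lower, upper = bounds
    predicates = []
    if lower is not None:
        predicates.append(f"ORDER_ID >= {lower}")
    if upper is not None:
        predicates.append(f"ORDER_ID < {upper}")
    # In a real Full Load, each segment would be read from the source and
    # written to the Kafka or Amazon MSK target by its own worker.
    return "SELECT * FROM ORDERS WHERE " + " AND ".join(predicates or ["1=1"])

with ThreadPoolExecutor(max_workers=len(boundaries)) as pool:
    for statement in pool.map(segment_query, boundaries):
        print(statement)
```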

Default quote character in file-based endpoints

New endpoints will be created with double-quotes (") as the default quote character instead of an empty value. The change applies to the following target endpoints: Amazon S3, File, Microsoft ADLS, and Google Cloud Storage.
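
The effect of the quote character can be illustrated with Python's csv module (used here purely for illustration): with double-quotes as the quote character, a value that contains the delimiter is wrapped in quotes instead of breaking the row.

```python
import csv, io

buf = io.StringIO()
writer = csv.writer(buf, delimiter=",", quotechar='"', quoting=csv.QUOTE_MINIMAL)
writer.writerow([1, "Smith, John", "2023-05-01"])
print(buf.getvalue(), end="")   # 1,"Smith, John",2023-05-01
```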

Renamed IBM Netezza target endpoint

The IBM Netezza target endpoint has been renamed to Netezza Performance Server (NPS).

Microsoft Azure SQL (MS-CDC) endpoint - Geo Replica support

This version introduces support for reading events from Geo Replica.

Oracle HSM (Hardware Security Module) support

This version introduces support for reading data from both encrypted tablespaces and encrypted columns during CDC. This applies both when using Replicate Log Reader and when using Oracle LogMiner to access the redo logs.

Supported encryption methods

What's new on the server side?

This section describes the new and enhanced server-side features.

Default proxy server

Instead of needing to configure a proxy server for each endpoint, you can now configure a default proxy server that can be used by all endpoints. To facilitate this functionality, a new Default proxy server section has been added to the server settings' Endpoints tab. This is supported with the following endpoints only:

  • Databricks Lakehouse (Delta)
  • Databricks Cloud Storage
  • Google Cloud BigQuery
  • Amazon Redshift
  • Microsoft Azure Synapse Analytics

Endpoints

Support special characters in column names used in expressions

The new Support special characters in column names used in expressions option can be set globally for all tasks or individually for a specific task. The option is located in the server and task settings' Transformations and Filters tab. Enable the option to include source column names with special characters in expressions defined for a task. An example of such a column name is special#column.

Warning note: Before enabling this option, make sure to read the full description of this feature in the associated help topic. Failure to do so might result in data corruption.

Transformations and Filters
