What's new?

This section describes the new and enhanced features in Replicate May 2024 and its service packs.

Information note: In addition to these release notes, customers who are not upgrading from the latest GA version are advised to review the release notes for all versions released since their current version.

Customers should also review the Replicate release notes in Qlik Community for information about the following:

  • Migration and upgrade
  • End of life/support features
  • Deprecated versions
  • Resolved issues
  • Known issues

New endpoints and endpoint enhancements

Replicate May 2024 introduces new endpoints as well as significant enhancements to existing endpoints.

Support for replicating Always Encrypted columns from Microsoft SQL Server

A new Decrypt Always Encrypted columns option has been added to the Advanced tab of the Microsoft SQL Server source endpoint settings. When this option is selected and the correct values have been entered in the related Column master keys file and Column master keys password fields, Replicate will decrypt the column data and load it into the target table as plaintext.

For more information, see Setting advanced connection properties.
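
For context, an Always Encrypted column on the source might be defined as follows. This is a minimal sketch: the table, column, and key names are examples, and the column master key and column encryption key must already exist in the source database.

    CREATE TABLE dbo.Customers (
        CustomerID INT PRIMARY KEY,
        NationalID NVARCHAR(20) COLLATE Latin1_General_BIN2
            ENCRYPTED WITH (
                COLUMN_ENCRYPTION_KEY = CEK_Auto1,
                ENCRYPTION_TYPE = DETERMINISTIC,
                ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
            )
    );

With the new option selected, the NationalID values in this example would arrive at the target as plaintext rather than ciphertext.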

Improvements to Snowflake-based target endpoints

The following improvements apply to the Snowflake on Azure, Snowflake on AWS, and Snowflake on Google target endpoints.

Snowpipe Streaming support

This version introduces support for Snowpipe Streaming as a method for loading data into Snowflake. To facilitate this new functionality, a new Data Loading section has been added to the General tab of all Snowflake-based endpoints, with an option to choose Bulk Loading (the default loading method) or Snowpipe Streaming as the loading method. In addition, the staging settings, which previously had their own section, have been moved to the new Data Loading section and will only be shown when Bulk Loading is the selected loading method.

The main reasons to choose Snowpipe Streaming over Bulk Loading are: 

  • Less costly: As Snowpipe Streaming does not use the Snowflake warehouse, operating costs should be significantly lower, although this will depend on your specific use case.

  • Reduced latency: As the data is streamed directly to the target tables (as opposed to via staging), replication from source to target should be faster.

Performance enhancement when replicating to Snowflake targets

To improve performance when using the Bulk Loading method, customers can now adjust the number of files in each batch that is loaded from the staging storage into Snowflake. To facilitate this new functionality, the Number of files to load in a batch and Batch load timeout (seconds) fields have been added to the Advanced tab in the Snowflake endpoint settings.

Key pair authentication support for Snowflake target endpoints

This version introduces support for using Key Pair authentication to access Snowflake targets.
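
The Snowflake side of key pair authentication typically involves assigning a public key to the user that Replicate connects with. The following is a sketch only: the user name is a placeholder, and the RSA key pair is assumed to have been generated outside Snowflake (for example, with openssl), with the public key pasted without its header and footer lines.

    -- Assign the public key to the user that Replicate connects with
    ALTER USER replicate_user SET RSA_PUBLIC_KEY = 'MIIBIjANBgkqh...';

    -- Verify by checking the key fingerprint
    DESC USER replicate_user;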

Microsoft SQL Server (MS-CDC) enhancements

Timestamp and transaction ID preservation

In previous versions, when reading multiple transactions from Microsoft SQL Server (MS-CDC), Replicate would preserve the transaction ID and timestamp of the first transaction only. On the target, this gave the appearance of the records being part of a single transaction. Now, Replicate will preserve the original transaction ID and timestamp for each individual record. This benefits customers who wish to leverage the Transaction ID and Timestamp header columns in Change Tables as well as customers who wish to configure transformations with the Transaction ID and Timestamp variables.
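
As an illustration of the difference, a query such as the following will now show distinct transaction IDs and timestamps per original transaction rather than a single shared value. This is a sketch: it assumes a Change Table with the default "__ct" suffix and the optional header__transaction_id and header__timestamp header columns enabled.

    SELECT header__transaction_id,
           MIN(header__timestamp) AS first_change,
           COUNT(*)               AS changes_in_transaction
    FROM   orders__ct
    GROUP  BY header__transaction_id
    ORDER  BY first_change;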

Transaction ID bytes order

In previous versions, the bytes in the transaction ID were encoded in reverse order. From this version, the bytes will be encoded in the correct order.

Information note: Customers who would rather preserve the existing behavior can do so using internal parameters. For details, please contact Qlik Support.

Expanded support for username and password replacement to Amazon S3 and Amazon Redshift

Many organizations prefer to keep secrets in a dedicated "vault" as a means of protecting against unauthorized privileged account access, impersonation, fraud, and theft. Storing secrets in a vault also eliminates manually intensive, time-consuming, and error-prone administrative processes.

Replicate can now be configured to interface with such vaults when replicating to Amazon S3 or Amazon Redshift.

For more information, see Using external credentials.

Expanded OAuth support to Databricks (Cloud Storage) and Databricks Lakehouse (Delta)

Customers can now connect to Databricks (Cloud Storage) or Databricks Lakehouse (Delta) targets using OAuth authentication. This capability was first introduced in Replicate November 2023 Service Release 1.

Expanded Change Data Partitioning support to Databricks (Cloud Storage) with Unity Catalog

Customers replicating to Databricks (Cloud Storage) with Unity Catalog can now take advantage of the Change Data Partitioning feature. Note that when Change Data Partitioning is turned on, Replicate will not create actual partitions in Databricks. Instead, it will simulate partitions by copying the Change Table data files to subfolders.

For more information on Change Data Partitioning, see Store Changes Settings.

Support for caching SHA-2 pluggable authentication with MySQL endpoints

Replicate May 2024 introduces support for caching SHA-2 pluggable authentication (caching_sha2_password) when working with MySQL sources or targets, either on-premises or in the cloud. In previous versions, only Native Pluggable Authentication (mysql_native_password) was supported.
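
For example, a dedicated replication user created with this plugin on MySQL 8 might look as follows. This is a sketch only: the user name, host, and password are placeholders, and the actual privileges required by Replicate are listed in the endpoint prerequisites.

    CREATE USER 'replicate_user'@'%' IDENTIFIED WITH caching_sha2_password BY 'choose your own password';
    GRANT SELECT, REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'replicate_user'@'%';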

New AR_H_XACT_ID custom header

Customers can use the new AR_H_XACT_ID header in transformations. Unlike TRANSACTION_ID, which is retrieved from the physical LDF transaction log file, the XACT ID is the transaction ID available during the Microsoft SQL Server transaction itself. The ID is the Log Sequence Number (LSN) of the first record for the last distributed transaction of the server. All records in the transaction will have the same ID.

Information note: Relevant for Microsoft SQL Server, Microsoft Azure SQL Managed Instance, and Amazon RDS for SQL Server only.

For more information, see Headers.
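
For example, to expose the header on the target, you could add a computed column in the table settings' Transform tab and use the header as its expression. This is a sketch only: the output column name is your choice, and it assumes the usual $AR_H_ prefix used for header columns in the Expression Builder.

    $AR_H_XACT_ID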

Upgrading customers with existing non-sysadmin users

Upgrading customers with existing non-sysadmin users who want to use the new AR_H_XACT_ID header need to do the following:

  1. Add [Xact Id] to the SELECT statements in the [attrep].[rtm_dump_dblog] procedure. See Step 6 in Setting up a non-system user in a standalone environment for instructions.
  2. Run the following script (a query for verifying the signature appears after these steps):

    USE [master]
    GO
    ADD SIGNATURE
        TO [master].[attrep].[rtm_dump_dblog]
        BY CERTIFICATE [attrep_rtm_dump_dblog_cert]
        WITH PASSWORD = 'choose your own password';
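
To confirm that the signature was added, you can query the catalog views in the master database. This is a sketch, assuming the procedure and certificate names shown above:

    SELECT c.name AS certificate_name, cp.crypt_type_desc
    FROM sys.crypt_properties cp
    JOIN sys.certificates c ON c.thumbprint = cp.thumbprint
    WHERE cp.major_id = OBJECT_ID('attrep.rtm_dump_dblog');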

Support for capturing changes from PostgreSQL partitioned tables

This version introduces support for capturing changes from PostgreSQL partitioned tables. To facilitate the new functionality, a new Support partitioned tables in CDC option has been added to the Advanced tab of all PostgreSQL-based source endpoint settings. When this option is not selected (the default), capturing changes from a partitioned source table requires adding all of the associated child tables to the task, which creates a separate table on the target for each child table (partition).

When this option is selected, only the partitioned table needs to be added to the task (without any child tables). In this case, a single non-partitioned table will be created on the target for each partitioned table.

Information note
  • Requires PostgreSQL 13 or later
  • UPDATEs to a partitioned source table will be applied as INSERTs and DELETEs to the target table.
  • When this option is selected, the following DDLs are not supported:

    • Drop partition
    • Detach partition
    • Attach partition - with data
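
For context, a minimal partitioned source table might look as follows (names and ranges are examples). With the new option selected, only the parent table "orders" needs to be added to the task, and a single non-partitioned "orders" table is created on the target:

    CREATE TABLE orders (
        order_id   bigint        NOT NULL,
        order_date date          NOT NULL,
        amount     numeric(10,2)
    ) PARTITION BY RANGE (order_date);

    CREATE TABLE orders_2024 PARTITION OF orders
        FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');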

Support for accessing Amazon Kinesis data streams target via AWS PrivateLink

Customers can now select the new Use AWS PrivateLink option and specify the VPC Endpoint URL to connect to an Amazon VPC.

For more information, see Setting general connection properties.

Support for replicating 4-byte emoji characters to Teradata

Customers can now replicate data with 4-byte emoji characters to a Teradata target.

Performance improvements when capturing changes from SAP HANA

The SAP HANA source endpoint supports both trigger-based and log-based change capture. This version introduces a new Commit Timestamp (CTS) option when working in trigger-based mode. This is the recommended way of working as it offers better performance. If you are configuring the SAP HANA source endpoint for the first time, there is no reason at all to choose Log Table mode, which exists solely for backward compatibility. However, if you upgraded from a Replicate version configured with the deprecated Use log table option, Log Table mode will be automatically selected to prevent tasks from failing. You can then switch to Commit Timestamp (CTS) mode as described in the help.

For more information, see Setting advanced properties.

Amazon Redshift target endpoint mapping changes

Starting from this version, the Replicate BYTES and BLOB data types will be mapped to the Amazon Redshift VARBYTE data type instead of VARCHAR.

Newly certified platforms, endpoints and versions

Support for Red Hat Enterprise Linux 9.x

You can now install Replicate on Red Hat Enterprise Linux 9.x (64-bit). This capability was first introduced in Replicate November 2023 Service Release 1.

Source endpoints

  • PostgreSQL 16.x
  • MongoDB 7.x
  • MySQL 8.1
  • IBM DB2 for z/OS 3.1

Target endpoints

  • PostgreSQL 16.x
  • MySQL 8.1
  • IBM DB2 for z/OS 3.1
  • Databricks 14.3 LTS

Drivers

  • SQL Server ODBC Driver 18.3

Newly certified sources

  • Teradata Vantage - Supported via the Teradata source endpoint
  • Amazon RDS for SQL Server - Supported via the Microsoft SQL Server (CDC) source endpoint

Server-side enhancements

This section describes the Replicate server-side enhancements.

FIPS compliance

Replicate May 2024 re-introduces FIPS compliance, which was not available in the previous two Replicate versions. Previously, customers who required FIPS compliance needed to install a special FIPS-compliant Replicate kit. From this version, FIPS compliance is part of the standard Replicate setup.

For more information on prerequisites and supported endpoints, see FIPS compliance.

Control table enhancements

Customers can now configure Replicate to insert records into the attrep_status control table instead of updating existing records. This is especially useful for preventing table locks on targets such as Snowflake that limit the number of concurrent UPDATE operations on the same table. In addition, customers can now specify how often to update the attrep_status control table, a capability that already exists for the attrep_history table. To accommodate this new functionality, the Replication history time slot (minutes) option has been moved to the new Update Every column in the Control Tables Selection list.
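
When insert mode is used, monitoring queries need to read the most recent record per task rather than a single row. The following is a sketch, assuming the documented SERVER_NAME, TASK_NAME, and STATUS_TIME columns of attrep_status:

    SELECT *
    FROM   attrep_status s
    WHERE  s.status_time = (SELECT MAX(status_time)
                            FROM   attrep_status
                            WHERE  server_name = s.server_name
                            AND    task_name   = s.task_name);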

For more information, see Control tables.

Improved primary key management

Replicate relies on primary key columns (or indexes) defined in the target database to be able to correctly apply changes to the target tables.

In some cases, you might want to define additional primary key columns on the target table and arrange them in a specific order. For example, if the target table consolidates data from multiple sources, there might be a need to define additional columns as primary keys and to set them in a specific order. In the past, when replicating to an existing target table, Replicate would ignore the primary key columns defined in the task and apply changes using the columns in the target table's primary key. Now, Replicate will adhere to the table's primary key definition in the task, allowing you to designate additional or different primary key columns on the target table (via a transformation) for use in the apply process. This improvement also allows customers to determine the order of the primary key columns in the target table, which might be needed for better performance.

Information note: When upgrading, to preserve the behavior of existing tasks (as opposed to new tasks), this improvement is turned off by default. To turn it on for existing tasks, after upgrading, open the task settings and either delete the use_manipulation_pk_for_apply parameter from the More Options tab or set the value to Off.

For more information on this feature, see Using the Transform tab.
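
For context, after designating additional primary key columns and their order via a transformation, the resulting target table might be defined along these lines. This is a sketch with hypothetical names; the column order in the key is what the improvement lets you control:

    CREATE TABLE dw.orders (
        tenant_id BIGINT NOT NULL,
        order_id  BIGINT NOT NULL,
        status    VARCHAR(20),
        PRIMARY KEY (tenant_id, order_id)
    );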

Log stream tasks

To perform INSERTs on the target when UPDATEs are not possible (for example, due to a missing target record), or when the associated replication task is configured to use Batch Optimized Apply mode (in which case DELETE + INSERT operations are performed), Replicate needs to retrieve all of the source table columns. To facilitate this new functionality, a new Retrieve all source columns on UPDATE option has been added to the task settings' Change Processing Tuning tab. This requires the source DBA to enable full logging (sometimes referred to as "supplemental logging") on all of the source table columns.
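
For example, on an Oracle source, full logging of all columns for a given table is typically enabled with a statement along the following lines (the schema and table names are placeholders; consult your source endpoint's prerequisites for the exact requirements):

    ALTER TABLE hr.employees ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;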

For more information, see Change Processing Tuning.
