
What's new?

This section describes the new and enhanced features in Replicate November 2024 and its service packs.

Information note: In addition to these release notes, customers who are not upgrading from the latest GA version are advised to review the release notes for all versions released since their current version.

Customers should also review the Replicate release notes in Qlik Community for information about the following:

  • Migration and upgrade
  • End of life/support features
  • Deprecated versions
  • Resolved issues
  • Known issues

New and enhanced endpoints

New SAP OData source endpoint

In this release, we are introducing a new endpoint for customers to use when sourcing data from SAP applications. SAP has recently placed restrictions on the technologies its customers can use to source and extract data from SAP applications. Many of these techniques are supported by Qlik solutions but are now falling out of favor with our customers because of these changes. While Qlik will continue to support all existing source endpoints, we have introduced an "SAP compliant" endpoint based on SAP's direction for customers and ISVs. The new SAP OData source endpoint uses a secured web service connection to extract data from SAP applications.

See also: Using SAP OData as a source

New Oracle XStream source endpoint

We are pleased to announce the new Oracle XStream source endpoint, which interfaces directly with the Oracle XStream API. The new endpoint offers several improvements for extracting data from the Oracle redo logs such as better performance, increased reliability, simplified maintenance, and future-proofing with later Oracle versions.

See also: Using Oracle XStream as a source

IBM DB2 for z/OS target endpoint enhancements

This version introduces support for:

  • Specifying a High-level qualifier (HLQ) to be the first segment of the target dataset names
  • Allocating space for the target datasets
  • Overriding the default z/OS database
  • Connecting to the DB2 database server via SSL

See also: Using IBM DB2 for z/OS as a target

Databricks target endpoint features and changes

Support for Databricks Volumes as a staging area

When the Databricks (Delta) target endpoint is configured to create tables in Unity Catalog, it is now possible to stage the files on a Databricks Volume. Using a Volume for staging is a convenient alternative to other staging methods as it does not require Replicate to access external storage (such as an Amazon S3 bucket).

See also: Prerequisites and Setting general connection properties.

Required driver version when using Databricks (Delta) or Databricks (Storage) target endpoints

When using Databricks (Delta) or Databricks (Storage) target endpoints, Simba Spark ODBC Driver 2.8.2 or later is now required.
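On Linux, one way to confirm which Simba Spark ODBC driver is registered with the driver manager is to query unixODBC. This is a hedged sketch: the driver name shown is the one Simba typically registers in /etc/odbcinst.ini, and the package name may differ on your system.

```shell
# List all ODBC drivers registered with unixODBC:
odbcinst -q -d

# Show the configuration of the Simba Spark driver
# (driver name as typically registered; yours may differ):
odbcinst -q -d -n "Simba Spark ODBC Driver"

# The driver version itself can be checked via the installed package,
# for example (package name may vary by distribution):
rpm -q simbaspark 2>/dev/null || dpkg -l | grep -i simba
```

If the reported version is earlier than 2.8.2, upgrade the driver before using the Databricks target endpoints.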

Microsoft Azure Database for PostgreSQL target endpoint: New authentication methods

This version introduces support for the following authentication methods:

  • Azure Active Directory Service Principal
  • Azure Active Directory Managed Identity

See also: Using Microsoft Azure Database for PostgreSQL as a target

Snowflake-based target endpoints: Snowpipe streaming enhancements

This version introduces the following enhancements when using the Snowpipe Streaming loading method:

  • Support for OAuth authentication
  • Support for replicating 4-byte emoji characters
  • Support for replicating character data types with NULL values in the string
  • Changing the proxy settings no longer requires the Replicate services to be restarted

PostgreSQL failover certification

Working with a secondary database after failover has been certified for the following PostgreSQL-based source endpoints:

  • Google Cloud SQL for PostgreSQL
  • Amazon RDS for PostgreSQL
  • PostgreSQL (on-premises)

See also: Setting up failover (the procedure is identical for all supported endpoints)

MySQL performance improvement

In previous versions, when using a MySQL-based source endpoint in a task that was configured with limited LOB size, Replicate would use source lookup to read the LOB columns. Now, Replicate will read the LOB columns directly from binlog, thereby improving performance.

Information note: This improvement does not apply to the JSON data type.

Data type support and mapping changes

Mapping changes

From Replicate November 2024, mappings to LOB columns have been changed for the Amazon Redshift and Snowflake-based target endpoints.

Amazon Redshift target endpoint:

  • BLOB is now mapped to VARBYTE(16777216)
  • NCLOB is now mapped to NVARCHAR(65535)
  • CLOB is now mapped to NVARCHAR(65535)

Snowflake-based target endpoints:

  • BLOB is now mapped to BINARY(8388608)
  • NCLOB is now mapped to NVARCHAR(16777216)
  • CLOB is now mapped to VARCHAR(16777216)

Newly supported data types

  • IBM DB2 for LUW source endpoint: The BOOLEAN data type is now supported (from DB2 for LUW 11.5).
  • Google BigQuery target endpoint: The BIGNUMERIC data type is now supported.
  • The anyType data type is now supported.

Expanded target support for the DDL History control table

The DDL History control table is now supported with the following target endpoints:

  • Amazon Kinesis Data Streams
  • Amazon MSK
  • Amazon Redshift
  • Amazon S3
  • File
  • Google Cloud Storage
  • Kafka
  • Microsoft Azure ADLS
  • Microsoft Azure Event Hubs
  • Snowflake on AWS
  • Snowflake on Azure

See also: DDL history

Support for using a non-superuser with Google Cloud SQL for PostgreSQL

From this version, it is possible to specify a non-superuser account when replicating from Google Cloud SQL for PostgreSQL.

See also: Using an account without the "superuser" role

Newly supported database versions

  • Databricks (Cloud Storage) 15.4 LTS
  • Databricks Lakehouse (Delta) 15.4 LTS
  • Oracle 23ai source and target

    Information note
    • Oracle 23ai source is supported with TDE encryption only.
    • Oracle 23ai source and target are certified with Oracle Standard Edition only.

Server-side enhancements

This section describes the Replicate server-side enhancements.

Scheduling enhancements

This version introduces two new scheduling options:

  • Monthly: Lets you schedule tasks to run:
    • On the <nth> day of every month, and at the specified time

      -OR-

    • On the <nth> <weekday> of every month, and at the specified time
  • Every: Lets you schedule tasks to run at regular intervals, starting on a specific date and time.

See also: Scheduling jobs

Support for turning off FIPS mode in Replicate

With a standard installation, if the machine on which Replicate is installed is running in FIPS mode, Replicate will also be installed in FIPS mode. However, if you need to use endpoints that are not supported when Replicate is running in FIPS mode, then you can turn off FIPS mode in Replicate.

See also: Turning off FIPS mode in Replicate

Enhancements when installing Replicate on Linux

The following changes provide useful information and offer greater control when installing Replicate on Linux.

Creating the default or specified user without a login shell

During installation, the RPM creates the (default or specified) user with the default login shell. The user can now be created with nologin as the login shell (assuming the nologin command exists on the system) by specifying nologin=true when installing the RPM.
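The installer reads such options as environment-style variables placed before the rpm command. A minimal sketch, assuming the default service user is attunity; the package file name is a placeholder for the RPM you actually downloaded:

```shell
# Install Replicate, creating the service user without a login shell.
# "areplicate-<version>.x86_64.rpm" is a placeholder for the actual package file.
nologin=true rpm -ivh areplicate-<version>.x86_64.rpm

# Afterwards, confirm the user's shell (default user name assumed here):
getent passwd attunity   # the shell field should end in .../nologin
```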

Verify the installation package

The verify command can be used to compare information about the installed files in the package with information about the files taken from the package metadata stored in the RPM database. Among other things, verifying compares the size, digest, permissions, type, owner and group of each file. Any discrepancies are displayed.
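With rpm, this verification can be run against the installed package (areplicate is the name Qlik uses for the Replicate RPM):

```shell
# Compare installed files with the package metadata stored in the RPM database.
# Only files with discrepancies (size, digest, permissions, owner, ...) are listed.
rpm -V areplicate

# Verbose mode also lists files that passed verification:
rpm -Vv areplicate
```

An empty output from `rpm -V` means no discrepancies were found.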

Reviewing the changelog

The changelog command allows you to review the changelog for the new version.
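With rpm, the changelog can be displayed for either the installed package or a downloaded package file (placeholder file name):

```shell
# Changelog of the installed package:
rpm -q --changelog areplicate | less

# Changelog of a package file before installing it:
rpm -qp --changelog areplicate-<version>.x86_64.rpm
```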

Replacing /etc/init.d with systemd

Replicate now leverages systemd to create and manage Replicate services, replacing the old /etc/init.d mechanism.

For systems without systemd, RPM installation will fail. A passive installation of Replicate (the files are installed, but no services are created and no processes are run) is possible by specifying systemd=no when installing Replicate.

Information note: A passive installation cannot be upgraded.
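A sketch of the two installation modes, with placeholder package name; the service name areplicate is an assumption based on the Replicate service user and binary naming:

```shell
# Passive installation on a machine without systemd:
# files are installed, but no services are created and nothing is started.
systemd=no rpm -ivh areplicate-<version>.x86_64.rpm

# On a systemd machine with a normal installation, the Replicate service
# is managed through systemctl (service name assumed):
sudo systemctl status areplicate
```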

See also: Linux installation prerequisites and procedures
