
Upgrading and uninstalling Data Movement gateway

This topic explains how to upgrade or uninstall your Data Movement gateway installation. It also provides a version history listing the changes introduced in each Data Movement gateway version.

Upgrade procedure

To verify your current version, go to Administration > Data gateways and check the Version column corresponding to your Data Movement gateway.

If a new version is available, the version number is appended with an exclamation mark (!). You can hover over it to get more information. If the installed gateway version is not supported, Status will be Deactivated, and an upgrade is required to activate the gateway.

Whenever a new version of the Data Movement gateway RPM becomes available, you should download it from Administration and upgrade the existing installation.

To do this:

  1. Download the new version by clicking on the gateway and then Upgrade.

    Acknowledge the customer agreement, and proceed to download the RPM.

  2. Open a shell prompt and change the working directory to the directory containing the RPM file.
  3. Run the following command:

    Syntax:

    rpm -U <rpm name>

    Example:

    sudo rpm -U qlik-data-gateway-data-movement.rpm

  4. Start the Data Movement gateway service:

    sudo systemctl start repagent

  5. Optionally, confirm that the service has started:

    sudo systemctl status repagent

    The status should be as follows:

    Active: active (running) since <timestamp> ago
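The whole procedure can be scripted. Here is a minimal sketch, assuming the RPM has already been downloaded to the working directory under the example file name used above, and that the shell user has sudo privileges:

    #!/bin/bash
    # Upgrade the Data Movement gateway RPM and verify the service comes back up.
    set -euo pipefail

    sudo rpm -U qlik-data-gateway-data-movement.rpm
    sudo systemctl start repagent

    # Fail loudly if the service is not active.
    if systemctl is-active --quiet repagent; then
        echo "Data Movement gateway service is running."
    else
        echo "Service did not start; run: sudo systemctl status repagent" >&2
        exit 1
    fi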

Data Movement gateway version history

Each entry below lists a version, its release date, its significant changes, and its end of support date.

2024.5.28 (released November 12, 2024)

Provides functionality required for the soon-to-be-released Schema Evolution feature.

End of support: Determined when the next major version is released
2024.5.27 (released November 5, 2024)

This release resolves the following issues:

  • When using redo event 11.22, missing INSERTs would occur when processing multiple INSERTs on a compressed page that was not compressed prior to the INSERTs.
  • In rare scenarios, incorrect parsing of DELETE events in the redo log record would generate a "The Redo Log DELETE event contains an unknown structure" warning followed by various issues.

The instructions in the YAML file were updated to reflect the correct version of the SAP Java Connector.

When using Data Movement Gateway to connect to Snowflake target via a proxy, the connection would fail with the following error:

500 Failed to connect to Data Movement Gateway

End of support: Determined when the next major version is released
2024.5.22 (released October 15, 2024)

This release resolves an issue in a Full Load + CDC replication task, where the "Data is updated to" field for the CDC task would show the Full Load timestamp instead of the CDC timestamp(s).

End of support: Determined when the next major version is released
2024.5.16 (released October 8, 2024)

This release resolves the following issues:
  • Missing INSERTs would sometimes occur in multiple INSERT operations when using redo event 11.22.

  • After upgrading Oracle 19c to the July 2024 patch, UPDATE operations would sometimes not be captured, and the following warning would be shown:

    A compressed row cannot be parsed

  • When the task settings were configured to create the control table schema, the task would fail with the following error:

    Failed to delete directory

  • When a task was scheduled to run periodically, it would sometimes fail with the following error:

    The task stopped abnormally

  • Transformation and storage tasks would sometimes remain in a Queued state for an excessively long time.

  • Tasks would fail when using the use_manipulation_pk_for_apply feature flag with Store Changes replication.

  • Extended the S3 timeout to 6 hours to prevent issues resulting from prolonged timeouts, such as losing the token needed to download the files.

End of support: Determined when the next major version is released
2024.5.14 (released September 10, 2024)
  • Key pair authentication provides a more robust authentication method than user/password for connecting to Snowflake with your service accounts. This approach is recommended for workloads such as data loading (replication or landing tasks) and transformations. A key generation sketch follows this list.

  • In previous versions, refreshing the metadata on an existing dataset or a newly added dataset would sometimes fail with an error. This enhancement ensures that metadata can be retrieved from multiple tables in parallel without any issue.

  • When a source table contained a column with a DECIMAL data type - for example, DECIMAL (38, 20) - preparing the storage task on Google BigQuery would fail with the following error (excerpt):

    Column <n> in <table name> has incompatible types: STRING, BIGNUMERIC at [93:1]

    The issue was resolved by mapping the source DECIMAL data type to DECIMAL in Google BigQuery.

  • After making changes to an existing schema rule in a data task, the following error would occur:

    QRI SQL error not implemented

  • When preparing a landing task which connected to a SAP Application source, the task would complete successfully, but the following error would be reported in the repsrv.log log file:

    Invalid object name 'hk1./QTQVC/QRI'
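As a minimal illustration of setting up key pair authentication, the following openssl commands generate an RSA key pair in the PKCS#8 format that Snowflake accepts. The file names and the unencrypted key (-nocrypt) are illustrative assumptions; in production you would typically encrypt the private key:

    # Generate a 2048-bit private key in PKCS#8 PEM format (unencrypted for brevity).
    openssl genrsa 2048 | openssl pkcs8 -topk8 -inform PEM -out rsa_key.p8 -nocrypt

    # Derive the public key to register with the Snowflake service account.
    openssl rsa -in rsa_key.p8 -pubout -out rsa_key.pub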

End of support: Determined when the next major version is released
2024.5.7 (released August 6, 2024)
  • Previously, when a metadata change occurred, all tables would be dropped and recreated even if the metadata change did not affect all tables. Now, only the changed tables will be dropped and recreated, thereby improving performance.

  • Tables created in the source database during the replication data task that match the include pattern will now be automatically captured during CDC (change data capture).

  • Previously, when reading multiple transactions from Microsoft SQL Server (MS-CDC), Qlik Talend Data Integration would preserve the transaction ID and timestamp of the first transaction only. On the target, this gave the appearance of the records being part of a single transaction. Now, Qlik Talend Data Integration will preserve the original transaction ID and timestamp for each individual record. This benefits customers who wish to leverage the Transaction ID and Timestamp header columns in Change tables.

  • Previously, the bytes in the transaction ID were encoded in reverse order. From this version, the bytes will be encoded in the correct order. Information note: Customers who would rather preserve the existing behavior can do so using internal parameters. For details, please contact Qlik Support.

  • Qlik Talend Data Integration now supports caching SHA-2 pluggable authentication (caching_sha2_password) when working with MySQL sources or targets, either on-premises or in the cloud. In previous versions, only Native Pluggable Authentication (mysql_native_password) was supported. A verification sketch follows this list.

  • BYTES and BLOB data types will now be mapped to VARBYTE on Amazon Redshift instead of VARCHAR.

Newly supported data source versions:

  • PostgreSQL 16.x
  • MySQL 8.1
  • IBM DB2 for z/OS 3.1

Newly supported target data platform versions:

  • PostgreSQL 16.x
  • MySQL 8.1
  • Databricks 14.3 LTS

Newly supported driver version:

  • SQL Server ODBC Driver 18.3
Support for the following database versions has been discontinued:

  • All Oracle versions and drivers earlier than Oracle 19.x
  • Microsoft SQL Server 2014
  • MySQL 5.7
  • PostgreSQL 11
  • IBM DB2 for LUW 10.5
  • IBM DB2 for z/OS: z/OS 2.3
This release resolves the following issues:

  • When a captured cluster document change deleted all rows of all its captured tables, a missing DELETE operation and unnecessary assertion messages would be encountered.

  • Updated Microsoft Authentication Library for Java (MSAL4J) and Bouncy Castle to versions without known vulnerabilities.

  • The task would sometimes fail when using Snowflake internal storage.
  • The task would fail when the target schema name was Japanese Katakana.
  • When resuming a task with an Oracle source, the task would continue to wait for a deleted Archived Redo Log instead of failing with an appropriate error.
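To illustrate the MySQL authentication change mentioned above, the following sketch checks which authentication plugin a replication user is configured with; the host and user names are placeholders, not values from this release:

    # List the authentication plugin configured for a (hypothetical) replication user.
    mysql -h mysql.example.com -u admin -p -e \
      "SELECT user, host, plugin FROM mysql.user WHERE user = 'repl_user';"
    # A plugin value of caching_sha2_password is now supported;
    # mysql_native_password continues to work as before.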

End of support: Determined when the next major version is released
2023.11.23 (released June 26, 2024)
  • From this version, the Snowflake metadata schema (for staged files) will be created if it does not exist.

  • After a change was made to a Rename dataset rule (that concatenated the table name with the schema name) and a View was added to the landing task, the Prepare operation would fail with the following error:

    QRI SQL error not implemented.

  • Updated the java_file_factory component to a version without any known vulnerabilities.
  • Updated org.postgresql:postgresql to a version without any known vulnerabilities.
  • When moving data to SQL Server, the Prepare operation would fail when index names exceeded 128 characters. The issue was resolved by altering the logic to create shorter index names.
  • When moving data from SQL Server, a missing column name in the table definitions would cause an infinite notification loop in the repsrv.log file, with the following message:

    mssql_resolve_sqlserver_table_column_attributes(...) failed to find column

End of support: February 6, 2025

2023.11.11 (released May 21, 2024)
  • Added support for concurrently retrieving the metadata for multiple tables.

  • The monitoring information for landing and replication tasks will now be updated every 10 seconds (instead of every 60 seconds), providing a more precise indication of their current status.

  • Japan is now supported as a Qlik Cloud tenant region.

This release resolves the following issues:

  • The retry interval between the data gateway and Qlik Cloud would continually increase, but never be reset (unless the service was restarted).
  • When moving data from a SaaS application source, tables would sometimes enter an error state during reload.
  • Tasks with a MySQL source would sometimes fail with the following error during CDC:

    Read next binary log event failed; mariadb_rpl_fetch error 0 Error reading binary log.

  • Previously, CDC audit events would only be logged for landing tasks. Now, they will be logged for replication tasks as well.
  • When moving data from SQL Server (MS-CDC), tasks with numerous tables would sometimes take several hours to start.
  • When the source table contained CLOB columns and the "Limit LOB size" value exceeded 10240, replication to Snowflake would fail with the following error:

    Invalid character length: 0

End of support: Determined when the next major version is released
2023.11.4 (released March 12, 2024)

Customers can now install Data Movement gateway on Red Hat 9.x or on any corresponding and compatible Linux distribution.

For more information, see Setting up Data Movement gateway.

The commands for stopping, starting, and checking the status of the Data Movement gateway service have changed. For details, see Data Movement gateway service commands.

  • The BOOLEAN data type, which was mapped to VARCHAR(1) in Amazon Redshift, will now be mapped to BOOLEAN.
  • The BYTES and BLOB data types, which were mapped to VARCHAR(1) in Amazon Redshift, will now be mapped to VARBINARY (length).

This section lists the newly supported databases, database versions, and driver versions.

  • Newly supported data source versions and editions

    The following data source versions are now supported:

    • Azure Database for MySQL - Flexible Server (Supported via the MySQL source connector)
    • MariaDB 10.4 - 10.11 (previously 10.4 and 10.5)
  • Newly supported target data platforms and editions

    The following data target versions are now supported:

    • Azure Database for MySQL - Flexible Server (Supported via the MySQL target connector)
    • Databricks: Databricks 13.3 LTS and Serverless SQL Warehouse
  • Newly supported SAP HANA driver version

    Customers with a SAP HANA source who want to install Data Movement gateway on Red Hat Linux 9.x, must install SAP HANA ODBC 64-bit Driver version 2.0.19 or later.

This section provides information about end-of-support database versions.

  • Support for the following data source versions has been discontinued:

    • Oracle 11.x
    • SAP HANA 1.0
This release resolves the following issues:

  • Installing Data Movement gateway without providing a server password would not allow the tenant and proxy URLs to be configured in one command.
  • Moving data from a Salesforce (SaaS application) data source would print a large number of redundant warnings, thereby impacting data loading performance.
  • When retrieving changes for a SaaS application data source, if an error occurred retrieving changes for one of the tables, that table would be suspended and removed from the pipeline. Now, when encountering an error, the task will try to retrieve the changes up to three times before suspending the table.
End of support: Determined when the next major version is released
2023.5.16 (released January 9, 2024)

We are continuing to expand the supported targets for the Replication project in Qlik Cloud Data Integration. In addition to Amazon S3, you can now choose Azure Data Lake Storage (ADLS) and Google Cloud Storage (GCS) for data lake delivery, in Parquet, JSON, or CSV file formats.

This release resolves the following issues:

  • The connection to IBM DB2 for LUW would fail when the size of the files needed for the connection (such as the SSL client certificate and the keystore file) exceeded 4 KB.

  • The DB2 driver installation would fail when using the driver installation utility.

End of support: September 7, 2024

2023.5.15 (released December 12, 2023)

Microsoft Fabric joins the ever-expanding list of data warehouses that can be used as targets in data pipeline projects.

Updated the Snowflake driver version in the driver installation utility.

End of support: September 7, 2024

2023.5.10 (released October 31, 2023)

A private connection can be used to ensure your data traffic remains secure and compliant. It simplifies both the network management and the security of your VPC (Virtual Private Cloud), without the need to open inbound firewall ports or configure proxy devices and routing tables. Qlik Cloud Data Integration already supports private connections to Snowflake, Microsoft SQL Server, and Amazon S3 data pipeline targets. With this release, customers can additionally use private connections when moving data to Databricks, Microsoft Azure Synapse, Google BigQuery, and Amazon Redshift.

  • All PostgreSQL-based data sources - Multirange data type support: The following multirange data types are now supported with all PostgreSQL-based data sources (on-premises and cloud); an example follows this list.

    • INT4MULTIRANGE
    • INT8MULTIRANGE
    • NUMMULTIRANGE
    • TSMULTIRANGE
  • AWS Aurora Cloud for PostgreSQL data source - Non-superuser role support: The user specified in the PostgreSQL connector no longer needs to have the superuser role to move data from an AWS Aurora Cloud for PostgreSQL data source. This is especially useful for organizations with corporate security policies that prevent them from granting superuser access to non-privileged users.
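As a small illustration of the multirange support, the sketch below creates a PostgreSQL table with an INT4MULTIRANGE column that such a task could replicate; the connection details and table name are hypothetical:

    # Create a table with a multirange column on a (hypothetical) PostgreSQL 14+ source.
    psql -h pg.example.com -U repl_user -d sales -c \
      "CREATE TABLE booking_windows (id int PRIMARY KEY, slots int4multirange);"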

Customers moving data from a Microsoft Azure SQL (MS-CDC) data source can now use a Geo Replica database.

Qlik Cloud Data Integration now supports reading data from Oracle encrypted tablespaces and encrypted columns during CDC.

Qlik Cloud Data Integration now supports tenants in the DE and UK regions.

This section lists newly supported database and driver versions.

  • Newly supported data source versions. The following data source versions are now supported:

    • Microsoft SQL Server 2022
    • Oracle 21c
    • PostgreSQL 15.x
    • DB2 13.1 (when working with IBM DB2 for z/OS)
    • IBM DB2 for iSeries 7.5
  • Newly supported target data platform versions. The following data target versions are now supported:

    • Databricks (Cloud Storage): Databricks 12.2 LTS and Databricks SQL Serverless
  • Driver versions. The following ODBC driver versions are now supported:

    • IBM Data Server Client 11.5.8 for IBM DB2 for z/OS and IBM DB2 for LUW
    • Simba ODBC driver 3.0.0.1001 for Google Cloud BigQuery
    • MySQL ODBC Unicode Driver 64-bit 8.0.32

Customers moving data to or from Microsoft SQL Server need to upgrade their SQL Server ODBC driver version to 18.x or later. Note that continuing to use SQL Server ODBC Driver 17.x might result in data errors. Upgrading the driver can be done using either the driver installation utility or manually. For instructions, see Driver setup.
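One way to confirm which SQL Server ODBC driver is registered, assuming the driver was installed through unixODBC as is typical on the gateway's Linux host, is:

    # List registered ODBC drivers and filter for the SQL Server entries.
    odbcinst -q -d | grep -i "SQL Server"
    # After upgrading, this should report: [ODBC Driver 18 for SQL Server]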

This section provides information about end-of-support database versions.

  • End-of-support data source versions. Support for the following data source versions has been discontinued:

    • PostgreSQL 10
    • MySQL 5.6
    • MariaDB 10.3
  • End-of-support target data platform versions. Support for the following data target versions has been discontinued:

    • Databricks 9.1 LTS

End of support: September 7, 2024

2022.11.74 (released August 15, 2023)

We take a cloud-first approach that enables rapid innovation and adoption. However, that does not mean that we are cloud-only. As part of our continued commitment to improving the long-term value for our customers, we are pleased to announce the release of a new use case for Replication when creating data projects. The new Replication use case is in addition to the existing ability to create data pipelines for all your data integration needs, such as Data Warehouse modernization.

The Replication project supports real-time data replication from supported data sources to a supported target.

Starting with this latest release, the first target to support real-time data replication is Microsoft SQL Server, in the following deployments:

  • On-premises
  • Amazon RDS
  • Google Cloud
  • Microsoft Azure (Microsoft Azure Managed Instance and Microsoft Azure Database)

Customers moving data to Microsoft Azure Synapse Analytics need to upgrade their SQL Server ODBC driver version to 18.x or later. Note that continuing to use SQL Server ODBC Driver 17.x might result in data errors. Upgrading the driver can be done using either the driver installation utility or manually. For instructions, see Driver setup.

A new Load data from source option was introduced, which allows customers to read their data directly from the source during Full Load, instead of using the cached data.

For more information on this option including use cases, see Landing settings.

Data Movement gateway 2022.11.74 includes updated CA certificates, which are needed for authenticating the Qlik Cloud tenant. The updated CA certificates also provide support for the Ireland and Frankfurt regions. Therefore, customers with Qlik Cloud tenants in Ireland or Frankfurt who want to use Qlik Cloud Data Integration must upgrade to this version.

Tasks landing data from an Oracle source would sometimes fail when a wide table contained unused or unsupported columns, or LOB columns that were not replicated.

End of support: April 30, 2024

2022.11.70 (released June 28, 2023)

In previous versions, customers needed to run the "source arep_login.sh" command when installing SAP clients. From this version, it is no longer necessary to run this command.

This version includes updated CA certificates, which are needed for authenticating the Qlik Cloud tenant.

This release resolves the following issues:

  • When a replication task on Data Movement gateway failed and recovered automatically, the recovered state would not be communicated to the Landing data asset in Qlik Cloud.

  • End-to-end encryption for Data Movement gateway would not be enabled by default, and was controlled by runtime flags.
End of support: September 15, 2023

2022.11.63 (released May 2, 2023)

This version introduces a driver installation utility that eliminates the need to manually install and configure drivers. The new utility shortens the installation process while significantly reducing the possibility of user error. When the utility is run, the required driver is downloaded automatically, if possible, and installed. If the driver cannot be downloaded (DB2 drivers require login, for example), all you need to do is download the driver, copy it to a dedicated folder on the Data Movement gateway machine, and run the utility.

For an example of using the driver installation utility to install a PostgreSQL driver, see Prerequisites.

The Snowflake connector now supports 4-byte emoji characters.

The PostgreSQL connector can now move data from Azure Database for PostgreSQL - Flexible Server.

The PostgreSQL connector can now move data from Cloud SQL for PostgreSQL.

This version introduces support for the following new data source versions:

  • PostgreSQL 14
  • DB2 (for IBM DB2 for z/OS) 12.1
  • IBM DB2 for z/OS 2.5

This version introduces support for the following new target platform version:

  • Databricks 11.3 LTS

The following data source versions are no longer supported:

  • DB2 (for IBM DB2 for z/OS) 11
  • PostgreSQL 9.6
  • Microsoft SQL Server 2012
  • MariaDB 10.2

This version resolves the following issues:

  • Oracle data source: When stopping and resuming a task, the task would sometimes fail with a “Failed to set stream position on context” error.
  • SAP Application source: Changes would not be captured during the landing task.

End of support: September 15, 2023

2022.5.13 (released October 19, 2022)

Initial release.

End of support: August 2, 2023

Uninstalling Data Movement gateway

To uninstall Data Movement gateway, run the following command:

rpm -e <installation package name>

Example:

rpm -e qlik-data-gateway-data-movement-2023.11-1.x86_64

If you do not know the package name, run:

rpm -qa | grep qlik
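The lookup and removal can be combined into a short sketch, assuming exactly one matching package is installed:

    # Find the installed gateway package and remove it.
    PKG=$(rpm -qa | grep qlik-data-gateway-data-movement)
    sudo rpm -e "$PKG"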

Warning note: Uninstalling Data Movement gateway will cause all tasks currently using the data gateway to fail.
