
Deploy an index cluster

Qlik Associative Big Data Index is deployed in a Virtual Private Cloud (VPC) using Kubernetes. You can deploy the cluster in a number of Kubernetes environments:

  • Amazon EKS (Amazon Elastic Container Service for Kubernetes)
  • Microsoft Azure
  • Google Cloud Platform
  • Non-managed Kubernetes environments
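Whichever environment you choose, you need kubectl and Helm installed and pointed at the intended cluster before you deploy. A minimal sanity check might look like the following; it assumes the tools are already on your path, and the context name it prints depends on your own configuration:

```shell
# Verify that the Kubernetes CLI and Helm are installed
kubectl version --client        # client-side only; does not contact the cluster
helm version --short

# Confirm that kubectl points at the intended cluster context
kubectl config current-context

# Requires a working connection to the cluster
kubectl get nodes
```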

Deploy the environment in the following order:

Deployment architecture

Understand the Qlik Associative Big Data Index architecture and the different pod types.

System requirements for Qlik Associative Big Data Index deployment

Make sure that your environment fulfills the system requirements. You need access to the following hardware, software, and information to prepare the environment for deployment.

Preparing an index cluster deployment

You need to make some preparations before you can deploy the Qlik Associative Big Data Index cluster with Helm.
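The preparation typically includes creating a dedicated namespace and collecting your deployment settings in a Helm values file. The sketch below is illustrative only: the namespace name and every key in the values file are placeholders, not the actual chart's schema, so substitute the values documented for the chart you install:

```shell
# Create a dedicated namespace for the index cluster (name is illustrative)
kubectl create namespace qabdi

# Store deployment settings in a values file to pass to Helm later.
# The keys below are placeholders; use the keys documented for the chart.
cat > qabdi-values.yaml <<'EOF'
persistence:
  storageClass: gp2      # e.g. an AWS EBS storage class on Amazon EKS
  size: 100Gi
EOF
```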

Deploying the index cluster

Start the deployment by installing the Qlik Associative Big Data Index Helm chart.
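In outline, the installation is a single helm install against the chart. The repository URL, chart name, release name, and namespace below are placeholders, not the official ones; replace them with the details provided for your deployment:

```shell
# Add the chart repository (URL and repo name are placeholders)
helm repo add qlik https://example.com/charts
helm repo update

# Install the chart into the prepared namespace, using a values file
helm install qabdi qlik/qlik-associative-bdi \
  --namespace qabdi \
  --values qabdi-values.yaml

# Watch the pods come up
kubectl get pods --namespace qabdi --watch
```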

Create a secure communication path

Secure communication between Qlik Sense and the index cluster

After you deploy the index cluster, communication between Qlik Sense and the cluster is not encrypted by default. We recommend that you secure this communication before using the cluster.
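A common Kubernetes pattern for enabling TLS between services is to store an existing certificate and private key as a TLS secret that the services can then reference. The sketch below is an assumption about the general approach, not the product's documented procedure; the secret name, namespace, and file paths are placeholders:

```shell
# Create a TLS secret from an existing certificate and private key
# (paths and names are illustrative)
kubectl create secret tls qabdi-tls \
  --cert=path/to/tls.crt \
  --key=path/to/tls.key \
  --namespace qabdi
```

How the index cluster consumes such a secret depends on the chart's configuration options, so consult the chart documentation for the exact values to set.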

Next step after the deployment

Index the data

The index is the main element of Qlik Associative Big Data Index. Before you can index the data, you need to prepare the data using a schema, and connect to the data with a specific connector.