Deploying the index cluster

Before you start to deploy Qlik Big Data Index to the Kubernetes cluster, make sure that you have completed the required preparations and have access to all required security information.

Installing Qlik Big Data Index

You install the Qlik Big Data Index Helm chart with the helm install command, providing a release name and the path to the Helm chart repository.

This will install Qlik Big Data Index using the default configuration from values.yaml located in the values folder. You need to adapt the deployment to your Kubernetes environment and storage provisioning, as described in the section Configuring the deployment.

Helm 2

helm install --name <release-name> <chart-repository-path> --set acceptLicense=true --set 'license.key=<license_key_here>' [-f <yaml-file>]

Example:  

helm install --name qabdi bt_qlik/bdi \
  --set 'license.key=<license_key_here>' \
  --set acceptLicense=true

Helm 3

helm install <release-name> <chart-repository-path> --set acceptLicense=true --set 'license.key=<license_key_here>' [-f <yaml-file>]

Example:  

helm install qabdi bt_qlik/bdi \
  --set 'license.key=<license_key_here>' \
  --set acceptLicense=true

Configuring the deployment

You can adapt the deployment to your Kubernetes environment by overriding the default configuration in two ways. The examples below are Helm 2 commands.

  • Providing one or more .yaml files containing alternative settings. These will override any settings in the values.yaml file.

    helm install --name bt_bdi qlik/bdi -f my_values.yaml
  • Using the --set flag to override a single setting. This will override any setting in the values.yaml file, or any other .yaml file supplied in the install.

    helm install --name bt_bdi qlik/bdi --set acceptLicense=true
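The two methods can be combined in one install; a value passed with --set takes precedence over the same setting in any supplied .yaml file, which in turn overrides values.yaml. For example (Helm 2, assuming a my_values.yaml file in the current directory):

```
# Override defaults with a custom values file, then override a single
# setting from that file with --set (highest precedence).
helm install --name bt_bdi qlik/bdi \
  -f my_values.yaml \
  --set acceptLicense=true
```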

Enabling data source input from a cloud shared storage

You can enable data source input from a cloud shared storage to make it easier to set up and tear down an index cluster and enhance performance. Source data is automatically copied to a local cache on all indexing nodes.

Enabling output to a cloud shared storage

You can enable data output to a local cache on all nodes via a cloud shared storage to enhance performance. Output data (indexlets, symbol tables and symbol positions) is automatically copied to all QSL worker nodes.

Secure communication

When you have deployed the index cluster, the communication is not encrypted by default. We recommend that you ensure that the communication between Qlik Sense and the index cluster is secure. Secure communication on connection level is mandatory in Qlik Sense Enterprise SaaS and Qlik Sense Enterprise on Kubernetes.

Affinity

You can add scheduling constraints for pods to prevent symbol servers, indexers, and QSL workers from being scheduled on the same node.
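The Kubernetes mechanism behind such constraints is pod anti-affinity. As a sketch only, a values fragment of the following shape keeps pods apart from pods carrying a given label; the key names and labels here are illustrative, not the chart's actual schema:

```yaml
# Hypothetical values fragment: pod anti-affinity keeping QSL workers
# off nodes that already run indexer or symbol server pods.
# Key names and label values are illustrative; check the chart's values.yaml.
qslworker:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values: ["indexer", "symbol-server"]
          topologyKey: kubernetes.io/hostname
```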

Indexing settings

You can configure indexing settings (indexing_setting.json) directly from the chart.

Ingress

You can make the management console available on port 80 of your qlik-nginx-ingress-controller node's external IP by enabling nginx ingress.
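In Helm-based deployments this is typically a single switch in the values. As an illustrative sketch (the key name is an assumption, not confirmed chart schema):

```yaml
# Hypothetical values fragment enabling the nginx ingress controller;
# the actual key name may differ in the chart's values.yaml.
nginx-ingress:
  enabled: true
```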

Persistence

You need to configure persistence to manage file storage. There are three volumes that need to be configured:

  • config

    A shared volume that contains all configuration JSON files.

  • data

    A shared volume that houses data that will be indexed by Qlik Big Data Index.

  • output

    A shared volume where all output data and additional configuration files will be stored during indexing.

You can create dynamic volumes or static volumes.

  • Dynamic volumes rely on a provisioner to dynamically create PersistentVolumes against a storage class. Dynamic volumes are best suited for storage that is not needed after the cluster is removed. Dynamic volumes can be configured by enabling persistence and providing a StorageClass (that supports dynamic provisioning).

  • Static volumes are manually created PersistentVolumes. Unlike dynamic volumes, they do not require a StorageClass but require configuration of a plug-in. Static volumes are best suited for pre-existing file systems where data already exists.

    Static volumes can be configured by enabling persistence and configuring the volume block.
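As a sketch of the two approaches, a values fragment along these lines would request dynamic provisioning for one volume and bind another to a pre-existing NFS file system. All key names below the persistence level, and the NFS server and path, are illustrative assumptions; check the chart's values.yaml for the real schema:

```yaml
# Hypothetical persistence configuration; names and values are illustrative.
persistence:
  enabled: true
  output:
    # Dynamic volume: provisioned on demand against a StorageClass
    # that supports dynamic provisioning.
    storageClass: standard
  data:
    # Static volume: plug-in configuration for a pre-existing
    # NFS file system where the source data already lives.
    volume:
      nfs:
        server: nfs.example.com
        path: /exports/bdi-data
```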

Load balancer

You need to create a load balancer and attach its ports and external IP by using a Kubernetes service of type LoadBalancer.
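Outside of any chart-specific settings, a generic Kubernetes service of this type looks as follows; the name, selector label, and port are illustrative placeholders, not values taken from the Qlik Big Data Index chart:

```yaml
# Generic Service of type LoadBalancer (illustrative names and ports).
apiVersion: v1
kind: Service
metadata:
  name: bdi-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: qsl-worker      # hypothetical pod label
  ports:
    - port: 9999         # illustrative external port
      targetPort: 9999   # illustrative container port
```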

Limiting QSL worker resources

You can set CPU limits for QSL workers to prevent them from starving other services on the node of resources. We recommend that you leave 2 vCPUs per node for other services.

Example: QSL worker CPU limits

If you have a node with 72 vCPUs available, and each node runs 2 QSL workers, each QSL worker should be limited to 35 vCPUs. This leaves 2 vCPUs available for other services.

qslworker:
  resources:
    requests:
      cpu: "1"
    limits:
      cpu: "35"

Enabling the license

You need to enable the license and add a license key when you install the Helm chart. You can do this in one of three ways.

  • With --set flags when installing the cluster:

    --set 'license.key=<license_key_here>'
  • Add the license settings to the values.yaml file in the Helm chart.

    license:
      ## ABDI License key.
      key: "<license_key_here>"
  • Create an additional .yaml file with the same license settings as above, and include the .yaml file when installing the cluster.
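For the third option, the additional file is passed with the -f flag, just as when overriding other settings. For example (Helm 2; the file name license.yaml is illustrative):

```
# Pass the license settings from a separate file at install time.
helm install --name qabdi bt_qlik/bdi \
  --set acceptLicense=true \
  -f license.yaml
```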

Accepting the Qlik User License Agreement (QULA)

You need to accept the Qlik User License Agreement (QULA) to be able to start indexing and QSL services. You can agree to the license by setting acceptLicense=true when installing the Helm chart.

If you do not accept the license when installing the Helm chart, you can accept it later by exporting ACCEPT_QULA=true.

Deleting a deployment

You can delete an existing deployment with the helm del command in Helm 2 or helm uninstall in Helm 3.

Warning note: This will delete the deployment with the given release name, including all pods, volumes, and associated data.

Helm 2

helm del --purge <release-name>

Example:  

$ helm del --purge qabdi

Helm 3

helm uninstall <release-name>

Example:  

$ helm uninstall qabdi
