
Configuring NFS connector in Qlik Sense Enterprise on Kubernetes

Using the NFS connector you can access persistent volumes to bring data from NFS networked drive provisioners into Qlik Sense Enterprise on Kubernetes.

By default, a pre-configured NFS connector is added during the installation of Qlik Sense Enterprise on Kubernetes (QSEoK).

The chart deploys an NFS service data connector on a Kubernetes cluster using the Helm package manager.

For more information, see the Kubernetes and Helm documentation.

Note: Many of the code examples contain placeholder values that need to be replaced by your own values.

Applying the configuration to your cluster

Use Helm to apply the configuration in your values.yml file to your Kubernetes cluster:

$ helm upgrade \
  --install \
  qliksense qlik/qliksense \
  -f values.yml

To make sure that your configuration has been applied, you can run the get values command to see the resolved configuration:

$ helm get values qliksense

devMode:
  enabled: true
engine:
  acceptEULA: "yes"
identity-providers:
  secrets:
    idpConfigs:
      - discoveryUrl: "https://adfs-host/adfs/.well-known/openid-configuration"
        clientId: "https://adfs.elastic.example/1234567890"
        clientSecret: "<client secret>"
        realm: "ADFS"
        hostname: "adfs.elastic.example"
        useClaimsFromIdToken: true
        claimsMapping:
          sub: ["sub", "appid"]
          client_id: "appid"
          name: "display_name"
      - issuerConfig:
          issuer: https://the-issuer
        primary: false
        realm: "ADFS"
        hostname: "adfs.elastic.example"
        staticKeys:
        - kid: "thekid"
          pem: |-
            -----BEGIN PUBLIC KEY-----
            MHYwEAYHKoZIzj0CAQYFK4EEACIDYgAEsMSxQjXxrvqoKSAREQXsr5Q7+/aetjEb
            OUHt8/Cf73WD56cb4QbHthALl5Ej4MUFOAL9imDmVQe58o9b1j5Zo16Rt1gjLDvd
            nqstc+PX4tyxqGadItJAOU3jka7jYghA
            -----END PUBLIC KEY-----

Enabling the connector

A pre-configured NFS connector is added during the installation of Qlik Sense Enterprise on Kubernetes. By default, the connector is disabled in the configuration. You can enable it in the values.yml file.

Do the following:

  1. In your values.yml file, change the following parameter from enabled: false to enabled: true:

     data-connector-nfs:
       enabled: true

  2. Use Helm to apply the configuration in your values.yml file to your Kubernetes cluster.

Specifying the NFS connection

When installing QSEoK you can specify your NFS connection as follows:

  • A parameter in the helm install command (see the example after this list).
  • A reference to the connection settings in a values.yaml file that is passed to the helm install command.
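
For the first approach, here is a minimal sketch that enables the connector directly on the command line (data-connector-nfs.enabled is the same setting shown in the values file examples in this topic):

helm upgrade \
  --install qliksense qlik/qliksense \
  --set data-connector-nfs.enabled=true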

Referencing values.yaml

Create the values.yaml file and include the settings that you want to reference in the helm install command.

  • Set the devMode.enabled value to false to disable development mode.
  • Set the NFS.uri value to the NFS connection string.

Example: values.yaml

engine:
  acceptEULA: "yes"

devMode:
  enabled: false
data-connector-nfs:
  enabled: true

identity-providers:
  secrets:
    idpConfigs:
      - <your IdP configuration here>

The values.yaml file is then referenced in the helm install command:

helm upgrade \
  --install qliksense qlik/qliksense \
  -f values.yaml

Configuring the parameters

The following table lists the configurable parameters of the data-connector-nfs chart and their default values.

Chart parameters
Properties | Description | Default
global.imageRegistry | The global image registry (overrides the default image.registry). | nil
image.registry | The default registry where the repositories are pulled from. | qliktech-docker.jfrog.io
image.repository | Image name with no registry. | data-connector-nfs
image.tag | Image version. | 0.0.12
image.pullPolicy | Image pull policy. | Always if image.tag is latest, else IfNotPresent
imagePullSecrets | A list of secret names for accessing private image registries. | [{name: "artifactory-docker-secret"}]
replicaCount | Number of replicas. | 1
service.type | The service type. | ClusterIP
service.port | data-connector-nfs listen port. | 8080
resources.requests.cpu | CPU reservation. | 0.1
resources.requests.memory | Memory reservation. | 128Mi
resources.limits.cpu | CPU limit. | 0.5
resources.limits.memory | Memory limit. | 512Mi
metrics.prometheus.enabled | Enable annotations for Prometheus scraping. | true
configs.data.keysUri | URI where the JWKS used to validate JWTs is located. | http://{{ .Release.Name }}-keys:8080/v1/keys/qlik.api.internal
secrets.stringData.tokenAuthPrivateKey | The private key that corresponds to the JWKS in the authentication service. | See values
secrets.stringData.tokenAuthPrivateKeyId | The key ID that corresponds to the JWKS in the authentication service. | zpiZ-klS65lfcq1K0-o29Sa0AAZYYr4ON_1VCtAbMEA
configs.data.tokenAuthUri | The URI to the authentication service to get an internal token. | http://{{ .Release.Name }}-edge-auth:8080
persistence.enabled | Configure persistent volume claims for NFS connections. | false
configs.data.spacesUri | URI where the Spaces service is located. | http://{{ .Release.Name }}-spaces:6080
configs.data.pdsUri | URI where the Policy Decisions service is located. | http://{{ .Release.Name }}-policy-decisions:5080
hpa.enabled | Toggle the horizontal pod autoscaler. | false
hpa.minReplicas | Minimum replicas for the pod autoscaler. | 3
hpa.maxReplicas | Maximum replicas for the pod autoscaler. | 6
hpa.targetAverageUtilizationCpu | CPU utilization target for the pod autoscaler. | 80

Specify each parameter using the --set key=value[,key=value] argument to helm install. Alternatively, you can provide a YAML file that specifies the parameter values when installing the chart.

For example:

helm install --name my-release -f values.yaml qlik/data-connector-nfs
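
Alternatively, as an illustrative sketch of the --set form, one of the parameters from the table above can be overridden directly on the command line (the value shown is only an example):

helm install --name my-release --set replicaCount=2 qlik/data-connector-nfs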

Configuring volume mounts and declaring NFS connections

Persistent volumes

Persistent volumes are created using standard Kubernetes manifests. The types of storage provisioners that are allowed depend on the Kubernetes environment that Qlik Sense Enterprise is being deployed to, as each environment has its own set of supported provisioners. Many Kubernetes environments have NFS, or similar networked drive, provisioners. The requirement from the NFS connector side is that the persistent volume must support either the ReadWriteMany or ReadOnlyMany access modes.

The following is an example of declaring a persistent volume using the docker 'hostpath' provisioner:


kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: localstorage
provisioner: docker.io/hostpath
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-connector-nfs-test-pv
  labels:
    type: local
spec:
  storageClassName: localstorage
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    # This is the path to the directory on your laptop that you want
    # to give the NFS connector access to.
    path: "/some/data/directory"
    type: "Directory"

The manifest gets deployed using the kubectl apply -f manifest.yaml command.
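
For example, assuming the manifest above is saved as manifest.yaml, you can apply it and then verify that the new persistent volume is reported as Available:

kubectl apply -f manifest.yaml
kubectl get pv data-connector-nfs-test-pv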

Persistent volume claims

Persistent volume claims that bind to the previously declared persistent volumes are defined in the values.yaml override file used to configure the data-connector-nfs pods. The claims are defined in the persistence section of the values.yaml file.

Here is an example of declaring two persistent volume claims:


data-connector-nfs:
  persistence:
    enabled: true
    pvc-1:
      storageClass: localstorage
      accessMode: ReadWriteMany
      size: 5Gi
    pvc-2:
      storageClass: localstorage
      accessMode: ReadOnlyMany
      size: 5Gi

The claim names (in this example, pvc-1 and pvc-2) can be whatever you want them to be. What is important is that the properties declared for these PVCs (storageClass, accessMode, and so on) match the properties of the persistent volumes that were declared previously. Kubernetes uses this property matching to bind the PVCs to the existing PVs.
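
After the configuration has been applied, you can confirm the binding by checking that each claim reports a Bound status (replace the placeholder with the namespace of your deployment):

kubectl get pvc --namespace <your QSEoK namespace>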

Volume mounts

Next, the PVCs declared in the previous section need to be mounted into the data-connector-nfs pod containers. This is also done in the values.yaml override file, in the deployment section.

Here is an example of declaring two volume mounts that mount the two PVCs above:


data-connector-nfs:
  deployment:
    container:
      volumeMounts:
        pvc-1:
          mountPath: /tmp/MyReadWriteDir/
        pvc-2:
          readOnly: true
          mountPath: /tmp/MyReadOnlyDir/

Here, the volume mount names (pvc-1, pvc-2) must match the PVC names declared in the previous section. The mountPath can be any directory, but using subdirectories of /tmp/ is a reasonable convention. Declaring a volume mount as readOnly allows the connector to enforce read-only access to that directory.
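
As a quick check, you can inspect a running data-connector-nfs pod and confirm that both directories appear under Mounts. The label selector below is an assumption based on common Helm chart conventions; adjust it to match your deployment:

kubectl get pods -l app=data-connector-nfs
kubectl describe pod <data-connector-nfs pod name>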

NFS connection declarations

The NFS connections are defined at deployment time; that is, users do not create these connections. Each connection is declared to live in a shared space. The permissions on the shared space define the access that users have to the NFS connections within that space. NFS connections are also defined in the values.yaml override file, in the configs section.

Here is an example of declaring two NFS connections which provide access to the volume mounts defined in the previous section:


data-connector-nfs:
  configs:
    data:
      nfsConnections_0_Name: "NfsConnection1"
      nfsConnections_0_Path: "/tmp/MyReadWriteDir"
      nfsConnections_0_SpaceId: "5e5422dcb6dfec00014ffaea"
      nfsConnections_1_Name: "NfsConnection2"
      nfsConnections_1_Path: "/tmp/MyReadOnlyDir/SomeSubDir"
      nfsConnections_1_SpaceId: "5e5422dcb6dfec00014ffaea"

Each connection is defined by three properties: Name, Path, and SpaceId. The space ID defines which shared space the NFS connection lives in. In this example, both NFS connections reside in the same space. The Path must be the root directory of one of the volume mounts declared in the previous section, or a subdirectory of one of those root directories. It is possible to define multiple NFS connections that point to different subdirectories within the same volume mount root directory, as shown in the sketch below. The three properties for each NFS connection declaration are grouped together using an ascending numerical index for each connection that is declared (in this example, 0 and 1).
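
For example, here is a minimal sketch of two connections that share the same volume mount root but point to different subdirectories. The connection names, subdirectories, and space ID are placeholders:

data-connector-nfs:
  configs:
    data:
      nfsConnections_0_Name: "SalesData"
      nfsConnections_0_Path: "/tmp/MyReadWriteDir/Sales"
      nfsConnections_0_SpaceId: "<shared space ID>"
      nfsConnections_1_Name: "FinanceData"
      nfsConnections_1_Path: "/tmp/MyReadWriteDir/Finance"
      nfsConnections_1_SpaceId: "<shared space ID>"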

Default values of the sub chart

Below is the data-connector-nfs sub-chart section containing all the default values:

Note: Sections of the code have been commented out.

## data-connector-nfs contains the configurations for the data-connector-nfs sub-chart
data-connector-nfs:
  enabled: false
  deployment:
    replicas: 1
    container:
      ## data-connector-nfs resource limits
      ##
      resources:
        limits:
          cpu: null
          memory: null
        requests:
          cpu: null
          memory: null
      ## Volume mounts are defined here.  Each one must map to some base
      ## directory in the NFS connection definitions above.  Multiple
      ## volume mounts can be defined.  The name of each volume mount must
      ## match the name of a persistence declaration in the next
      ## section.  The volume mounts will be read/write by default,
      ## but you can enforce them as read-only by setting the readOnly
      ## property to true.
      # volumeMounts:
        # pvc-1:
          # readOnly: false
          # mountPath: /tmp/MountedVolume1Dir/
      ## Define NFS connections here.  Each connection must be associated
      ## with a shared space, and must also be associated with a volume
      ## mount.  Multiple connections can be defined (each with a set of
      ## properties with an index starting at 0).
  # configs:
    # data:
      # nfsConnections_0_Name: "NfsConnection1"
      # nfsConnections_0_Path: "/tmp/MountedVolume1Dir"
      # nfsConnections_0_SpaceId: [Some shared space ID]

    ## The persistent volume claims are defined here.  They need to
    ## match (based on the properties) persistent volumes that were
    ## declared elsewhere.  More than one persistent volume claim can be
    ## defined here.
  # persistence:
    # enabled: true
    # persistentVolumeClaim:
      # pvc-1:
        # storageClass: localstorage
        # accessMode: ReadWriteMany
        # size: 5Gi

Updating the DCaaS configuration to recognize the NFS connector

The DCaaS service may not recognize the data connector NFS service by default. If this is the case, you need to update its configuration using a values.yml override file with an updated env section.

Here is an example of an env section that updates DCaaS in Qlik Sense Enterprise for elastic deployments:


dcaas:
  env:
    connector_service: "{{ .Release.Name }}-data-connector-rest-rld:{{ .Release.Name }}-data-connector-rest-cmd:50060 {{ .Release.Name }}-data-connector-qwc-rld:{{ .Release.Name }}-data-connector-qwc-cmd:50060 {{ .Release.Name }}-data-connector-odbc-rld:{{ .Release.Name }}-data-connector-odbc-cmd:50060 {{ .Release.Name }}-data-connector-sap-sql-rld:{{ .Release.Name }}-data-connector-sap-sql-cmd:50060 {{ .Release.Name }}-qix-datafiles:50051 {{ .Release.Name }}-data-connector-nfs:50051"

Note that a pointer to the data connector NFS service, running on port 50051, has been added to the end of the connector_service list.

DCaaS also requires that a data connector NFS feature flag is enabled in order to recognize the connector. In Qlik Sense Enterprise for elastic deployments, this feature flag would be enabled as follows:


feature-flags:
  configmaps:
    create: true
    featureFlagsConfig:
      {
        "globalFeatures": {
          "data-connector-nfs": true,
          ...
        }
      }

Redeploying a Helm chart

In order for the changes to the values.yml file to be applied, the data connector NFS and DCaaS Helm charts (as part of the Qlik Sense Enterprise on Kubernetes chart-of-charts) need to be redeployed using the helm upgrade --install command.
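
For example, for the qliksense release used earlier in this topic, the chart-of-charts is redeployed with the override file as follows:

helm upgrade \
  --install qliksense qlik/qliksense \
  -f values.yml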

To install the chart with a release name my-release:

helm install --name my-release qlik/data-connector-nfs

The command will deploy the declared NFS connections on the Kubernetes cluster with the upgraded configuration.