Viewing logs in Qlik Sense Enterprise on Kubernetes

All services in Qlik Sense Enterprise on Kubernetes emit log data that can be used for debugging issues and activity. Logs can be read on demand or they can be collated and pushed to a log aggregation product for further analysis and use.

Viewing service logs

To inspect the recent logs of a service, for example to debug an issue, the Kubernetes CLI (or other Kubernetes management tools) can be used to quickly view log data.

The following assumes you have the kubectl tool installed and connected to your Kubernetes cluster.

Run the following to get a list of all the running services. The output also shows whether any services are reporting issues.

kubectl get pods

Identify the service whose logs you want to inspect from the list and run the following, adjusting the pod name as needed.

kubectl logs qliksense-engine-xxxxxxx

This will render the recent log entries to the console in JSON format.
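Because the entries are JSON, they can be filtered with standard command-line tools. The sketch below assumes the entries carry a `level` field; the actual field names vary by service.

```shell
# Tail the last 50 entries and follow new output (pod name is a placeholder)
kubectl logs --tail=50 -f qliksense-engine-xxxxxxx

# Keep only entries at a given severity; the "level" field name is an
# assumption and may differ between services
kubectl logs qliksense-engine-xxxxxxx | grep '"level":"error"'
```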

If a pod is not running, for example it is in a Pending state, it may not emit any log entries. You can use the following command to see what issue Kubernetes is reporting with that pod's configuration:

kubectl describe pod qliksense-engine-XXXXX

There are two common reasons for a pod to not start:

  • Wrong storage configuration - the pod will report issues about the availability of its volume claims.
  • Insufficient resources - depending on the Kubernetes provider, there can be insufficient resources or a limit on how many pods can run on a node. In this case the pod will report errors about being “unschedulable”.
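Both conditions also surface in the cluster event stream, so inspecting events can be quicker than describing pods one by one. A possible check, assuming the default namespace:

```shell
# List recent events sorted by time; unschedulable pods and unbound
# volume claims both show up here
kubectl get events --sort-by=.metadata.creationTimestamp

# For storage problems, confirm whether the persistent volume claims are Bound
kubectl get pvc
```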

Collating and forwarding logs

The logs produced by the services can be forwarded to a log aggregation tool, where all the system logs can be gathered, stored, searched, and viewed en masse.

Below is an example using third-party tools, including:

  • Gathering your system logs in fluentd
  • Storing your log files in Elasticsearch

    Information note Elasticsearch requires a significant amount of resources and is therefore not recommended on a local machine unless your Kubernetes cluster has plenty of available memory and CPU.
  • Consuming your log files in Kibana

Installing Elasticsearch

Elasticsearch is a search engine that provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.

In this example we install a minimal setup of Elasticsearch that does not include any persistence.

  1. Create a file named elasticsearch.yaml to configure your installation preferences, and add the following:

      image:
        tag: "6.1.4"
      client:
        replicas: 1
        resources:
          limits:
            cpu: "0.5"
            memory: "1024Mi"  ## not setting a limit here can take down the cluster using all available memory
          # requests:   # use defaults
          #   cpu: "25m"
          #   memory: "512Mi"
      master:
        persistence:
          enabled: false
        replicas: 2
        # heapSize: "512m"    ## use default, should be less than request, MUST be less than limit
        resources:
          limits:
            cpu: "0.5"
            memory: "1024Mi"  ## set a limit
          # requests:   # use defaults
          #   cpu: "25m"
          #   memory: "512Mi"
      data:
        persistence:
          enabled: false
        replicas: 1
        heapSize: "512m"
        resources:
          limits:
            cpu: "0.5"
            memory: "1024Mi"
          requests:
            cpu: "25m"
            memory: "512Mi"
  2. Run the following command to install Elasticsearch:

    helm upgrade --install elasticsearch incubator/elasticsearch -f ./elasticsearch.yaml
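Before moving on, it is worth verifying that the Elasticsearch pods start and the cluster reports a healthy state. The service name below is an assumption that follows from the release name `elasticsearch` used above (it matches the `host` value used for fluentd later); adjust it if you chose a different release name.

```shell
# The client, master and data pods should all reach the Running state
kubectl get pods -l app=elasticsearch

# Check cluster health through a temporary port-forward to the client service
kubectl port-forward svc/elasticsearch-elasticsearch-client 9200:9200 &
curl http://localhost:9200/_cluster/health?pretty
```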

Installing fluentd

Fluentd is an open source data collector for a unified logging layer. It allows you to unify data collection and consumption for better use and understanding of data. Follow these steps to install fluentd.

  1. Create a file named fluentd.yaml to configure your installation preferences, and add the following:

      elasticsearch:
        host: elasticsearch-elasticsearch-client
  2. Run the following command to install fluentd:

    helm upgrade --install fluentd incubator/fluentd-elasticsearch -f fluentd.yaml
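fluentd-elasticsearch runs as a DaemonSet, so one collector pod is scheduled on every node to pick up container logs. A quick way to confirm it is running and shipping logs (the label selector is an assumption based on the chart name):

```shell
# One fluentd pod per node should be Running
kubectl get daemonset

# Inspect a collector pod's own log for connection errors to Elasticsearch
kubectl logs -l app=fluentd-elasticsearch --tail=20
```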

Installing Kibana

Kibana lets you visualize your Elasticsearch data and navigate the Elastic Stack. You can use it to view and search your logs. Follow these steps to install Kibana.

  1. Create a file named kibana.yaml to configure your installation preferences, and add the following:

      env:
        ELASTICSEARCH_URL: http://elasticsearch-elasticsearch-client:9200
  2. Run the following command to install Kibana:

    helm upgrade --install kibana stable/kibana -f kibana.yaml

Accessing Kibana

Run the following command to access Kibana:

export POD_NAME=$(kubectl get pods --namespace default -l "app=kibana,release=kibana" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://localhost:5601 to access Kibana"
kubectl port-forward $POD_NAME 5601:5601

In Kibana you can run the following query to test your setup: