Viewing logs in Qlik Sense Enterprise for elastic deployments

All services in Qlik Sense Enterprise for elastic deployments emit log data that can be used to debug issues and monitor activity. Logs can be read on demand, or they can be collated and pushed to a log aggregation product for further analysis and use.

Viewing service logs

To inspect the recent logs of a service, for example when debugging an issue, you can use the Kubernetes CLI (kubectl) or other Kubernetes management tools to quickly view log data.

The following assumes you have the kubectl tool installed and connected to your Kubernetes cluster.

Run the following to get a list of all the running services. The output also indicates whether any services are reporting issues:

kubectl get pods

From the list, identify the service whose logs you want to inspect and run the following, adjusting the pod name as needed.

kubectl logs qsefe-engine-dhwksfhf

This will render the recent log entries to the console in JSON format.
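
If you want to follow the log output live or limit how much history is returned, kubectl supports additional flags. For example, using the example pod name from above, the following streams new entries and starts from the last 100 lines:

kubectl logs --tail=100 -f qsefe-engine-dhwksfhf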

Collating and forwarding logs

The logs produced can be forwarded to log aggregation tools, where all the system logs can be gathered, stored, searched, and viewed en masse.

Below is an example of using third-party tools, including:

  • Gathering your system logs in fluentd
  • Storing your log files in Elasticsearch

    Note: Elasticsearch requires a significant amount of resources and is therefore not recommended to run on your local machine unless your Kubernetes cluster has plenty of available memory and CPU.
  • Consuming your log files in Kibana
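
The Helm commands in the sections below install charts from the incubator and stable chart repositories. If your Helm client does not already have these repositories configured, you may need to add them first; the URLs below are the community chart archive locations and may differ in your environment:

helm repo add incubator https://charts.helm.sh/incubator
helm repo add stable https://charts.helm.sh/stable
helm repo update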

Installing Elasticsearch

Elasticsearch is a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.

In this example, we install a minimal setup of Elasticsearch that does not include any persistence.

  1. Create a file named elasticsearch.yaml to configure your installation preferences, and add the following:

    image:
      tag: "6.1.4"
    client:
      replicas: 1
      resources:
        limits:
          cpu: "0.5"
          memory: "1024Mi" ## not setting a limit here can take down the cluster using all available memory
        # requests: # use defaults
        #   cpu: "25m"
        #   memory: "512Mi"
    master:
      persistence:
        enabled: false
      replicas: 2
      # heapSize: "512m" ## use default, should be less than request, MUST be less than limit
      resources:
        limits:
          cpu: "0.5"
          memory: "1024Mi" ## set a limit
        # requests: # use defaults
        #   cpu: "25m"
        #   memory: "512Mi"
    data:
      persistence:
        enabled: false
      replicas: 1
      heapSize: "512m"
      resources:
        limits:
          cpu: "0.5"
          memory: "1024Mi"
        requests:
          cpu: "25m"
          memory: "512Mi"
  2. Run the following command to install Elasticsearch:

    helm upgrade --install elasticsearch incubator/elasticsearch -f ./elasticsearch.yaml
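
You can verify that the release was created and that the Elasticsearch client, master, and data pods start up before continuing, for example:

helm status elasticsearch
kubectl get pods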

Installing fluentd

Fluentd is an open source data collector for a unified logging layer. It allows you to unify data collection and consumption for better use and understanding of your data. Follow these steps to install fluentd.

  1. Create a file named fluentd.yaml to configure your installation preferences, and add the following:

    elasticsearch:
      host: elasticsearch-elasticsearch-client
  2. Run the following command to install fluentd:

    helm upgrade --install fluentd incubator/fluentd-elasticsearch -f fluentd.yaml
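
The fluentd-elasticsearch chart is expected to run a fluentd collector on every node (typically as a DaemonSet, although this is an assumption about the chart's defaults). You can confirm that the collector pods are running, for example:

kubectl get daemonset
kubectl get pods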

Installing Kibana

Kibana lets you visualize your Elasticsearch data and navigate the Elastic Stack. You can use it to view and search your logs. Follow these steps to install Kibana.

  1. Create a file named kibana.yaml to configure your installation preferences, and add the following:

    env:
      ELASTICSEARCH_URL: http://elasticsearch-elasticsearch-client:9200
  2. Run the following command to install Kibana:

    helm upgrade --install kibana stable/kibana -f kibana.yaml
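
Before accessing Kibana, you can wait for its pod to become ready, for example by watching the pods that match the labels used in the port-forward command below:

kubectl get pods -l "app=kibana,release=kibana" -w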

Accessing Kibana

Run the following command to access Kibana:

export POD_NAME=$(kubectl get pods --namespace default -l "app=kibana,release=kibana" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:5601 to access Kibana"
kubectl port-forward $POD_NAME 5601

In Kibana you can run the following query to test your setup:

kubernetes.container_name:engine
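
If the query returns no results, check that fluentd is writing indices to Elasticsearch and that a matching index pattern exists in Kibana. The fluentd-elasticsearch chart typically writes logstash-* indices (an assumption about its default configuration), so an index pattern such as logstash-* is usually appropriate. To list the indices, you can port-forward the Elasticsearch client service referenced earlier and query the _cat API, for example:

kubectl port-forward svc/elasticsearch-elasticsearch-client 9200 &
curl "http://localhost:9200/_cat/indices?v"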