Subscription value meters
Qlik Cloud capacity-based subscriptions have the volume of Data for Analysis or Data Moved as the primary value meter. For Qlik Cloud Analytics Standard, the value meter is the number of Full Users. Qlik Talend Cloud Premium and Qlik Talend Cloud Enterprise additionally have the metrics Third party data transformations ($/GB), Job execution, and Job duration. Qlik Anonymous Access instead has the metrics Anonymous Capacity and Anonymous Concurrent Sessions.
The table shows the primary value meters for each of the subscription options.
Subscription option | Value meter |
---|---|
Qlik Cloud Analytics Standard | Full Users |
Qlik Cloud Analytics Premium | Data for Analysis |
Qlik Cloud Enterprise | Data for Analysis and Data Moved |
Qlik Talend Cloud Starter | Data Moved |
Qlik Talend Cloud Standard | Data Moved |
Qlik Talend Cloud Premium | Data Moved, Third party data transformations ($/GB), Job execution, and Job duration |
Qlik Talend Cloud Enterprise | Data Moved, Third party data transformations ($/GB), Job execution, and Job duration |
Qlik Anonymous Access | Anonymous Capacity and Anonymous Concurrent Sessions |
Administrators can monitor the consumption of Data for Analysis, Data Moved, Full Users, and other resources in the Administration activity center and the Data Capacity Reporting App. For more information, see Monitoring resource consumption and Monitoring usage with detailed consumption reports. The service account owner of the Qlik Cloud subscription can monitor consumption and view subscription details in My Qlik portal.
For more information about the license metrics, see the Qlik Cloud® Subscriptions product description.
Data for Analysis
Qlik Cloud Analytics is measured on Data for Analysis volume. Your peak monthly usage is measured against your purchased capacity. The Data for Analysis metric is the total of all data loaded into and residing in Qlik Cloud, as specified below.
The following data is included in the metric:
- Data loaded into Qlik Cloud from external sources. For reloads, new incremental data increases the data count. If the reload has less data, the data count decreases.
- Data files uploaded to or created in Qlik Cloud. The file size is counted. If you copy data files within Qlik Cloud, the new data files are included in the count.
The Data for Analysis metric is calculated as the sum of:
- The volume of external data ingested into Qlik Cloud via a Qlik Sense app.
- The resulting QVD file size from external data being loaded into Qlik Cloud via Qlik Data Gateway - Data Movement.
- The file size of data files uploaded to Qlik Cloud.
- The static byte size of the app.
Data loaded into multiple tenants is counted multiple times, whereas data that is loaded once and used in multiple apps is counted only once.
The following are not counted in the metric:
- Apps and data that are loaded into a personal space using On-demand app generation (ODAG).
- Data that is loaded through Direct Query.
Subscribing to Data for Analysis capacity
You subscribe to data packs based on your requirements for Data for Analysis. In addition to the data packs, each Full User entitlement includes a certain capacity of Data for Analysis. This data is restricted to the user’s personal space and is not counted toward the total Data for Analysis. However, if the user moves the data to a shared space to collaborate with other users, it will be counted.
Note that Qlik Cloud Analytics Standard has a fixed data capacity. For this edition, you subscribe based on the number of Full Users.
Moving data into Qlik Cloud
Your options for moving data include:
- Direct data connections from Qlik Sense
- Qlik Data Gateway - Direct Access
- Data movement to Qlik Cloud with Qlik Talend Data Integration
You can move data into Qlik Cloud from any source with the Premium and Enterprise editions of Qlik Cloud Analytics. With Qlik Cloud Analytics Standard, you can move data from any source except SAP, mainframe, and legacy sources.
Calculating and managing Data for Analysis volume
Understanding how Data for Analysis is calculated can help you make the most of your Data for Analysis capacity in Qlik Cloud. In this section, we'll take a closer look at how monthly peak, data loading, app reloads, and data creation are measured, as well as look at data management best practices.
Monthly peak
The daily peak represents the total Data for Analysis volume for a given day. The maximum daily peak for the month is your monthly peak, which is measured against your purchased Data for Analysis capacity.
The daily peak is calculated as the sum of all file sizes for formats like QVD, CSV, or text files plus the maximum bytes ingested from external sources for app reloads on that day.
Let's look at the following example:
Throughout the month, daily data usage varies. On day 1, it reaches a peak of 3 GB, while on day 2, it’s 6 GB, and so on until the end of the month. The highest daily usage, recorded on day 2, was 6 GB. This gives us a monthly peak of 6 GB. On days where there is no app reload, as seen on day 3 in our example, the value from the previous day is carried forward.
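The carry-forward and maximum logic described above can be sketched as a short calculation. This is a minimal illustration, not a Qlik utility; the day 4 value of 4 GB is a hypothetical addition to the example.

```python
def monthly_peak(daily_usage_gb):
    """Compute the monthly peak from daily Data for Analysis totals.

    daily_usage_gb: list of daily totals in GB; None means no app reload
    occurred that day, so the previous day's value is carried forward.
    """
    carried = 0.0
    daily_peaks = []
    for usage in daily_usage_gb:
        if usage is not None:
            carried = usage
        daily_peaks.append(carried)
    # The monthly peak is the highest daily peak of the month.
    return max(daily_peaks, default=0.0)

# Days 1-4: 3 GB, 6 GB, no reload (carried forward as 6 GB), 4 GB.
print(monthly_peak([3.0, 6.0, None, 4.0]))  # 6.0
```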
Loading data into Qlik Cloud
Data loaded into Qlik Cloud from external sources is counted towards the daily peak. When you load data into a tenant, it is counted once and can be analyzed and used multiple times. Data loaded into multiple tenants is counted multiple times.
Data contributing to the daily peak is measured as follows:
- File-based data loaded via a Qlik Sense app is measured by its file size.
- App reloads using queries or connectors are counted as the maximum bytes ingested from the data source. When there are multiple reloads happening on the same day, the largest app size is the one that will be counted towards the daily peak. For example, if an app is reloaded during a day with 0.75 GB, 1.25 GB, and 1 GB, respectively, the value used for that day would be 1.25 GB. As long as an app exists in the Qlik Cloud tenant, the maximum bytes ingested are evaluated for the app.
- Data loaded into Qlik Cloud via Qlik Data Gateway - Data Movement is measured by the size of the resulting QVD file.
- Apps that are uploaded or loaded via file import, either in the Analytics activity center or using qlik-cli, are measured by the app's static byte size.
- Uploaded QVD files are measured by their file size.
In the following situations, data is not included in the calculation of the daily peak:
- Data loaded into a user's personal space is not counted, as long as it is restricted to that space. If the user moves the data to a shared space to collaborate with others, it will be counted.
- If a reload fails, the bytes ingested are not counted. However, any resulting QVD files are counted.
- When you load an app with data that already resides in Qlik Cloud, the data load is not counted. For example, copied or binary loaded apps (loading data from another Qlik Sense app) do not impact the daily peak, provided they’re not reloaded from an external source.
Measuring bytes ingested for app reloads
The following applies when you reload a Qlik Sense app from an external source:
- You can reload an app multiple times from the same source dataset without affecting the daily peak, as long as the data volume remains unchanged.
- If the source dataset increases in size, the daily peak is affected. Each additional GB of data added to the dataset contributes an equivalent amount to the data ingested during the reload.
- Conversely, if the source dataset decreases in size, this reduction is also reflected in the daily peak. For example, if the dataset size is reduced by 0.25 GB, the reload size decreases by the same amount. However, if a 1 GB reload occurred earlier in the day, the daily peak for that day would be 1 GB. The reduction would only be reflected in the daily peak for the next day.
- Changes in the content of the source dataset, without altering its size, do not impact the daily peak. The daily peak is only affected by the data volume.
- If you query the same dataset multiple times within a single load script, all those queries are counted separately, and their data volumes are summed up. For example, if you have a load script that includes three queries of 1 GB each from the same dataset, all three of those queries are counted individually. So, the total data counted towards your daily peak is 3 GB.
- Loading an app and subsequently dropping the table does not reduce the daily peak, as the daily peak is based on the maximum app reload size for the day.
- If you load an app and then delete it on the same day, it will still contribute to the daily peak for that day. However, it reduces the daily peak for the following day when the app no longer exists.
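The per-app reload rules above can be sketched as a small calculation. This is an illustrative sketch, assuming each reload's size is the sum of the bytes ingested by all queries in its load script, and that the maximum reload of the day is what counts.

```python
def app_daily_contribution(reloads_gb):
    """Daily peak contribution of one app, per the rules above.

    reloads_gb: one list per reload performed that day; each inner list
    holds the bytes ingested (in GB) by each query in the load script.
    Queries within one script are summed; across reloads, only the
    maximum counts toward the daily peak.
    """
    if not reloads_gb:
        return 0.0
    return max(sum(queries) for queries in reloads_gb)

# Three reloads of 0.75 GB, 1.25 GB, and 1 GB: the maximum counts.
print(app_daily_contribution([[0.75], [1.25], [1.0]]))  # 1.25

# One reload whose script runs three 1 GB queries against the same
# dataset: the queries are summed.
print(app_daily_contribution([[1.0, 1.0, 1.0]]))  # 3.0
```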
Measuring data loaded into QVD files with Qlik Talend Data Integration
The following applies when you load data into a QVD file from an external source using Qlik Data Gateway - Data Movement:
- You can upload, import, or generate a dataset multiple times without affecting the daily peak, as long as the data volume remains unchanged.
- If the source dataset increases in size, the daily peak is affected. Each additional GB of data added to the dataset contributes an equivalent amount to the size of the resulting QVD file.
- Conversely, if the source dataset decreases in size, this reduction is also reflected in the daily peak. For example, if the dataset size is reduced by 0.25 GB, the size of the resulting QVD file decreases by the same amount.
- Changes in the content of the source dataset, without altering its size, do not impact the daily peak. The daily peak is only affected by the data volume.
Loading apps from external and internal sources
It is important to understand how data loaded into apps affects the daily peak, depending on the data source. Let's consider the following scenarios where data is loaded from different sources.
- An app is loaded from an external source
  When you load data from an external source into an app, it counts as bytes ingested. For example, if you load 10 GB, the contribution to the daily peak is 10 GB.
- An app is loaded from a QVD in Qlik Cloud
  Loading data into an app from a QVD file residing in Qlik Cloud does not count toward the daily peak. If 10 GB of data is loaded into an app from the QVD, no data is counted because there is no ingestion of external data. The contribution to the daily peak is 0 GB.
- A new QVD file is generated from a QVD in Qlik Cloud
  Data loaded into a QVD generator app from a Qlik Cloud-based QVD is not counted toward the daily peak. However, the resulting QVD file generated from the app does count. For example, if a 10 GB QVD file is transformed into a new 5 GB QVD, the contribution to the daily peak is the sum of the two files, which is 15 GB. As there is no external data ingestion, the load of the QVD generator app (a dedicated app that creates a data model and generates a QVD) is not counted.
- An app is loaded from both external and internal sources
  If an app loads 10 GB from an external source and 5 GB from a QVD within Qlik Cloud, the total contribution to the daily peak from the app is 10 GB, as only the data loaded from the external source is counted.
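The four scenarios can be summarized in one hypothetical helper: external ingestion and QVD files residing in Qlik Cloud count, while reads from internal QVDs do not.

```python
def contribution(external_ingest_gb=0.0, internal_read_gb=0.0,
                 qvds_residing_gb=()):
    """Daily peak contribution, per the scenarios above.

    external_ingest_gb: bytes ingested from outside Qlik Cloud (counted).
    internal_read_gb: data read from QVDs already in Qlik Cloud
        (never counted, so it does not appear in the sum).
    qvds_residing_gb: sizes of QVD files residing in Qlik Cloud (counted).
    """
    return external_ingest_gb + sum(qvds_residing_gb)

print(contribution(external_ingest_gb=10))                          # scenario 1: 10.0
print(contribution(internal_read_gb=10))                            # scenario 2: 0.0
print(contribution(internal_read_gb=10, qvds_residing_gb=(10, 5)))  # scenario 3: 15.0
print(contribution(external_ingest_gb=10, internal_read_gb=5))      # scenario 4: 10.0
```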
Creating data in Qlik Cloud
When you create new data in Qlik Cloud, whether by copying data files or by deriving new data through combining and processing existing raw data, it counts toward the daily peak. Data is measured as the total size of the files generated during the data creation process. The created data is only counted once, regardless of how many apps use it.
Consider these examples of data creation:
- Creating a 1 GB QVD file using the STORE statement adds 1 GB to the daily peak.
- Copying a 1 GB QVD file adds 1 GB to the daily peak, as both copies contribute to the total.
- Creating a 0.5 GB QVD file through transformation adds 0.5 GB to the daily peak. Only the resulting QVD file is counted; the QVD generator app isn't counted as it loads data that is already in Qlik Cloud.
Best practices for managing data
Proper data management improves performance and ensures that you get the most out of your Data for Analysis capacity. This section will show you how to handle data efficiently in Qlik Cloud.
- Create QVD files for data reuse
  When dealing with data that will be used in multiple Qlik Sense apps, consider creating QVD files. QVD files allow you to load data once and reuse it in multiple apps without adding to the daily peak. This can significantly reduce data ingestion and storage costs.
  For example, if you load 10 GB of external data and create a 5 GB QVD file, you have a total of 15 GB of data contributing to the daily peak. Loading the same data directly into two apps results in 20 GB.
  In general, creating QVD files for data reuse with Qlik Data Gateway - Data Movement is more efficient than reloading data directly via apps.
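The comparison above can be written out as simple arithmetic. The 10 GB load, 5 GB QVD, and two consuming apps are the figures from the example.

```python
external_load_gb = 10   # data ingested once from the external source
qvd_file_gb = 5         # QVD file created for reuse
consuming_apps = 2      # apps that need the same data

# With a reusable QVD: ingest once, store the QVD, apps read it for free.
with_qvd = external_load_gb + qvd_file_gb
# Without a QVD: each app ingests the external data separately.
without_qvd = external_load_gb * consuming_apps

print(with_qvd)     # 15
print(without_qvd)  # 20
```

The gap widens with every additional app that reuses the same data, since internal QVD reads add nothing to the daily peak.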
- Use efficient data loading methods
  Take advantage of SQL pushdown transformations to optimize data loading. This technique involves pushing data transformations and operations directly to the data source. By filtering and transforming data at the source, you reduce the volume of data transferred and improve loading efficiency.
  For example, in this pushdown query, the WHERE clause is processed directly on the source data. Only the subset of data that meets specific criteria is transferred, reducing the amount of data loaded into memory.
  SELECT * FROM my-external-database-table WHERE my_column = 10
  Note that in the case of loading QVD files, the WHERE clause is processed after the file is read from the source, and the entire file is counted.
- Use On-demand apps for large datasets
  On-demand apps (ODAG) are useful when dealing with large datasets. ODAG allows you to load aggregated data for the parent app and fetch more detailed data only when necessary. Users get aggregate views of big data stores, allowing them to identify and load relevant subsets of the data for detailed analysis. For more information, see On-demand apps.
- Handle large datasets with Direct Query and Dynamic Views
  For large datasets, consider using Direct Query and Dynamic Views. With these features, you can query and view relevant subsets of large datasets without the need to import or load all the data into memory. While there are some limitations compared to in-memory apps, it's an efficient way to work with substantial datasets. For more information, see Direct Query apps and Managing data with dynamic views.
- Regularly clean up unused apps and data files
  To optimize resource usage and improve overall site performance, regularly identify and delete unused apps and data files. The following steps can help with the cleanup:
  - Identify unused apps and data files in Catalog by sorting and checking the Last updated, Viewed by, and Used in columns. This lets you see if the item has been opened within the last 28 days and in how many apps a data file is used. For more information, see Viewer and item usage metrics.
  - Impact analysis and Lineage help you find out where a data file is used and which data files are used in a specific app. For more information, see Analyzing impact analysis for apps, scripts, and datasets and Analyzing lineage for apps, scripts, and datasets.
  - You can delete apps and data files from activity centers. Administrators can also delete apps from the Administration activity center.
Data Moved
The Data Moved metric is the sum of all data moved to a target. You can move data to any type of target. The type of sources you can move data from depends on your subscription. There is no limit on the number of targets or sources.
Data Moved is measured from the beginning of the month. It is counted as it is landed on the target. This means that the same data replicated to two different targets is counted twice. The initial full load of new tables or files is free and is not counted.
The Data Moved volume is calculated as the number of rows in the dataset multiplied by the estimated row size. The estimated row size is calculated as the total size of all the columns in a row, based on the data type of each column. For details on how the internal representation of the data types maps to your target schema, see Connecting to cloud data platforms in your data projects and go to the Data types section in the topic for your cloud data platform.
The row count used in the calculation of the Data Moved volume might differ slightly from the expected value. These small differences are expected and caused by technical artifacts that cannot be controlled by Qlik.
For example, when loading a big table, the database might send the same row twice (phantom reads) or count a row both as reload and as a change row. Differences might also arise in change counts when a change causes a trigger execution, which makes additional unexpected changes, and the change counts are read from a transaction log or a change source.
The Data Moved calculation is based on the landing dataset as it appears in Qlik Cloud. Changes to this dataset will be taken into account, such as adding new columns. If you are trying to reproduce the Data Moved volume calculations, make sure that you are using the right data types as they appear in Qlik Cloud and not the source, as this affects the column size in the calculation. For example, using varchar(20) instead of varchar(10) doubles the column contribution to the estimated row size.
The following table lists the size of each data type. The min() function used for BYTES, STRING, and WSTRING returns the smaller of the two values: length/2 or 200.
Data type | Size (in bytes) |
---|---|
Unspecified | 1 |
BOOLEAN | 1 |
BYTES(length) | min(length/2, 200) |
DATE | 4 |
TIME | 4 |
DATETIME | 8 |
INT1 | 1 |
INT2 | 2 |
INT4 | 4 |
INT8 | 8 |
REAL4 | 2 |
REAL8 | 4 |
UINT1 | 1 |
UINT2 | 2 |
UINT4 | 4 |
UINT8 | 8 |
NUMERIC | 2 |
STRING(length) | min(length/2, 200) |
WSTRING(length) | min(length/2, 200) |
BLOB | 200 |
CLOB | 200 |
NCLOB | 200 |
Example: Calculating Data Moved volume
In this example we have a dataset for product categories. The dataset has 100 rows and the following columns:
Column name | Data type |
---|---|
CategoryID | INT4 |
CategoryName | WSTRING(15) |
Description | NCLOB |
Picture | BLOB |
There is a fixed size for each data type:
Data type | Size (in bytes) |
---|---|
INT4 | 4 |
WSTRING(15) | min(15/2, 200) = 7.5 |
NCLOB | 200 |
BLOB | 200 |
We can now calculate the estimated row size as the sum of the column sizes: 4 + 7.5 + 200 + 200 = 411.5 bytes. Multiplied by 100 rows, this gives us a total data volume of 41,150 bytes.
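The worked example can be reproduced with a short script. This is an illustrative sketch of the calculation, not a Qlik utility; the size table follows the data type table above, and the columns and 100-row count are from the example.

```python
# Fixed sizes in bytes per data type, from the table above.
# Parameterized types (BYTES, STRING, WSTRING) use min(length/2, 200).
FIXED_SIZES = {
    "Unspecified": 1, "BOOLEAN": 1, "DATE": 4, "TIME": 4, "DATETIME": 8,
    "INT1": 1, "INT2": 2, "INT4": 4, "INT8": 8,
    "REAL4": 2, "REAL8": 4,
    "UINT1": 1, "UINT2": 2, "UINT4": 4, "UINT8": 8,
    "NUMERIC": 2, "BLOB": 200, "CLOB": 200, "NCLOB": 200,
}

def column_size(data_type, length=None):
    """Size in bytes of one column, based on its data type."""
    if data_type in ("BYTES", "STRING", "WSTRING"):
        return min(length / 2, 200)
    return FIXED_SIZES[data_type]

def data_moved_volume(columns, row_count):
    """Estimated row size (sum of the column sizes) times the row count."""
    return sum(column_size(t, length) for t, length in columns) * row_count

# The product categories example: INT4 + WSTRING(15) + NCLOB + BLOB,
# 100 rows: (4 + 7.5 + 200 + 200) * 100 = 41,150 bytes.
columns = [("INT4", None), ("WSTRING", 15), ("NCLOB", None), ("BLOB", None)]
print(data_moved_volume(columns, 100))  # 41150.0
```

Remember to use the data types as they appear in Qlik Cloud, not those of the source, since the length parameter directly affects the estimated row size.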
Third party data transformations
This metric applies to all datasets that are registered using the data task for Registered data. Third party transformations are measured in $/GB from the beginning of the month. The GB used for third party data transformations is calculated with the same logic as Data Moved, that is, the number of rows in the dataset multiplied by the estimated row size. For more information about how to calculate the estimated row size, see Data Moved.
When processing data through the Registered data task, full or initial load processing is counted toward the capacity. Subsequent executions will detect changed rows, and will only count changed records toward capacity.
Job execution and Job duration
Job execution and Job duration are the main metrics for Talend Data Fabric capabilities that are included in Qlik Talend Cloud subscriptions. A job is identified as a distinct Artifact ID as reported in the Talend Management Console.
- Job executions are the total number of job runs executed and concluded in a given month. Always-On Jobs are counted once in each month the job was running.
- Job duration means the total duration in minutes, measured from the time a job starts until it stops. For batch jobs, the duration is accounted for in the month the job successfully ended. For Always-On Jobs, duration is measured from the start execution time in each month the job has been running and calculated at 10% of the total minutes.
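The Job duration rules above can be sketched as a small helper; the run times used are hypothetical.

```python
def counted_job_duration_minutes(total_minutes, always_on=False):
    """Job duration counted toward the meter, per the rules above.

    Batch jobs count their full duration in the month they ended;
    Always-On Jobs count 10% of the total minutes the job was
    running in the month.
    """
    return total_minutes * 0.1 if always_on else total_minutes

print(counted_job_duration_minutes(120))                    # batch job: 120
print(counted_job_duration_minutes(43200, always_on=True))  # 30-day Always-On Job: 4320.0
```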
Full Users
Users with Full User entitlement can view, edit, and create content in apps, export charts and apps, work with data integration, automations, machine learning, and perform various other tasks, provided their user permissions and space permissions allow it. For more information, see Managing user entitlements.
Basic Users
The free Basic User entitlement is available with Qlik Cloud Analytics Premium and Qlik Cloud Enterprise subscriptions. It is intended for limited, read-only access. Basic Users cannot create or edit apps and other assets, or work with Data Integration. Granting additional permissions automatically promotes them to Full Users. For more information, see Managing user entitlements.
Anonymous Capacity
Anonymous Capacity is only applicable to Qlik Anonymous Access subscriptions. This value meter refers to the total RAM usage that all apps loaded into memory can consume at a given time. This includes tenant user sessions (sessions opened by Full Users and administrators within the tenant) and anonymous user sessions (sessions opened by users who are not logged into the Qlik Cloud tenant).
The Anonymous Capacity of a tenant is defined by the amount purchased within the subscription.
Anonymous Concurrent Sessions
Anonymous Concurrent Sessions is only applicable to Qlik Anonymous Access subscriptions. This value meter defines the maximum number of app sessions that can be run concurrently by anonymous users (users who are not logged into the Qlik Cloud tenant).
The Anonymous Concurrent Sessions of a tenant is defined by the amount purchased within the subscription. You can purchase up to 1000 sessions. For more information, see Qlik Anonymous Access specifications and capacity.