
Working with Apache Parquet files

Apache Parquet is a columnar storage format that is highly efficient for storing and querying large datasets. In Qlik Sense, you can read data from Parquet files, and store tables as Parquet files.

Parquet allows for efficient querying of specific columns in a table rather than reading the entire table, which makes it well suited to big data processing. Parquet also supports efficient compression and encoding of data, which can further reduce storage space and improve query performance.

Information note: All existing apps created in a Qlik Sense version before August 2023 must be manually updated to enable Parquet support. This is required both for deployments that are upgraded to August 2023, and when importing existing apps to a new deployment. For more information about updating the apps, see Enable Parquet file support for existing apps in Qlik Sense.

Creating Parquet files

You can create Parquet files using the Store command in the script. The command exports a previously loaded table, or part of it, to an explicitly named file at a location of your choice.

For more information, see Store.
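For example, a minimal sketch, assuming a table named MyTable has already been loaded and that a DataFiles folder connection exists (both names are illustrative):

```
// Export the previously loaded table MyTable to a Parquet file
Store MyTable into [lib://DataFiles/MyTable.parquet] (parquet);
```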

Reading data from Parquet files

You can read data from a Parquet file just like any other data file supported by Qlik Sense. You can load Parquet files in Data manager, in Data load editor, or when you add data to a new app.

For more information, see Loading data from files.

You can also load data from a Parquet file in the data load script with the LOAD command. For example:

LOAD * from xyz.parquet (parquet);

For more information, see Load.
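Because Parquet is columnar, listing only the fields you need avoids reading the rest of the file. A sketch, assuming xyz.parquet contains fields named Name and Sales (hypothetical field names):

```
// Load only two columns from the Parquet file
LOAD Name, Sales from xyz.parquet (parquet);
```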

Limitations

  • Nested field types are not supported by Qlik Sense. The fields are loaded but the content will be null.

  • Parquet files that contain an int96 timestamp field may not be loaded correctly.

    Int96 is a deprecated data type that contains a timestamp without timezone information. An attempt will be made to read the field as UTC, but as there are different vendor implementations, there is no guarantee of success.

    Verify the loaded data and adjust it to the correct timezone with an offset if required.
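As one possible adjustment, you can shift the loaded timestamp by a fixed offset in the script. A sketch, assuming a field named EventTime that was read as UTC and a target offset of UTC+2 (both the field name and the offset are illustrative):

```
// Shift a UTC timestamp by +2 hours; timestamps are stored as
// fractional days in Qlik, so 2 hours is 2/24 of a day
LOAD Timestamp(EventTime + 2/24) as EventTimeLocal
from xyz.parquet (parquet);
```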
