
Apache Spark

Connections to an Apache Spark database are made by selecting Apache Spark from the list of ODBC connectors in the QlikView ODBC Connection dialog, or in the Qlik Sense Add data or Data load editor dialogs.

The Apache Spark Connector provides direct SQL and HiveQL access to Apache Hadoop/Spark distributions. The connector translates an SQL query into the equivalent HiveQL and passes the query through to the database for processing.
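Once a connection has been created through one of the dialogs above, it can be used from the load script. A minimal sketch of a pass-through query, assuming a connection named 'Apache_Spark' already exists; the table and field names are placeholders, not part of this page:

```
// Load data through an existing Apache Spark connection.
// 'Apache_Spark' is an assumed connection name; 'default.sales'
// and its columns are hypothetical examples.
LIB CONNECT TO 'Apache_Spark';

Sales:
SQL SELECT order_id,
           amount,
           order_date
FROM default.sales;
```

The statement after SQL is passed through to the database, so it can use HiveQL syntax supported by the target Spark distribution.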

Information note: Industry-accepted best practices must be followed when using or allowing access through the ODBC Connector. Administrators must follow the principle of least privilege when setting up source database privileges and permissions.

Supported offerings

  • Qlik Sense Desktop/Qlik Sense Client-Managed Mobile
  • Qlik Sense Enterprise SaaS
  • QlikView (Requires separate installation)

Supported Apache Spark versions

The following database versions are supported:

  • 1.6
  • 2.1 through 2.4
  • 3.0

Supported Apache Spark data types

The following Apache Spark data types are supported by the Apache Spark Connector.

  • BigInt
  • Binary
  • Boolean
  • Date
  • Decimal
  • Double
  • Float
  • Integer
  • SmallInt
  • String
  • Timestamp
  • TinyInt
  • Varchar
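Columns of types outside this list can often be converted to a supported type in the pass-through query before they reach the connector. A hedged sketch, with placeholder table and column names; `to_json` and `CAST` are standard Spark SQL functions:

```
// Hypothetical example: a complex (Array/Map) column is not in the
// supported list, so it is serialized to a String, and a raw value
// is cast to Timestamp, inside the pass-through HiveQL.
Events:
SQL SELECT event_id,
           CAST(event_time AS TIMESTAMP) AS event_time,
           to_json(tags) AS tags_json
FROM default.events;
```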
