
Creating real-time predictions

Use your ML deployment to predict future outcomes on new data. You create real-time predictions using the real-time prediction endpoint in the Machine Learning API.

Predictions can be made in real time, such as real-time decisions about customer discounts at checkout. When predictions are generated, you can load the predictive insights into a Qlik Sense app. This lets you visualize and interact with the data and create what-if scenarios.

Information note

The real-time predictions API is deprecated and replaced by the real-time prediction endpoint in the Machine Learning API. The functionality itself is not being deprecated. For future real-time predictions, use the real-time prediction endpoint in the Machine Learning API.

Creating real-time predictions with the API

The Real-time predictions pane in the ML deployment interface gives you access to the real-time prediction endpoint in the Machine Learning API. This pane is visible if the default model in the ML deployment is activated for making predictions.

The real-time prediction endpoint enables two-way communication between AutoML and other capabilities in Qlik Cloud, including Qlik Sense and Automations, as well as external applications. You can use the endpoint to programmatically make predictions by passing data to a model and retrieving the prediction results in real time.

Real-time predictions pane

  1. Open the Real-time predictions pane in an ML deployment.

  2. Use the copy buttons to copy the applicable URL or JSON to your clipboard (for information about selecting which alias to use, see Working with model aliases in real-time predictions).

  3. Incorporate calls to the Machine Learning API into your own applications, or manually call the API using your desired tool.

    For real-time endpoint specifications for the Machine Learning API, see Generate predictions in a synchronous request/response.

For more general information about the Machine Learning API, see Machine Learning API.
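The steps above can be sketched in code. The example below builds a POST request to the real-time prediction endpoint using only the Python standard library. The tenant URL, deployment ID, API key, and feature names are placeholders, and the endpoint path and payload shape are assumptions for illustration: always copy the exact URL and JSON from the Real-time predictions pane rather than constructing them by hand.

```python
import json
import urllib.request

def build_prediction_request(tenant_url, deployment_id, api_key, rows):
    """Build (but do not send) a real-time prediction request.

    The URL path and payload structure below are illustrative; copy the
    actual URL and JSON template from the Real-time predictions pane.
    """
    url = f"{tenant_url}/api/v1/ml/deployments/{deployment_id}/realtime-predictions"
    payload = {"rows": rows}
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",  # Qlik Cloud API key
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Two new records to score; feature names are hypothetical.
req = build_prediction_request(
    "https://your-tenant.us.qlikcloud.com",  # placeholder tenant URL
    "your-deployment-id",                    # placeholder deployment ID
    "your-api-key",                          # placeholder API key
    [{"age": 42, "plan": "premium"}, {"age": 30, "plan": "basic"}],
)
# Sending the request, e.g. with urllib.request.urlopen(req), would
# return the prediction results synchronously.
```

Keeping the request construction in a small helper like this makes it easy to reuse the same call from an application, an automation, or an ad hoc script.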

Requirements for real-time predictions

Working with model aliases in real-time predictions

You can add multiple models to an ML deployment. A system of aliases is used in ML deployments to allow for dynamic swapping of models for use in predictions. For more information, see Using multiple models in your ML deployment.

When you copy your URL or JSON, the following options are available:

  • Default prediction: Use this option to generate predictions from the default alias in the ML deployment.

  • Alias prediction: Use this option when you want to generate predictions from any additional aliases you have added to the ML deployment. Select an alias using the drop-down menu, and then copy the URL or JSON.
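To illustrate the difference between the two options, the sketch below constructs a default URL and an alias-specific URL. The URL shapes, the `alias` query parameter, and the alias name "challenger" are all assumptions for illustration: the pane's copy buttons give you the correct URL for whichever alias you select, so prefer those over hand-built URLs.

```python
def prediction_url(tenant_url, deployment_id, alias=None):
    """Return a real-time prediction URL for the default or a named alias.

    Hypothetical URL shapes -- copy the exact URL from the
    Real-time predictions pane instead of constructing it manually.
    """
    base = f"{tenant_url}/api/v1/ml/deployments/{deployment_id}/realtime-predictions"
    if alias is None:
        return base                   # default alias in the ML deployment
    return f"{base}?alias={alias}"    # an additional alias you have added

tenant = "https://your-tenant.us.qlikcloud.com"  # placeholder tenant URL
default_url = prediction_url(tenant, "your-deployment-id")
alias_url = prediction_url(tenant, "your-deployment-id", alias="challenger")
```

Because predictions are always requested through an alias rather than a fixed model, you can later swap the model behind an alias without changing the URLs your applications call.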

Viewing data drift and prediction event details

After you run a real-time prediction, open the ML deployment and explore the Operations monitoring and Data drift monitoring panes. In these views, you can evaluate:

  • The level of data drift for each feature involved in the prediction. The comparison is performed between the data you send to the AutoML real-time prediction API and the training dataset.

  • Details about the prediction event, such as whether it succeeded or failed, and how many predictions it generated.

For more information, see:
