
Reviewing the time series models

After the first version of the model training is finished, analyze the resulting model metrics and recommended models.

When you run the experiment version, you are taken to the Models tab, where you can start analyzing the resulting model metrics. You can access Schema view and Data view by returning to the Data tab. More granular analysis can be performed in the Compare and Analyze tabs.

Information note: Qlik Predict is continually improving its model training processes. Therefore, you might notice that the model metrics and other details shown in the images on this page are not identical to yours when you complete these exercises.

Models tab in the time series experiment.


Analyzing the Model metrics table

You are now on the Models tab. In the Model metrics section, recommended models are highlighted based on common quality requirements. The best model (marked with a trophy icon) has been selected automatically for analysis.

Three recommendations are provided from the models trained in the experiment. A single model can be represented in more than one recommendation. The recommendations are:

  • Best model (trophy icon): The model that best balances top-performing accuracy metrics and prediction speed.

  • Most accurate (target icon): The model that scores the highest in balanced and raw accuracy metrics.

  • Fastest model (lightning bolt icon): The model that has the fastest prediction speed, in addition to strong accuracy-related metrics.

It is important to choose the model that is best suited to your use case. In most cases, the Best model is the most favorable option. However, your predictive use case might require particular prediction speeds or accuracy metrics.

For an in-depth overview of how the top model types are determined, see Selecting the best model for you.

Model metrics table presenting scores and recommended models.


Analyzing the Model training summary

Shift your focus to the Model training summary on the right side of the interface. For time series experiments, the model training summary provides details about the time series configuration you have created. In particular, you can see that the maximum forecast window has now been confirmed and is no longer an estimate.

Model training summary showing details about the configured time series problem.


Analyzing the Prediction error in forecast window chart

The Prediction error in forecast window chart is an autogenerated visualization that gives you quick insight into how large the selected model's prediction errors are across the forecast window. You can view the tenth, fiftieth, and ninetieth percentile error.

Prediction error in forecast window chart for the selected model.

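To build intuition for what the chart's percentile lines represent, here is a minimal sketch of how percentile errors can be computed from per-step forecast errors. All data values are illustrative, and absolute error is assumed as the error measure; the exact metric and aggregation Qlik Predict uses may differ.

```python
import numpy as np

# Hypothetical test data: rows are series (or backtest windows),
# columns are steps ahead in the forecast window.
actual = np.array([[100.0, 102.0, 105.0],
                   [ 98.0, 101.0, 103.0],
                   [110.0, 108.0, 112.0]])
predicted = np.array([[101.0, 100.0, 108.0],
                      [ 97.0, 103.0, 100.0],
                      [109.0, 110.0, 111.0]])

# Absolute prediction error at each forecast step.
abs_error = np.abs(actual - predicted)

# 10th, 50th, and 90th percentile error per forecast step,
# mirroring the three lines shown in the chart.
p10, p50, p90 = np.percentile(abs_error, [10, 50, 90], axis=0)
print("p10:", p10)
print("p50:", p50)
print("p90:", p90)
```

Plotting these three arrays against the forecast step would reproduce the general shape of the chart: the median (fiftieth percentile) line shows typical error, while the spread between the tenth and ninetieth percentile lines shows how variable the error is.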

Compare and Analyze tabs

In the Compare and Analyze tabs, you can use embedded analytics to evaluate model metrics in greater detail.

The Analyze tab allows you to evaluate the model in detail. You can view the predictions for each group individually. This tab displays predicted versus actual values for the test set, per group.

Analyze tab in the time series experiment, showing actual versus forecasted values for a single value in group 1.

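The per-group comparison of predicted versus actual values can be summarized numerically as well as visually. The sketch below computes a mean absolute error for each group from hypothetical test-set values; the group names, values, and choice of metric are illustrative assumptions, not output from Qlik Predict.

```python
import numpy as np

# Hypothetical test-set results for two groups, analogous to the
# per-group predicted versus actual view in the Analyze tab.
results = {
    "Group 1": {"actual": np.array([10.0, 12.0, 11.0]),
                "predicted": np.array([11.0, 12.5, 10.0])},
    "Group 2": {"actual": np.array([200.0, 210.0, 205.0]),
                "predicted": np.array([198.0, 215.0, 207.0])},
}

# Mean absolute error per group, comparing predicted to actual
# values for each group individually.
mae = {group: float(np.mean(np.abs(r["actual"] - r["predicted"])))
       for group, r in results.items()}
print(mae)
```

A summary like this can help flag groups where the model performs noticeably worse than average, which are then worth inspecting individually in the Analyze tab.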

Next steps

In this tutorial, you will proceed to deploying the Best model identified from v1 of the training. Move to the next section about deploying your time series model.
