
Monitoring deployed model operations

You can view details about the usage of your ML deployment for predictions, such as how many rows have been predicted or how each prediction was triggered. To access these details, open the Data drift monitoring tab in your ML deployment.

Information note: Model operations analysis is only available in English.

Image: Model operations analysis in AutoML. The embedded analysis shows visualizations with details such as the number of predictions, the number of prediction requests, and the trigger for each prediction event.

View a detailed log showing each prediction event, along with key details such as when it happened, who initiated it, whether it finished successfully, and how it was triggered.

Navigating embedded analytics

Use the interactive interface to analyze the deployed model with embedded analytics.

Switching between sheets

The Sheets panel lets you switch between the sheets in the analysis. Each sheet has a specific focus. The panel can be expanded and collapsed as needed.

The Operations sheet contains all information about model operations and usage. Switching to the Data drift monitoring sheet allows you to monitor data drift for each feature used to train your model. For more information, see Monitoring data drift in deployed models.

Making selections

Use selections to refine the data. You can select features and their specific values or ranges, and filter for specific dates and importance ranges. In some cases, you might need to make one or more selections for visualizations to be displayed. Click data values in visualizations to make selections.

You can work with selections in the following ways:

  • Select values by clicking content, defining ranges, and drawing.

  • Search within charts to select values.

  • Click a selected field in the toolbar at the top of the embedded analysis. This allows you to search in existing selections, lock or unlock them, and further modify them.

  • In the toolbar at the top of the embedded analysis, click Remove to remove a selection. Clear all selections by clicking the Clear selections icon.

  • Step forward and backward in your selections by clicking Step backward in selections and Step forward in selections.

Customizing tables

You can customize the look and feel of the table visualizations, as well as which columns they display. Customize tables with the following options:

  • Adjust column width by clicking and dragging the outside border of the column

  • Click a column header to:

    • Adjust the column's sorting

    • Search for values in the column

    • Apply selections

Launching a model operations analysis

  1. Open an ML deployment.

  2. From the left panel, select Data drift monitoring.

  3. An embedded analysis is generated. Switch to the Operations sheet.

Availability of the analysis

To launch a model operations analysis, you first need to launch a data drift monitoring analysis. If you keep the analysis open for longer than the session timeout of 15 minutes, the session expires and you need to create a new one.

To refresh the analysis, reload your browser window and navigate back to Data drift monitoring. Switch to the Operations sheet.

Available analysis options

In your model operations analysis, you can access the following details about the deployment:

  • The number of prediction requests (Prediction requests object).

  • The number of individual predictions generated (Predictions object). For each row in a dataset to predict, one prediction is generated. For real-time predictions, each individual prediction is tracked (see the sketch after this list).

  • Breakdowns of the number of prediction requests and predictions by trigger.

  • A detailed timeline for all prediction events, showing the number of prediction events, as well as the number of successful and failed predictions.

  • A breakdown of the hour at which each prediction request and prediction completion occurred.

  • A detailed log showing each prediction event, along with key details such as when it happened, who initiated it, whether it finished successfully or not, and how it was triggered.
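
To make the request and prediction counts concrete: a single programmatic call counts as one prediction request, and each row in its payload produces one prediction. The following is a minimal, hypothetical Python sketch of a real-time prediction call. The tenant URL, endpoint path, request body schema, deployment ID, and feature names are assumptions rather than details from this page; consult the Qlik Cloud API reference for the exact contract.

  import requests

  # Hypothetical values -- replace with your own tenant, deployment, and API key.
  TENANT = "https://your-tenant.us.qlikcloud.com"  # assumed tenant URL format
  DEPLOYMENT_ID = "<deployment-id>"
  API_KEY = "<api-key>"

  # One call = one prediction request. Two rows = two individual predictions,
  # which is how they would be counted on the Operations sheet.
  payload = {
      "rows": [
          {"age": 42, "plan": "premium", "monthly_spend": 79.0},
          {"age": 31, "plan": "basic", "monthly_spend": 19.0},
      ]
  }

  # The endpoint path and payload schema below are assumptions; check the
  # Qlik Cloud API reference for the real-time prediction contract.
  response = requests.post(
      f"{TENANT}/api/v1/ml/deployments/{DEPLOYMENT_ID}/realtime-predictions",
      headers={"Authorization": f"Bearer {API_KEY}"},
      json=payload,
      timeout=30,
  )
  response.raise_for_status()
  print(response.json())

A scheduled or manually triggered batch prediction on a dataset is counted the same way: one prediction request, and one prediction per row in the dataset.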
