
Navigating the ML deployment interface

When you open an ML deployment, you can perform management and monitoring activities and use the deployment to create predictions on datasets.

Open an ML deployment from the catalog. There are navigation options for the following:

  • Model approval

  • Deployment information

  • Dataset predictions

  • Real-time predictions

  • Data drift and operations monitoring

Model approval status

Before the ML deployment can generate predictions, its source model needs to be activated. This process is known as model approval, and helps to control the number of actively used deployed models in the subscription.

If you have the correct permissions, you can activate and deactivate the source model as needed. Otherwise, contact a tenant administrator or other user with sufficient permissions.


Screenshot: model approval status shown at the top of the page when you open an ML deployment. Here the model is marked 'Enabled', meaning it is approved to create predictions.

Deployment overview

The Deployment overview shows the features used to train the model and details about the deployment.

Screenshot: the Deployment overview pane.

Dataset predictions

The Dataset predictions pane displays an overview of the ML deployment's prediction configurations. A deployment can have several prediction configurations.

You can use the Actions menu to run, edit, or delete predictions. You can also edit and delete prediction schedules from this menu.

If no schedule is currently configured for your prediction, you can also use the Actions menu to create a new prediction schedule.

Screenshot: the Dataset predictions pane with the Actions menu expanded.

If you select Edit prediction configuration, the Prediction configuration pane is opened.

Screenshot: the Prediction configuration pane and dataset schemas shown when creating predictions.

Real-time predictions

The Real-time predictions pane gives you access to the real-time prediction endpoint in the Machine Learning API. This pane is visible only if the model in the ML deployment is activated for making predictions.

For information about creating real-time predictions, see Creating real-time predictions.

Information note

The standalone real-time predictions API is deprecated and has been replaced by the real-time prediction endpoint in the Machine Learning API. The functionality itself is not being deprecated. For new real-time predictions, use the real-time prediction endpoint in the Machine Learning API.
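As an illustration of the general pattern, a real-time prediction call sends rows of feature data to the deployment's endpoint and receives predictions in the response. The tenant URL, deployment ID, endpoint path, and payload shape in this sketch are assumptions for illustration only, not the documented Machine Learning API contract; see Creating real-time predictions for the actual specification.

```python
import json

# Hypothetical values -- substitute your own tenant URL, deployment ID,
# and API key. These are placeholders, not real identifiers.
TENANT = "https://your-tenant.example.com"
DEPLOYMENT_ID = "abc123"
API_KEY = "your-api-key"

def build_prediction_request(rows):
    """Build the URL, headers, and JSON body for a real-time prediction call.

    The endpoint path and payload shape here are assumptions for the
    sketch; check the Machine Learning API reference for the actual
    request and response schema.
    """
    url = f"{TENANT}/api/v1/ml/deployments/{DEPLOYMENT_ID}/realtime-predictions"
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"rows": rows})
    return url, headers, body

# Example: one row of feature values keyed by feature name.
url, headers, body = build_prediction_request(
    [{"age": 42, "plan": "premium", "monthly_spend": 79.5}]
)
# The request itself would then be sent with any HTTP client, for example:
#   requests.post(url, headers=headers, data=body)
```

The request is kept separate from its construction here so you can inspect the payload before sending it with whatever HTTP client you prefer.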

Model monitoring

You can monitor data drift and operations for the ML deployment. To perform model monitoring, open the Data drift monitoring pane.

With data drift monitoring, you can assess changes in the distribution of the features used by the source model. Significant drift may indicate new patterns in your data; when you observe it, it is recommended that you retrain or reconfigure your model to account for the latest data.

For more information, see Monitoring data drift in deployed models.
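To make the idea concrete, drift between a baseline (training) distribution of a feature and the data seen at prediction time is commonly quantified with a metric such as the population stability index (PSI). The sketch below is a generic illustration of that concept, not the calculation AutoML itself performs:

```python
import math

def population_stability_index(baseline, recent, bins=10):
    """Score distribution shift between two samples of a numeric feature.

    Common rule of thumb: PSI < 0.1 suggests little drift, 0.1-0.25
    moderate drift, and > 0.25 significant drift worth a retrain.
    This is a generic illustration, not the product's own drift metric.
    """
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / bins or 1.0

    def bin_fractions(sample):
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)
            counts[max(i, 0)] += 1
        # Floor each fraction to avoid log(0) on empty bins.
        return [max(c / len(sample), 1e-6) for c in counts]

    b = bin_fractions(baseline)
    r = bin_fractions(recent)
    return sum((rb - bb) * math.log(rb / bb) for bb, rb in zip(b, r))

# Identical samples yield a PSI of zero; a shifted sample scores high.
base = [x / 10 for x in range(1000)]
same_psi = population_stability_index(base, base)
shifted_psi = population_stability_index(base, [x + 40 for x in base])
```

Whatever metric a monitoring tool uses, the principle is the same: compare recent feature distributions against the training baseline and flag features whose shift exceeds a threshold.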

With operations monitoring, you can view details about how the ML deployment is being used, such as how many prediction events succeed or fail, and how prediction events are typically triggered.

For more information, see Monitoring deployed model operations.

Screenshot: the Data drift monitoring pane in AutoML, an embedded analysis showing feature drift calculations for a deployed model. The sheet includes visualizations of feature drift over time, value distributions, and a chart comparing feature drift and importance.

View ML experiment

Click View ML experiment in the bottom left corner of the page to open the ML experiment from which the ML deployment was created.

Screenshot: the View ML experiment button at the bottom of the ML deployment interface, which returns you to the source ML experiment.
