AI-900 domain

Describe fundamental principles of machine learning on Azure

Use this page to practise questions from the AI-900 domain Describe fundamental principles of machine learning on Azure. The goal is not to memorise dumps, but to understand each concept, review the explanation and improve your exam readiness.

100 questions

Focused practice

Start a Describe fundamental principles of machine learning on Azure session

All sessions draw only from this domain. Pick a length or try interactive practice with inline explanations.

Start 20-question practice session →

What the exam tests

What to know about Describe fundamental principles of machine learning on Azure

Machine learning questions usually test which task type (regression, classification or clustering) fits a given scenario, and how to evaluate and troubleshoot a trained model.

Supervised learning (regression, classification), unsupervised learning (clustering) and reinforcement learning, with example scenarios for each.

Evaluation metrics: accuracy, precision, recall, F1, MAE, RMSE and R-squared, and when each is appropriate.

Overfitting vs underfitting, and mitigations such as regularisation, cross-validation and feature selection.

Why accuracy is misleading on imbalanced datasets, and which metrics to use instead.

Question index

All Describe fundamental principles of machine learning on Azure questions (100)

Click any question to see the full explanation, or start a practice session above.

1

A data scientist wants to train a machine learning model to predict the exact market price of a house based on features such as square footage, number of bedrooms, and location. Which type of machine learning task should be used?

2

A data scientist has trained a binary classification model to predict whether an email is spam (positive) or not spam (negative). On a test set, the model correctly identifies 90 out of 100 actual spam emails and 80 out of 100 actual non-spam emails. Which metric shows the proportion of actual spam emails that the model correctly predicted?
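The counts in this scenario map directly onto a confusion matrix. A minimal pure-Python sketch of the standard metric definitions (this illustrates the formulas, it is not the answer key):

```python
# Spam = positive class. From the scenario:
# 90 of 100 actual spam caught, 80 of 100 actual non-spam passed.
tp, fn = 90, 10   # actual spam: caught vs missed
tn, fp = 80, 20   # actual non-spam: passed vs wrongly flagged

recall = tp / (tp + fn)                       # share of actual spam caught
precision = tp / (tp + fp)                    # share of flagged mail that is spam
accuracy = (tp + tn) / (tp + tn + fp + fn)    # share of all mail classified correctly

print(recall, round(precision, 3), accuracy)  # 0.9 0.818 0.85
```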

3

A retail company wants to predict which customers are likely to stop using their service. They have a dataset with many customer attributes including age, income, purchase history, website activity, and support interactions. They suspect some features are redundant. Which technique should they use to reduce the number of features while preserving as much information as possible?

4

A retail company wants to automatically group its customers into distinct segments based on their purchasing patterns, without having pre-defined categories. The goal is to discover natural groupings in the customer data to tailor marketing campaigns. Which type of machine learning task should the company use?

5

A hospital has a dataset with historical patient records, each labeled as either 'readmitted within 30 days' or 'not readmitted'. The hospital wants to train a model to predict which current patients are likely to be readmitted. Which type of machine learning task is this?

6

A data scientist trains a machine learning model to predict housing prices. On the training data, the model achieves an R-squared value of 0.99, but on a separate validation dataset it achieves an R-squared of only 0.65. What is the most likely issue with this model?
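R-squared compares the model's squared error against a baseline that always predicts the mean of the actual values. A hedged sketch of the formula using made-up toy prices (not data from the question); a large gap between training and validation R-squared, as in the scenario above, is the classic overfitting signature:

```python
# R^2 = 1 - SS_res / SS_tot, on illustrative values only
actual    = [200_000, 250_000, 300_000, 350_000]
predicted = [210_000, 240_000, 310_000, 340_000]

mean_y = sum(actual) / len(actual)
ss_res = sum((y - p) ** 2 for y, p in zip(actual, predicted))  # model's squared error
ss_tot = sum((y - mean_y) ** 2 for y in actual)                # baseline squared error

r2 = 1 - ss_res / ss_tot
print(round(r2, 3))  # 0.968
```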

7

A data scientist trains a machine learning model on a dataset of housing prices. The model achieves 98% accuracy on the training data but only 72% accuracy on a separate test set. What is the most likely problem with this model?

8

A retail company wants to predict the exact number of units of a product that will be sold next month. They have historical sales data and information about promotions and holidays. The target variable is the number of units sold, which is a continuous value. Which type of machine learning task should they perform?

9

A data scientist trains a regression model on a dataset with 100 features and 10,000 samples. The model achieves a low training error but a much higher error on a held-out test set. Which approach is most likely to improve the model's generalization performance?

10

A data scientist is training a binary classification model to detect fraudulent transactions. The dataset contains only 1% fraudulent transactions. The model achieves 99% accuracy on the test set, but when deployed, it fails to detect most actual fraud cases. Which metric would best reveal this issue?
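The accuracy paradox in this scenario can be reproduced in a few lines: a degenerate model that always predicts the majority class scores 99% accuracy yet catches zero fraud (synthetic counts, assumed for illustration):

```python
# 1,000 transactions, 1% fraudulent; model predicts 'not fraud' for everything
actual = [1] * 10 + [0] * 990   # 1 = fraud (positive class)
predicted = [0] * 1000          # the degenerate always-negative model

tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
accuracy = sum(1 for a, p in zip(actual, predicted) if a == p) / len(actual)
recall = tp / (tp + fn)   # fraction of actual fraud caught

print(accuracy, recall)  # 0.99 0.0
```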

11

A data scientist is building a classification model to predict customer churn. The dataset has only 5% churn cases. The model achieves 95% accuracy on the test set, but upon investigation, the data scientist finds the model predicts 'not churn' for nearly every customer. Which metric should the data scientist primarily use to evaluate the model's performance on this imbalanced dataset?

12

A bike-sharing company wants to predict the number of rentals per hour. Their model's predictions are usually close but occasionally have large errors due to unexpected events like sudden rain. They want a metric that heavily penalizes these large errors to ensure the model is not overly confident. Which evaluation metric should they primarily use?

13

A data scientist trains a regression model to predict house prices. The model has a mean absolute error (MAE) of $5,000 on the test set. Which statement best interprets this metric?
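MAE is simply the mean of the absolute errors, expressed in the target's own units. A minimal sketch with hypothetical prices chosen so the MAE comes out at $5,000:

```python
actual    = [300_000, 450_000, 280_000]
predicted = [296_000, 458_000, 283_000]

# errors of $4,000, $8,000 and $3,000 average out to $5,000
mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)
print(mae)  # 5000.0 -> predictions are off by $5,000 on average
```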

14

A company builds a machine learning model to predict whether a customer will purchase a product. They use a training dataset with 50% purchasers and 50% non-purchasers. The model achieves 90% accuracy on the test set. However, when deployed, the model performs poorly because the actual customer base has only 5% purchasers. What is the most likely cause of this poor performance?

15

A data scientist trains a classification model to distinguish between images of cats and dogs. The model achieves 99% accuracy on the training set but only 75% accuracy on a validation set. Which concept best describes this situation?

16

A data scientist is building a model to predict the exact temperature in degrees Celsius based on humidity and atmospheric pressure. The model will output a single numeric value for each input. Which type of machine learning task is this?

17

A media company wants to automatically organize a large collection of news articles into several topic-based categories (e.g., politics, sports, technology) without using any predefined labels. They plan to use Azure Machine Learning. Which type of machine learning task should they use?

18

A retail company wants to segment its customers into different groups based on purchasing behavior, without using predefined categories. Which type of machine learning task should they use?

19

A data scientist trains a deep neural network on a small dataset. The model achieves 100% accuracy on the training data but only 60% accuracy on a validation set. Which technique is most appropriate to address this issue?

20

An online retailer wants to build a recommendation system that learns from user interactions. The system suggests a product, and if the user clicks it, it receives a positive reward; if ignored, a negative reward. Over time, the system learns to make better suggestions. Which type of machine learning best describes this approach?

21

A manufacturing team wants to predict product defects based on sensor readings from the production line. They have 10,000 historical samples, each labeled as 'defective' or 'non-defective'. Which type of machine learning should they use in Azure Machine Learning?

22

A data scientist trains a classification model to predict whether an email is 'phishing' or 'legitimate'. The model achieves 99% accuracy on the training data but only 68% accuracy on the test data. Which action is most likely to help improve the model's generalization performance?

23

A data scientist is training a regression model to predict house prices. The model performs near perfectly on the training data but poorly on a held-out test set. The scientist suspects the model is memorizing the training data instead of learning general patterns. Which technique is most appropriate to directly address this issue?

24

A data scientist is building a machine learning model to predict whether a credit card transaction is fraudulent or legitimate. The dataset contains 100,000 historical transactions, each labeled as 'fraudulent' or 'legitimate'. Which type of machine learning task should the data scientist use in Azure Machine Learning?

25

A data scientist is training a classification model on a dataset with 100 features and only 500 labeled samples. The model achieves 99% accuracy on the training data but only 68% accuracy on a held-out test set, indicating overfitting. Which technique is most appropriate to directly address this problem?

26

A data scientist is building a classification model to detect fraudulent transactions. The dataset has 1,000,000 legitimate transactions and only 1,000 fraudulent ones. The model achieves 99.9% accuracy on the test set, but it fails to catch most fraudulent cases. Which metric should the data scientist prioritize to better evaluate the model's performance on this imbalanced dataset?

27

A data scientist trains a binary classification model to distinguish between images of cats and dogs. On the test set, the model achieves 98% accuracy, but a deeper inspection reveals that the test set contains 95% cats and 5% dogs, and the model predicts 'cat' for every single image. Which metric should the data scientist prioritize to get a more realistic evaluation of the model's performance on this imbalanced dataset?

28

A retail company has a dataset of customer transaction records with no predefined categories. They want to identify natural groupings of customers based on their purchasing behavior to create targeted marketing campaigns. Which type of machine learning should they use in Azure Machine Learning?

29

A data scientist trains a regression model to predict house prices using features like square footage, number of bedrooms, and location. The model achieves very high accuracy on the training data but performs poorly on a held-out test set. Which technique should the data scientist apply to reduce overfitting?

30

A data scientist has a dataset containing thousands of labeled images of cats and dogs. The data scientist wants to train a model that can automatically classify new unlabeled images as either 'cat' or 'dog'. Which type of machine learning should the data scientist use?

31

A data scientist is training a logistic regression model to predict customer churn using a small dataset with 500 records and 200 features. The model achieves 97% accuracy on the training set but only 65% on a held-out test set, indicating severe overfitting. The data scientist wants to reduce overfitting by automatically eliminating irrelevant features. Which technique should the data scientist apply?

32

A data scientist uses Azure Machine Learning to train a model that predicts the electricity consumption (in kilowatt-hours) of a building based on features like building age, square footage, and number of occupants. The data scientist wants to evaluate how accurately the model's predictions match the actual consumption values. Which evaluation metric is most appropriate for this regression task?

33

A data scientist has a small dataset with only 200 labeled samples. They want to get a reliable estimate of model performance without using a separate validation set that would reduce the training data. Which technique should the data scientist use in Azure Machine Learning to obtain this reliable estimate?

34

A data scientist is training a regression model to predict house prices in Azure Machine Learning. The model uses features like square footage, number of bedrooms, and location (zip code). The data scientist notices that the model has a very low error on the training data but a high error on the test data. Which technique should the data scientist apply during model training to reduce overfitting by penalizing large coefficients?

35

A robotics company is training a drone to fly autonomously through an obstacle course. The drone receives positive rewards for staying on course and avoiding obstacles, and negative rewards for collisions. The system learns by trial and error to maximize its cumulative reward. Which type of machine learning is being used?

36

A data scientist is building a binary classification model to predict fraudulent credit card transactions. The dataset is highly imbalanced: only 1% of transactions are fraudulent. The cost of a false negative is very high because missing a fraudulent transaction can lead to significant financial loss. Which evaluation metric should the data scientist prioritize to minimize false negatives?

37

A data scientist is training a regression model to predict house prices using features like square footage, number of bedrooms, and location. After evaluating the model on a test set, the data scientist wants to select a metric that measures the average magnitude of prediction errors in the same units as the target variable (price). Which evaluation metric should the data scientist use?

38

A data scientist trains a regression model to predict the selling price of houses. After evaluating on a test set, the data scientist wants a metric that measures the average absolute error between predicted and actual prices, expressed in the same units (dollars) as the target variable. Which evaluation metric should the data scientist use?

39

A data scientist is training a model to classify customer reviews as positive, negative, or neutral. The dataset contains 10,000 reviews, but only 500 of them are negative. The data scientist wants to ensure the model performs well on the minority class (negative reviews). Which technique should the data scientist consider to address the class imbalance?

40

A data scientist wants to group customers into segments based on purchasing behavior without using any labeled examples. Which type of machine learning is this?

41

A data scientist is using Azure Automated Machine Learning to build a binary classification model for a highly imbalanced dataset (95% negative, 5% positive). The data scientist wants AutoML to select the best model based on a metric that is robust to class imbalance. Which primary metric should the data scientist configure in the AutoML settings?

42

A botanist uses Azure Automated Machine Learning to train a model that classifies iris flowers into three species: setosa, versicolor, and virginica. The dataset contains exactly 50 examples of each species, making it perfectly balanced. The botanist wants the primary metric to give equal importance to the classification performance of each species, regardless of their frequency. Which primary metric should the botanist select in Azure AutoML?

43

A data scientist is preparing a dataset to train a model that predicts customer churn. The dataset includes a column 'CustomerID' which is a unique identifier for each customer. Should the data scientist include the 'CustomerID' column as a feature in the training data?

44

A retail company wants to automatically group customers into segments based on their purchasing history, age, and location without using any predefined labels. The goal is to identify distinct customer profiles for targeted marketing campaigns. Which type of machine learning approach should they use?

45

A data scientist is training a model to predict whether a customer will purchase a product (Yes/No). The dataset contains 90% 'No' and 10% 'Yes'. After training, the model achieves 90% accuracy. Which evaluation metric would be more informative to assess the model's performance on the minority class?

46

A data scientist is training a regression model to predict house prices. The data scientist wants to evaluate the model using a metric that penalizes large prediction errors significantly more than small errors. Which evaluation metric should the data scientist choose?

47

A data scientist has a dataset containing images of handwritten digits (0-9) where each image is labeled with the correct digit. The goal is to train a model that can predict the digit from a new image. Which type of machine learning approach should be used?

48

A data scientist has a dataset containing information about houses: size (sq ft), number of bedrooms, location, and the actual sale price. The goal is to train a model that predicts the price of a new house based on these features. Which type of machine learning task is this?

49

A data scientist trains a machine learning model on historical sales data to predict future sales volume. The model achieves 99% accuracy on the training dataset but only 75% accuracy on a separate test dataset. What is the most likely issue with this model?

50

A data scientist trains a classification model to predict whether an email is spam or not. The model achieves 98% accuracy on the test set, but upon inspection, it classifies all emails as 'not spam' because the dataset has 95% non-spam emails. What is the most likely issue?

51

A data science team trains several machine learning models for a regression task. They observe that Model A has low training error and low test error. Model B has low training error but high test error. Model C has high training error and high test error. Which model would most likely benefit from an ensemble technique that averages the predictions of multiple models?

52

A data scientist has a dataset containing customer transaction records with features such as age, income, and purchase history, but no labels. The goal is to identify natural groupings of customers for a targeted marketing campaign. Which type of machine learning should be used?

53

A data scientist is building a machine learning model to predict the number of daily bike rentals in a city based on weather data and day of the week. The target variable is a continuous integer. Which type of machine learning task is this?

54

A data scientist trains a linear regression model to predict house prices. The model's training error is very high, and its test error is nearly as high. Which term best describes this situation?

55

A data scientist trains a model to predict house prices. The model achieves 99% accuracy on the training data but only 80% accuracy on new test data. Which technique is most likely to help improve the model's generalization?

56

A data scientist trains a model to predict customer churn. The dataset includes features like age, income, and number of support calls. The model performs well on historical data but poorly on new data from a different customer segment. Which technique is most likely to help improve generalization?

57

A robotics team is training a robot to navigate a maze. The robot receives a positive reward (+10) when it reaches the exit and a negative reward (-1) every time it bumps into a wall. The robot learns to maximize its cumulative reward over multiple trials. Which type of machine learning is being used?

58

A data scientist wants to train a model that predicts whether a customer will respond to a marketing offer (yes or no). The dataset includes features such as age, income, past purchase history, and the labeled outcome (responded or not responded) for previous customers. Which type of machine learning is this?

59

A data scientist trains a binary classification model to detect fraudulent transactions. The dataset contains only 1% fraudulent cases. The model predicts 'not fraudulent' for all transactions and achieves 99% accuracy. Which metric would best reveal the model's poor performance on fraud detection?

60

A data scientist is training a model to predict whether a patient has a rare disease (1% prevalence). The model predicts 'no disease' for all patients and achieves 99% accuracy, but fails to identify any actual cases. Which metric would best reveal this failure?

61

A data scientist trains a machine learning model to predict house prices based on features like square footage, number of bedrooms, and location. The model achieves a very low error on the training data but performs poorly on a held-out test set. Which term best describes this situation?

62

A retail company has historical data about customers, including age, purchase history, and whether they have churned (yes/no). They want to train a model that predicts if a new customer will churn. Which type of machine learning should they use?

63

A medical research team trains a model to detect a rare disease from lab results. The disease occurs in only 1% of patients. The model predicts 'no disease' for every patient and achieves 99% accuracy. Which metric best reveals that the model is failing to identify actual disease cases?

64

A manufacturer trains a model to detect defective parts on an assembly line. Only 2% of parts are defective. The model predicts 'non-defective' for all parts and achieves 98% accuracy. Which metric best reveals the model's inability to identify defective parts?

65

A data scientist trains a classification model on a dataset of 10,000 labeled emails to distinguish spam from non-spam. The model achieves 99% accuracy on the training data but only 70% accuracy on a held-out test set. Which term best describes this situation?

66

A real estate company has a dataset containing square footage, number of bedrooms, and location for 10,000 houses, along with their sale prices. They want to train a model that predicts the sale price of a new house based on these features. Which type of machine learning should they use?

67

A data scientist is evaluating a binary classification model that predicts whether a transaction is fraudulent. The test set contains 1,000 transactions: 990 legitimate and 10 fraudulent. The model's predictions are shown in the confusion matrix below.

                        Predicted legitimate   Predicted fraudulent
    Actual legitimate          942                     48
    Actual fraudulent            2                      8

Which metric should the data scientist prioritize if the business goal is to minimize the number of fraudulent transactions that are missed (false negatives)?
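The matrix counts in the question can be plugged straight into the standard metric definitions (a sketch of the arithmetic, not the answer key):

```python
# Fraud = positive class, from the confusion matrix in the question
tn, fp = 942, 48   # actual legitimate: passed vs wrongly flagged
fn, tp = 2, 8      # actual fraudulent: missed vs caught

recall = tp / (tp + fn)       # fraud caught: 8 of the 10 actual cases
precision = tp / (tp + fp)    # flagged transactions that were really fraud

print(recall, round(precision, 3))  # 0.8 0.143
```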

68

A data scientist trains a binary classification model to predict whether a loan applicant will default (positive class) or not (negative class). The training data contains 5% default cases. The model predicts 'no default' for every applicant in the test set and achieves 95% accuracy. Which evaluation metric best reveals that the model is failing to identify any default cases?

69

A retail company wants to analyze customer purchase histories to identify natural groups of customers with similar buying patterns. They do not have predefined categories. Which type of machine learning should they use?

70

A real estate company trains a model to predict house prices. They evaluate it on a test set of 100 houses. The model predictions have a mean absolute error (MAE) of $5,000 and a root mean squared error (RMSE) of $20,000. What does the large difference between MAE and RMSE indicate about the model's errors?

71

A data scientist evaluates a regression model that predicts house prices. On the test set, the Mean Absolute Error (MAE) is $8,000 and the Root Mean Squared Error (RMSE) is $25,000. What does the large difference between MAE and RMSE indicate about the model's errors?
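The MAE/RMSE gap in the two scenarios above comes from the squaring step: a single large error dominates RMSE while barely moving MAE. A sketch with hypothetical error values:

```python
# 99 small misses of $1,000 plus one $200,000 outlier (assumed values)
errors = [1_000] * 99 + [200_000]

mae = sum(abs(e) for e in errors) / len(errors)
rmse = (sum(e ** 2 for e in errors) / len(errors)) ** 0.5

# MAE stays near the typical error; RMSE is pulled toward the outlier
print(round(mae), round(rmse))  # 2990 20025
```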

72

A data scientist trains a model on historical data and achieves high accuracy on both the training set and a held-out test set. However, when the model is deployed in production, it performs poorly on new, unseen data. Which issue is most likely the cause?

73

A data scientist trains a binary classification model to detect fraudulent transactions. The dataset contains 99% legitimate transactions (negative class) and 1% fraudulent transactions (positive class). The model predicts 'legitimate' for every transaction in the test set and achieves 99% accuracy. Which metric would best reveal that the model is failing to identify any fraudulent transactions?

74

A data scientist trains a regression model to predict house prices. The model achieves very low error on the training data but significantly higher error on a held-out test set. Which problem does this scenario best describe?

75

A data scientist has a dataset with 100 features and 10,000 samples. They want to reduce the number of features while retaining as much variance as possible, to improve model training speed and reduce overfitting. Which technique should they use?

76

A data scientist trains a regression model to predict housing prices. The model uses polynomial features up to degree 5. It achieves an R-squared of 0.95 on the training set but only 0.60 on the test set. Which problem is the model most likely experiencing?

77

A data scientist is training a regression model to predict energy consumption. The dataset includes features like temperature, humidity, time of day, and day of week. After training, the model performs well on the training set but poorly on new data. Which approach would most likely help reduce this problem?

78

A data scientist trains a binary classification model to detect spam emails. The dataset contains 95% legitimate emails (negative class) and 5% spam (positive class). The model predicts all emails as legitimate. The accuracy is 95%, but the model is useless. Which metric would best indicate the model's failure?

79

A data scientist trains a binary classification model to predict loan defaults. The dataset contains 98% non-default cases and only 2% default cases. The model predicts 'non-default' for every instance, achieving 98% accuracy on the test set. Which metric would best reveal that the model fails to identify any actual defaults?

80

A data scientist trains a multiclass classification model to identify different species of flowers (Iris setosa, Iris virginica, Iris versicolor). The overall accuracy is 94%, but the accuracy for the Iris virginica class is only 60%. Which additional metric should the data scientist examine to better understand the model's performance on the minority class?

81

A data scientist trains a multiclass classification model to categorize customer support tickets into three types: 'Billing', 'Technical', and 'General'. The dataset contains 80% 'General', 15% 'Billing', and only 5% 'Technical' tickets. Overall accuracy on a test set is 85%, but the model misclassifies most 'Technical' tickets as 'General'. Which metric would best help the data scientist understand the model's poor performance on the 'Technical' class?

82

A data scientist has trained a binary classification model to detect fraudulent credit card transactions. The dataset contains 99.9% legitimate transactions and only 0.1% fraudulent ones. The model predicts all transactions as legitimate, achieving 99.9% accuracy on the test set. However, the business requires the model to actually catch as many fraudulent transactions as possible. Which metric would best reveal the model's failure to identify fraud?

83

A data scientist trains a model to predict house prices using features like number of bedrooms, square footage, and location. The model achieves a mean absolute error (MAE) of $5,000 on the training data but $25,000 on the test data. Which problem is the model most likely experiencing?

84

A data scientist trains a binary classification model to detect fraudulent credit card transactions. The dataset contains 99.5% legitimate transactions and 0.5% fraudulent transactions. The model predicts every transaction as legitimate and achieves 99.5% accuracy on the test set. Which metric would best reveal that the model is failing to identify any fraudulent transactions?

85

A data science team trains a regression model to predict house prices. They evaluate the model using Mean Absolute Error (MAE). After deployment, they notice that the model occasionally produces large errors (e.g., underpredicting a luxury home by $500,000) while most predictions are within $20,000. The business is more concerned about the impact of these large errors than the average small error. Which additional metric should the team use to better capture the penalty for large errors?

86

A bank uses a machine learning model to predict credit card fraud. The model's output is a probability score. The business wants to minimize the number of false positives (legitimate transactions incorrectly flagged as fraud) because these cause customer dissatisfaction. At the same time, they must also catch most fraudulent transactions. Which metric should the bank optimize to balance these two goals?

87

A data scientist trains a regression model to predict energy consumption for a smart building. The model achieves very low error on the training data but performs significantly worse on a held-out validation set. Which technique would most directly address this problem?

88

A data scientist trains a binary classification model to detect a rare disease. The dataset contains 99% negative cases and only 1% positive cases. The model predicts all cases as negative, achieving an accuracy of 99% on the test set. However, the business requires the model to identify as many positive cases as possible. Which metric should the data scientist examine to best reveal that the model is failing to identify any positive cases?

89

An e-commerce company has a dataset of customer purchase histories with no predefined categories. The data analyst wants to identify natural groupings of customers based on their purchasing behavior to target marketing campaigns. Which type of machine learning should the analyst use?

90

A data scientist trains a regression model to predict house prices using features like bedrooms, square footage, and location. The model achieves an R-squared of 0.95 on the test set. However, when deployed to predict prices in a new city with different property characteristics, the predictions are very inaccurate. Which concept best explains this poor performance?

91

A data scientist trains a regression model to predict house prices. The model performs poorly on both the training data and the test data, showing high error in both sets. Which concept best describes this situation?

92

A data scientist is training a binary classification model to detect rare equipment failures from sensor data. The dataset contains 99.5% normal operation readings and only 0.5% failure readings. The model currently predicts all readings as 'normal' and achieves 99.5% accuracy on the test set. The business requires the model to identify at least 80% of actual failures. Which data-level technique should the data scientist use to most directly address the class imbalance?

93

A hospital deploys a machine learning model to screen patients for a rare disease. Only 0.1% of patients actually have the disease. The model correctly identifies most positive cases but also flags many healthy patients as potentially having the disease. The hospital wants to minimize the number of healthy patients who are incorrectly told they might have the disease. Which metric should the model optimize?

94

A data scientist trains a binary classification model to detect fraudulent transactions. The dataset contains only 2% fraudulent transactions. The model achieves 98% overall accuracy, but it fails to detect any fraudulent transactions, classifying all transactions as legitimate. Which metric would most clearly reveal this failure?

95

A city's traffic department wants to predict the number of cars that will cross a particular bridge each day to plan maintenance schedules. The output of the model should be a numerical value representing the estimated traffic count. Which type of machine learning task is this?

96

A data scientist trains a regression model to predict house prices using features like bedrooms, square footage, and location. The model achieves a low error on the training data but performs significantly worse when used to predict prices in a new city with different property characteristics. Which concept best explains this poor performance?

97

A data scientist trains a decision tree model to predict customer churn. The model achieves 99% accuracy on the training data but only 80% on the test data. Which concept best explains this performance difference?

98

A data scientist trains a model to predict the exact number of cars that will cross a bridge each day for maintenance planning. The model uses historical traffic data as input. Which type of machine learning task is this?

99

A data scientist is developing a classification model to detect fraudulent transactions. The dataset is split into training and test sets. The data scientist repeatedly tunes the model's hyperparameters and evaluates performance on the test set until the test accuracy reaches 95%. However, when the model is deployed on new, unseen data, its accuracy drops to 70%. Which concept best explains this performance degradation?

100

A data scientist trains a regression model to predict daily electricity consumption (in kWh) for a commercial building. The business team needs a metric that heavily penalizes large prediction errors (outliers) more than small errors. Which metric should the data scientist report to best meet this requirement?

Watch out for

Common Describe fundamental principles of machine learning on Azure exam traps

  • Accuracy can look excellent on an imbalanced dataset while the model misses every minority-class case; check precision, recall and F1 instead.
  • High accuracy on training data with much lower accuracy on test data signals overfitting, not a good model.
  • Regression predicts a continuous numeric value; classification predicts a category; clustering needs no predefined labels at all.
  • MAE and RMSE are both expressed in the target's units, but RMSE penalises large errors far more heavily.

Frequently asked questions

What does the Describe fundamental principles of machine learning on Azure domain cover on the AI-900 exam?
Machine learning questions usually test which task type (regression, classification or clustering) fits a given scenario, which evaluation metric is appropriate, and how to recognise and address problems such as overfitting and class imbalance.
How many questions are in this domain?
This page lists all 100 Describe fundamental principles of machine learning on Azure questions in the AI-900 question bank. The actual exam draws from this domain proportionally to its weighting in the official exam blueprint.
What is the best way to practise this domain?
Start with a short focused session (10 questions) to identify gaps, then use the interactive practice page to work through explanations. Repeat with a longer session once the weak areas feel solid.
Can I practise only Describe fundamental principles of machine learning on Azure questions?
Yes — the session launcher on this page filters questions to this domain only. Choose any session length or try the interactive practice page for inline explanations.