AI-900 domain

Describe Artificial Intelligence workloads and considerations

Use this page to practise questions from the AI-900 domain Describe Artificial Intelligence workloads and considerations. The goal is not to memorise dumps, but to understand each concept, review the explanation, and improve your exam readiness.

100 questions

Focused practice

Start a Describe Artificial Intelligence workloads and considerations session

All sessions draw only from this domain. Pick a length or try interactive practice with inline explanations.

Start 20-question practice session →

What the exam tests

What to know about Describe Artificial Intelligence workloads and considerations

Describe Artificial Intelligence workloads and considerations questions test whether you can apply the concept in context, not just recognise a definition. In particular, you should understand:

How the topic appears in realistic exam-style scenarios.

Which detail in the question changes the correct answer.

How to eliminate plausible but wrong options.

How to connect the question back to the wider exam objective.

Question index

All Describe Artificial Intelligence workloads and considerations questions (100)

Click any question to see the full explanation, or start a practice session above.

1

A bank is developing an AI system to automatically approve personal loans. To ensure the system does not discriminate against any group of applicants, which Microsoft responsible AI principle should the bank primarily focus on?

2

A manufacturing company uses an AI system to predict when machines will need maintenance. The system must work correctly under varying factory floor conditions such as temperature changes and noise levels. Which Microsoft responsible AI principle is most directly focused on ensuring the system performs reliably in these different conditions?

3

A data scientist is training a credit risk model and wants to use Azure Machine Learning's Responsible AI dashboard to identify if the model is biased against a certain demographic group. Which component of the dashboard should they use to evaluate this?
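The fairness assessment that the dashboard's fairness component automates boils down to comparing disparity metrics across demographic groups, such as the demographic parity difference. A minimal pure-Python sketch of that underlying calculation (the decisions and group labels here are invented for illustration, not from any real model):

```python
# Toy loan decisions (1 = approved) with an invented sensitive attribute.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

def selection_rates(y_pred, sensitive):
    """Approval (selection) rate per demographic group."""
    rates = {}
    for g in set(sensitive):
        preds = [p for p, s in zip(y_pred, sensitive) if s == g]
        rates[g] = sum(preds) / len(preds)
    return rates

rates = selection_rates(decisions, groups)
# Demographic parity difference: gap between highest and lowest group rate.
dp_diff = max(rates.values()) - min(rates.values())
print(rates, dp_diff)  # group A approved at 0.75, group B at 0.25 -> gap 0.5
```

A large gap like this is the kind of signal the fairness component surfaces, prompting the data scientist to investigate the model and its training data.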

4

A healthcare start-up proposes a fully automated AI system to diagnose patients from medical scans without any human doctor review. They claim the system is 99% accurate. According to Microsoft's responsible AI principles, which principle is most directly violated by removing human oversight from this critical decision-making process?

5

A financial services company uses an AI system to recommend personalized investment portfolios. A customer requests an explanation of why a particular investment was recommended. Which Microsoft responsible AI principle is primarily focused on ensuring the company can provide this explanation?

6

A healthcare organization is developing an AI system to recommend treatment plans for patients based on their medical history. According to Microsoft's responsible AI principles, which principle is most directly concerned with ensuring that the system protects patients' health data from unauthorized access or misuse?

7

A healthcare clinic uses an AI system to triage patients by urgency. The system consistently assigns lower priority to patients presenting with rare symptoms compared to those with common symptoms, even when the rare symptoms indicate a serious condition. The clinic wants to ensure the system treats all patients equitably. According to Microsoft's Responsible AI principles, which principle is most directly relevant to addressing this disparity?

8

A city government deploys an AI system that automatically detects traffic violations (e.g., running red lights) from traffic camera footage. The system triggers fines without immediate human review. According to Microsoft's responsible AI principles, which principle is most directly concerned with ensuring there is human oversight and that the organization can be held liable for the system's decisions?

9

A retail company uses an AI system to predict customer churn based on demographic and behavioral data. The team discovers that the model gives disproportionately higher churn predictions for customers from a particular zip code, even when their behavior is similar to others. Which Microsoft responsible AI principle is most directly relevant to addressing this issue?

10

A global e-commerce company develops a chatbot to assist customers in multiple languages. The chatbot uses text-based responses. To ensure it serves diverse populations fairly, which Microsoft responsible AI principle should they prioritize?

11

A bank deploys an AI system that uses a deep neural network to approve personal loan applications. A customer whose loan was rejected requests a detailed explanation of why the decision was made. The bank's AI team realizes that the model's internal workings are too complex to provide a simple, understandable reason. According to Microsoft's responsible AI principles, which principle is most directly violated by this situation?

12

An e-commerce company deploys an AI-powered robot for warehouse inventory management. The robot uses computer vision to navigate and pick items. In certain lighting conditions, the robot misidentifies empty shelves and attempts to pick items that are not there, causing damage. According to Microsoft's Responsible AI principles, which principle is most directly concerned with ensuring the robot performs correctly and safely under expected conditions?

13

A healthcare research organization publishes an AI system that diagnoses skin conditions from images. In a study, they discover that the model's accuracy is significantly lower for people with darker skin tones compared to those with lighter skin tones. According to Microsoft's Responsible AI principles, which principle most directly requires the organization to disclose this limitation in their documentation?

14

A city deploys an AI system that automatically issues parking fines based on camera images. A citizen disputes a fine, claiming the system misidentified their car. The city cannot provide an explanation of how the system reached its decision because the model is too complex to interpret. Which Microsoft responsible AI principle is most directly violated?

15

A multinational corporation deploys an AI-powered language translation system that performs well for English, Spanish, and French, but has significantly lower accuracy for Swahili and Navajo. The company wants to ensure the system serves all users equitably. Which Microsoft responsible AI principle is most directly relevant to this scenario?

16

A financial company develops an AI system that recommends loan amounts based on historical data. The historical data includes years of discriminatory lending practices against certain minority groups. As a result, the AI system disproportionately denies loans to members of those groups. Which Microsoft responsible AI principle is most directly violated by this scenario?

17

A hospital deploys an AI system to predict patient readmission risk using historical health records. To protect patient privacy, the hospital wants to ensure that individual patients cannot be identified from the data used for training. Which responsible AI principle is most directly relevant to this requirement?

18

A company develops an AI system to screen job applications. The system is intended to be used by candidates who may have visual, hearing, or motor impairments. The company wants to ensure that the interface is accessible to all candidates regardless of disability. Which Microsoft responsible AI principle should they prioritize?

19

A company develops an AI-powered virtual assistant for customer service. To ensure the assistant can be used by people with visual impairments, the team integrates screen reader compatibility. Which Microsoft responsible AI principle is most directly addressed by this action?

20

An autonomous vehicle company uses an AI system for navigation. During testing, the system performs well in sunny weather but fails in snowy conditions because the training data had very few examples of snowy roads. The company decides to deploy the system anyway, hoping it will learn on the road. Which Microsoft responsible AI principle is most directly violated by this decision?

21

A university deploys an AI model to predict which students are at risk of dropping out. The predictions are used to offer targeted support. Students who may be negatively impacted by this prediction have the right to understand how the model arrived at its decision. Which Microsoft responsible AI principle is most directly relevant?

22

A retail company deploys an AI system that analyzes customer purchase history to personalize product recommendations. Without informing customers, the system also uses their names, addresses, and phone numbers to create detailed profiles. A customer advocacy group raises concerns about this practice. Which Microsoft responsible AI principle is most directly violated?

23

A bank deploys an AI system to approve loan applications. The system was trained on historical data that contains systematic biases against certain ethnic groups. Despite awareness of this bias, the bank proceeds with deployment, expecting the system to correct itself over time. Which Microsoft responsible AI principle is most directly violated?

24

A hospital deploys an AI diagnostic system that achieves 95% accuracy overall. However, for patients from a specific minority ethnic group, the accuracy drops to 60%. The hospital decides to continue using the system because the overall accuracy is acceptable. Which Microsoft responsible AI principle is most directly violated by this decision?

25

A bank deploys an AI system to automatically approve or reject loan applications. After six months, an audit reveals that the system approves loans at a significantly lower rate for applicants from a specific ethnic group compared to other groups with similar financial profiles. Which Microsoft responsible AI principle is most directly violated by this outcome?

26

A development team creates an AI chatbot for a hospital website that answers patient queries. The team scripts the AI to always respond with a disclaimer that it is not a substitute for professional medical advice. Additionally, they include a mechanism for users to report inaccurate responses, which are then reviewed by a human team. Which Microsoft responsible AI principle is most directly being implemented by the reporting and human review mechanism?

27

A company deploys an AI system to screen job resumes and rank candidates. The company wants to ensure that candidates can understand how the system arrived at its decisions. Which Microsoft responsible AI principle is most directly addressed by this requirement?

28

A university uses an AI system to screen scholarship applications. The system was trained on historical data that mostly awarded scholarships to students from STEM majors. Consequently, the system consistently gives lower scores to equally qualified students from humanities and arts majors. Which Microsoft responsible AI principle is most directly being violated by this outcome?

29

A company develops an AI system to screen job resumes and rank candidates for interviews. The system is trained on historical hiring data that favored candidates from certain well-known universities. The company decides to deploy the system without any adjustments to address this bias. Which Microsoft responsible AI principle is most directly being violated?

30

A company develops an autonomous vehicle AI system. The system was trained exclusively on data from sunny, dry weather conditions. When the vehicles are deployed in a region that experiences frequent snow and fog, the system fails to correctly identify obstacles, leading to safety risks. Which Microsoft responsible AI principle is most directly violated by this deployment?

31

A healthcare company deploys an AI system to assist doctors in diagnosing skin conditions from images. The system is a deep neural network that does not provide explanations for its predictions. The company implements a process where every AI recommendation is logged, and a medical team reviews any adverse outcomes to determine if the system or a human made an error. The company also clearly assigns responsibility for the system's outputs to a specific clinical oversight committee. Which Microsoft responsible AI principle is most directly being implemented by these actions?

32

A company builds an AI system to filter job applications and rank candidates. The system is trained on historical hiring data. To reduce potential bias, the company removes protected attributes such as gender and ethnicity from the training data. However, after deployment, the system still shows a statistically significant bias against female candidates. Which Microsoft responsible AI principle most directly requires the company to investigate and address this remaining bias, even when protected attributes are removed?

33

A hospital uses an AI system to analyze patient records for research. To protect patient identities, the system should not store or transmit any personally identifiable information (PII) outside the secure network. Which responsible AI principle is most directly addressed by this requirement?

34

A city deploys an AI-powered kiosk to help residents access government services. The kiosk uses a voice interface only, without any text or screen reader support. Which Microsoft responsible AI principle is most directly being ignored?

35

An insurance company uses an AI system to automatically process and approve or reject claims. The system sometimes rejects valid claims because the uploaded documents are in slightly different formats (e.g., PDF vs. scanned images). The company wants to minimize these errors. Which Microsoft responsible AI principle is most directly relevant to addressing this issue?

36

A company develops an AI system to screen job candidates based on their resumes. The system is trained on historical data. Analysis reveals that the model has an adverse impact against female candidates due to a proxy feature (e.g., 'years of continuous employment') that correlates with gender. The team removes the protected attribute 'gender' from the training data but the biased outcome persists. According to Microsoft's responsible AI principles, which additional step should the team take to address this unfairness?

37

A hospital deploys an AI system to assist in diagnosing diseases from medical images. The system is a complex deep learning model that provides a diagnosis without any explanation. Doctors are skeptical and want to understand why the system made a particular recommendation. The hospital decides to deploy the system without providing any interpretability. Which Microsoft responsible AI principle is most directly being violated?

38

A hospital uses an AI system to prioritize patient appointments based on urgency. The system is trained on historical data. The team wants to ensure that the system does not discriminate against patients based on age or disability. Which Microsoft responsible AI principle should most directly guide the design of this system?

39

A company develops an AI system to recommend personalized news articles to users. The system uses collaborative filtering, suggesting articles that similar users have read. Which type of machine learning does this approach primarily rely on?
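Collaborative filtering learns from patterns in user behaviour rather than labelled outcomes, which is the distinction this question is probing. A tiny neighbourhood-based sketch of the idea, using an invented interaction matrix and Jaccard similarity (real systems use far richer similarity measures):

```python
# Toy user-article interaction data: the set of articles each user has read.
interactions = {
    "alice": {"a1", "a2", "a3"},
    "bob":   {"a2", "a3", "a4"},
    "carol": {"a5"},
}

def jaccard(s1, s2):
    """Similarity between two users' reading histories."""
    return len(s1 & s2) / len(s1 | s2) if s1 | s2 else 0.0

def recommend(user, data):
    """Suggest unseen articles read by the most similar other user."""
    seen = data[user]
    best = max((u for u in data if u != user),
               key=lambda u: jaccard(seen, data[u]))
    return sorted(data[best] - seen)

print(recommend("alice", interactions))  # bob is most similar -> ['a4']
```

Note there is no labelled "correct article" during training; the signal comes entirely from co-occurring behaviour across users.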

40

A hospital uses an AI system to prioritize emergency room patients based on severity. The system was trained on historical data that may contain biases against certain demographic groups. The hospital wants to ensure the system does not disproportionately disadvantage any group. According to Microsoft's responsible AI principles, which practice should the hospital implement during the design phase?

41

A retail store wants to use an AI solution to automatically monitor security camera feeds and detect when a shelf is empty or if a person is in a restricted area. Which type of AI workload is best suited for this task?

42

A hospital deploys an AI system to assist doctors in interpreting MRI scans. The system highlights the regions of interest and provides a numeric confidence score for its findings, along with a list of the image features that contributed to the diagnosis. Which responsible AI principle is being applied?

43

A bank is developing an AI system to automatically approve or reject small business loan applications. The bank wants to ensure that the system does not unfairly discriminate against applicants based on their age, gender, or ethnicity. Which Microsoft responsible AI principle should most directly guide the design and evaluation of this system?

44

A bank deploys an AI system to approve personal loans. The system uses a complex deep learning model that produces a decision (approve or reject) without any explanation of why. Loan applicants who are rejected are not given any reason. According to Microsoft's responsible AI principles, which principle is most directly violated by this system?

45

A hospital uses an AI system to analyze patient health records for research. The hospital must ensure that all patient data is stored securely and only authorized personnel can access it. Which Microsoft responsible AI principle is most directly relevant?

46

A self-driving car company develops an AI system that is highly accurate in testing but fails to consistently detect pedestrians during heavy rain. Which Microsoft responsible AI principle is most directly violated?

47

A company wants to implement an AI solution that treats all users fairly regardless of their background. Which Microsoft responsible AI principle does this requirement primarily address?

48

A company is developing an AI system to recommend movies to users. The team wants to ensure that the recommendations do not discriminate based on gender or ethnicity. Which Microsoft responsible AI principle is most directly related to this goal?

49

A global e-commerce company is designing an AI-powered chatbot to assist customers. They want to ensure the chatbot can be used by people with diverse abilities, including those who use screen readers or speak different languages. Which Microsoft responsible AI principle is most directly related to this requirement?

50

A healthcare company develops an AI system to recommend treatment plans. The system sometimes provides recommendations that contradict standard medical guidelines, leading to potential patient harm. Which Microsoft responsible AI principle is most directly violated?

51

A financial institution uses an AI model to approve loan applications. A customer is denied a loan and requests an explanation for the decision. The development team cannot explain how the model reached its conclusion because the model is a deep neural network with complex layers. Which Microsoft responsible AI principle is being violated in this scenario?

52

A healthcare organization plans to use AI to analyze patient records for medical research. They must ensure that patient data is protected from unauthorized access during storage and processing. Which Microsoft responsible AI principle is most directly relevant to this requirement?

53

A hospital is developing an AI system to assist doctors in diagnosing diseases from medical images. The system's predictions can influence patient treatment. Which Microsoft responsible AI principle is most important to ensure the system's decisions are accurate and reliable?

54

A bank is developing an AI system to automatically approve or reject small personal loans. To ensure the system treats applicants fairly regardless of race, gender, or age, which Microsoft responsible AI principle is most directly relevant?

55

A healthcare organization deploys an AI system that analyzes patient genetic data to recommend personalized treatments. To ensure patient data is protected from unauthorized access during use, which Microsoft responsible AI principle is most directly relevant?

56

A city government implements an AI system to analyze traffic camera feeds and predict congestion. The system is found to be less accurate for neighborhoods with lower-income populations because historical traffic data from those areas is sparse. Which Microsoft responsible AI principle is most directly relevant to address this issue?

57

A company deploys an AI system to screen job applications. The system is a complex neural network that learns patterns from historical hiring data. A rejected candidate asks for an explanation, but the development team cannot describe how the decision was reached. Which Microsoft responsible AI principle is most directly violated?

58

A financial institution uses an AI model to assess creditworthiness for loan applications. After deployment, they discover that the model assigns higher risk scores to applicants from certain postal codes, which are predominantly low-income minority neighborhoods. The model's predictions are accurate according to historical data, but the bank is concerned about ethical implications. Which Microsoft responsible AI principle is most directly applicable to addressing this issue?

59

A social media company uses an AI system to automatically filter hate speech. After deployment, they discover the system flags posts from a specific ethnic group at a much higher rate than posts from other groups, even when the content is similar. Which Microsoft responsible AI principle is most directly relevant?

60

A city government is planning to deploy an AI system that analyzes security camera footage to detect potential crimes in real-time. Citizens express concerns about privacy and potential misuse. Which Microsoft responsible AI principle should the government prioritize to address these concerns?

61

A hospital deploys an AI system to recommend treatment plans for patients. After deployment, the system is found to have significantly lower accuracy for patients from certain racial and ethnic groups because historical medical data for those groups is sparse. Which Microsoft responsible AI principle should the hospital prioritize to address this issue?

62

A research organization is developing an AI system to assist with medical diagnosis. They want to ensure that if the system makes an error, there is a clear process for auditing and determining responsibility. Which Microsoft responsible AI principle is most relevant?

63

A bank uses an AI system to approve loan applications. The bank wants to ensure that applicants can understand why a loan was approved or rejected. Which Microsoft responsible AI principle is most directly relevant to this requirement?

64

A hospital uses an AI system to recommend patient treatment plans. A doctor questions why the system recommended a specific treatment for a particular patient. Which Microsoft responsible AI principle is most directly relevant to providing the answer?

65

A company develops an AI system to predict employee performance based on work habits. The system uses complex neural networks and its decisions are not easily interpretable. The company wants to ensure that employees can understand why a particular performance prediction was made. Which Microsoft responsible AI principle is most directly relevant?

66

A company plans to use an AI system to analyze employee email communications to identify patterns and improve productivity. The company is concerned about respecting employee boundaries and legal regulations. Which Microsoft responsible AI principle is most important to consider?

67

A corporation deploys an AI system that uses a deep neural network to recommend candidate profiles for job openings. The hiring managers cannot understand why a particular candidate was recommended or not. Which Microsoft responsible AI principle is most directly relevant?

68

A startup develops an AI system that uses images of skin lesions to diagnose skin cancer. The model is trained exclusively on images from dermatology clinics in North America, which primarily feature lighter skin tones. When the system is deployed globally via a mobile app, it shows high accuracy for lighter skin tones but significantly lower accuracy for darker skin tones. Which Microsoft responsible AI principle is most directly violated?

69

A company deploys an AI chatbot on its website to answer customer questions. The company wants to be transparent about the nature of the interaction. Which Microsoft responsible AI principle is most directly relevant to ensuring users know they are communicating with an AI and not a human?

70

An autonomous drone delivery company uses an AI model to navigate. During testing in a new city, the model fails to detect power lines and crashes into them. The company wants to ensure their system is robust to unusual conditions. Which Microsoft responsible AI principle is most directly relevant?

71

A self-driving car company tests its AI navigation system in a new city. The system fails to detect a temporary construction barrier and causes a collision. The company wants to ensure that their AI system is robust to unexpected and unusual environmental conditions. Which Microsoft responsible AI principle is most directly relevant to this requirement?

72

A company is developing an AI voice assistant for children. The assistant must respond with age-appropriate language and avoid providing any harmful instructions. Which Microsoft responsible AI principle is most directly relevant to ensuring the system operates safely?

73

A healthcare organization deploys an AI diagnostic system that was trained primarily on data from patients in one geographic region. When used in other regions with different demographics, the system shows significantly lower accuracy for those populations. Which Microsoft responsible AI principle is most directly violated?

74

A hospital uses an AI system to analyze patient records and provide treatment recommendations. They want to ensure that individual patients cannot be re-identified from the data used to train the model. Which Microsoft responsible AI principle is most directly relevant to this requirement?

75

A bank deploys an AI system that uses a complex deep learning model to approve or reject loan applications. When a loan is rejected, customers demand to know the specific reasons. The bank wants to ensure the AI system operates in a way that allows them to explain its decisions. Which Microsoft responsible AI principle is most directly relevant to this requirement?

76

A company develops an AI system that screens job applications to recommend candidates for interviews. The system consistently recommends male candidates over equally qualified female candidates. Which Microsoft responsible AI principle is most directly violated?

77

A company deploys an AI-powered voice assistant that only supports English. The assistant is used in a country where the official languages are English, French, and Dutch. Many users who speak French or Dutch cannot use the assistant effectively. Which Microsoft responsible AI principle is most directly relevant to this situation?

78

A healthcare organization uses an AI system to predict patient readmission risk. The model was trained on data from a single hospital with a predominantly elderly population. When deployed to a different hospital with a younger demographic, the model's accuracy drops significantly. Which responsible AI principle is most directly violated?

79

A building management company develops an AI system that uses temperature and humidity sensors to automatically adjust the HVAC system. They want to ensure that the system does not inadvertently cause uncomfortable temperature swings for occupants. Which Microsoft responsible AI principle is most directly relevant to this requirement?

80

A company implements an AI system to monitor employee productivity by tracking keystrokes and mouse movements. Employees are not informed that this monitoring is taking place, nor did they consent to it. Which Microsoft responsible AI principle is most directly violated?

81

A company deploys an AI system to screen job applications and recommend candidates for interviews. The system consistently rates male candidates higher than equally qualified female candidates. Which Microsoft responsible AI principle is most directly violated?

82

A large company deploys an AI system to screen job applications and recommend candidates for interviews. After six months, an audit reveals that the system recommends candidates from certain ethnic groups at a much lower rate than others, even when those candidates have similar qualifications. Which Microsoft responsible AI principle is most directly violated?

83

A financial institution uses an AI system to recommend credit limits for new customers. When a customer is declined for a credit limit increase, the customer asks why, but the institution cannot provide any explanation because the model is a complex deep neural network and the decision-making process is opaque. Which Microsoft responsible AI principle is most directly violated?

84

A hospital is deploying an AI system that recommends treatment plans based on patient data. The chief medical officer insists that doctors must be able to understand why the AI recommended a specific treatment. Which Microsoft responsible AI principle is most directly relevant to this requirement?

85

A bank deploys an AI system to approve personal loan applications. After six months, an audit reveals that applicants from certain postal codes receive significantly lower approval rates than applicants from other postal codes, even when their income and credit scores are comparable. Which Microsoft responsible AI principle is most directly violated by this outcome?

86

A manufacturing company deploys an AI system to predict equipment failures from sensor data. They need to ensure the system continues to function correctly even if some sensors malfunction or provide noisy data. Which responsible AI principle is most directly relevant?

87

A retail company develops an AI system that recommends products to customers based on their purchase history. They want to ensure that the recommendations are not biased against any demographic group. Which Microsoft responsible AI principle is most directly relevant?

88

A healthcare research organization uses an AI system to analyze patient medical records for pattern discovery. The organization must ensure that the AI system does not expose individual patient identities when reporting results. Which Microsoft responsible AI principle is most directly relevant?

89

A company deploys an AI system to screen job resumes. The system consistently rejects candidates from a certain university, but the company cannot determine which features led to the decision or how the model arrived at that outcome. Which Microsoft responsible AI principle is most directly violated?

90

A city council deploys an AI system to analyze surveillance footage and automatically issue traffic violation fines. They want to ensure the system does not disproportionately target one type of vehicle (e.g., bicycles over cars) when issuing fines. Which Microsoft responsible AI principle is most directly relevant?

91

A financial services company uses an AI system to detect fraudulent credit card transactions. After deployment, the system incorrectly flags a significant number of legitimate transactions as fraudulent, causing customer dissatisfaction. The company wants to reduce these false positives while still catching most fraudulent transactions. Which Microsoft responsible AI principle should guide their redesign of the system?

92

A company uses an AI system to help screen job applications. The system ranks candidates based on their resumes. The company wants to ensure that if a candidate asks why they were not selected, the company can provide a clear explanation of the factors that influenced the AI's decision. Which Microsoft responsible AI principle is most directly relevant?

93

A bank uses an AI system to approve personal loans. Some customers whose loans were denied have asked for an explanation of why their application was rejected. Which Microsoft responsible AI principle requires the bank to provide these explanations?

94

A hospital deploys an AI system to assist with diagnosing diseases from medical images. A doctor disagrees with the system's diagnosis and overrules it. The hospital wants to document this interaction for legal and audit purposes. Which Microsoft responsible AI principle is most directly relevant?

95

A medical research organization uses an AI system to analyze patient health records to identify patterns in disease progression. They publish a research paper that includes tables of aggregated statistics derived from the data. Later, a researcher discovers that by combining multiple statistics, it is possible to identify individual patients. Which Microsoft responsible AI principle has been most directly compromised?

96

A company uses an AI system to automatically generate personalized email subject lines for marketing campaigns. The system has been trained on historical data that includes biased language patterns. The company wants to ensure the generated subject lines do not reinforce stereotypes based on gender, age, or ethnicity. Which Microsoft responsible AI principle should guide the selection and filtering of training data?

97

A hospital deploys an AI system that predicts patient readmission risk within 30 days of discharge. The model uses features such as age, medical history, and treatment plans. The hospital discovers that the model has a significantly higher false positive rate for patients of a certain ethnic group compared to others, even though the model's overall accuracy is similar across groups. This disparity was not intentional. Which Microsoft responsible AI principle is most directly compromised?

98

An autonomous delivery robot uses AI to navigate sidewalks. The robot occasionally fails to detect pedestrians in low-light conditions, leading to near-collisions. The company wants to ensure the system is robust and safe before wider deployment. Which Microsoft responsible AI principle is most directly relevant?

99

A bank uses an AI system to approve or deny personal loan applications. Several customers whose loans were denied have asked for an explanation of why their application was rejected. Which Microsoft responsible AI principle requires the bank to provide understandable reasons for the AI's decision?

100

A hospital uses an AI system to recommend treatment plans for patients. The system's decision process is complex and not easily understood by doctors. The hospital wants to ensure that doctors can trust and verify the system's recommendations. Which Microsoft responsible AI principle is most directly relevant?

Watch out for

Common Describe Artificial Intelligence workloads and considerations exam traps

  • Answering from memory before reading the full scenario.
  • Missing a constraint in the scenario, such as cost, availability, security or scope.
  • Choosing a broad answer when the question asks for the most specific fix.
  • Failing to consider why the wrong options are tempting.

Frequently asked questions

What does the Describe Artificial Intelligence workloads and considerations domain cover on the AI-900 exam?
Describe Artificial Intelligence workloads and considerations questions test whether you can apply the concept in context, not just recognise a definition.
How many questions are in this domain?
This page lists all 100 Describe Artificial Intelligence workloads and considerations questions in the AI-900 question bank. The actual exam draws from this domain proportionally to its weighting in the official exam blueprint.
What is the best way to practise this domain?
Start with a short focused session (10 questions) to identify gaps, then use the interactive practice page to work through explanations. Repeat with a longer session once the weak areas feel solid.
Can I practise only Describe Artificial Intelligence workloads and considerations questions?
Yes: the session launcher on this page filters questions to this domain only. Choose any session length or try the interactive practice page for inline explanations.