Microsoft · Free Practice Questions · Last reviewed May 2026

AI-900 Exam Questions and Answers

30 exam-style practice questions organised by domain, each with the correct answer highlighted and a plain-English explanation of why it's right, and why the others are wrong.

60 exam questions
60 min time limit
Pass at 700 / 1000
5 exam domains

Domain 1: Describe Artificial Intelligence workloads and considerations

A bank is developing an AI system to automatically approve personal loans. To ensure the system does not discriminate against any group of applicants, which Microsoft responsible AI principle should the bank primarily focus on?

A

Accountability

B

Inclusiveness

C

Fairness

Fairness is the principle that AI systems should treat all people equitably and avoid bias, making it the correct focus for preventing discrimination in loan approvals.

D

Reliability and Safety

Why: Fairness ensures that AI systems treat all people and groups equitably, avoiding bias and discrimination by design. Accountability means taking responsibility for a system's outcomes, but the question specifically asks about preventing discrimination. Inclusiveness is about empowering and engaging everyone, and Reliability & Safety ensures systems operate consistently under expected conditions; only Fairness directly addresses bias.

A manufacturing company uses an AI system to predict when machines will need maintenance. The system must work correctly under varying factory floor conditions such as temperature changes and noise levels. Which Microsoft responsible AI principle is most directly focused on ensuring the system performs reliably in these different conditions?

A

Fairness

B

Reliability & Safety

This principle directly ensures that AI systems perform consistently and safely across a range of conditions, which matches the requirement for reliable operation in different factory environments.

C

Privacy & Security

D

Inclusiveness

Why: The Reliability & Safety principle ensures AI systems operate reliably, safely, and consistently under expected conditions. In this scenario, the system's ability to function across different factory environments directly relates to reliability and safety. The other principles address different concerns: Fairness is about avoiding discrimination, Privacy & Security protects data, and Inclusiveness ensures the system is usable by diverse users.

A data scientist is training a credit risk model and wants to use Azure Machine Learning's Responsible AI dashboard to identify if the model is biased against a certain demographic group. Which component of the dashboard should they use to evaluate this?

A

Model Interpretability

B

Model Fairness Assessment

This component analyzes model predictions across predefined sensitive groups to identify and measure unfair bias.

C

Error Analysis

D

Data Balance Analysis

Why: The Responsible AI dashboard includes multiple components. The Model Fairness Assessment component specifically evaluates disparities in model predictions across sensitive groups (e.g., gender, ethnicity) to detect bias. Model Interpretability helps explain why predictions are made but does not directly measure bias. Error Analysis identifies data slices where the model makes more errors but does not specifically focus on demographic bias. Data Balance Analysis examines the distribution of features in the dataset; it can flag imbalances that might lead to bias, but it does not evaluate the model's outputs. The correct component for bias evaluation is therefore Model Fairness Assessment.

A healthcare start-up proposes a fully automated AI system to diagnose patients from medical scans without any human doctor review. They claim the system is 99% accurate. According to Microsoft's responsible AI principles, which principle is most directly violated by removing human oversight from this critical decision-making process?

A

Fairness

B

Reliability and safety

C

Transparency

D

Accountability

Accountability demands that AI systems are designed with appropriate human oversight to ensure responsible use and to handle edge cases. Fully automating diagnosis removes human accountability.

Why: Microsoft's responsible AI principle of Accountability stresses that humans should be responsible for AI systems and that human oversight is necessary, especially in high-risk domains like healthcare. Removing doctors from the loop violates this principle. Fairness, reliability, and transparency are also important but not as directly impacted as accountability in this scenario.

A financial services company uses an AI system to recommend personalized investment portfolios. A customer requests an explanation of why a particular investment was recommended. Which Microsoft responsible AI principle is primarily focused on ensuring the company can provide this explanation?

A

Accountability

B

Transparency

Transparency requires that AI systems are understandable and that users can obtain meaningful explanations for decisions, which is exactly what the customer is asking for.

C

Fairness

D

Reliability

Why: Transparency in AI ensures that users can understand how decisions are made. In this scenario, providing an explanation for a recommendation directly aligns with the transparency principle. Accountability holds the organization responsible, fairness avoids bias, and reliability ensures consistent performance, but none specifically address the need for explainability as transparency does.

A healthcare organization is developing an AI system to recommend treatment plans for patients based on their medical history. According to Microsoft's responsible AI principles, which principle is most directly concerned with ensuring that the system protects patients' health data from unauthorized access or misuse?

A

Privacy and security

This principle requires AI systems to respect privacy, store data securely, and protect it from unauthorized access or misuse, which aligns directly with protecting patient data.

B

Transparency

C

Fairness

D

Reliability and safety

Why: The privacy and security principle of responsible AI emphasizes that AI systems should be built with strong safeguards to protect personal and sensitive data. For healthcare applications, patient data is highly confidential, and this principle directly addresses the need for security measures to prevent breaches or unauthorized use. Transparency is about explainability, not data protection. Fairness is about avoiding bias, and reliability is about consistent performance. Therefore, privacy and security is the correct answer.

Want more Describe Artificial Intelligence workloads and considerations practice?

Practice this domain

Domain 2: Describe fundamental principles of machine learning on Azure

A data scientist wants to train a machine learning model to predict the exact market price of a house based on features such as square footage, number of bedrooms, and location. Which type of machine learning task should be used?

A

Classification

B

Regression

Regression predicts a continuous numeric value, which is exactly what is needed for predicting house price.

C

Clustering

D

Anomaly Detection

Why: Regression is used when the target variable is a continuous numeric value, such as price. Classification predicts categories (e.g., 'expensive' or 'cheap'), Clustering groups similar data without labels, and Anomaly Detection identifies unusual data points. Since house price is a continuous number, regression is correct.
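
To make the task type concrete, here is a minimal regression sketch in scikit-learn. The features and prices below are made up purely for illustration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative features: square footage and number of bedrooms (made-up data).
X = np.array([[1400, 3], [1600, 3], [1700, 4], [2100, 4], [2450, 5]])
y = np.array([245_000, 312_000, 279_000, 408_000, 452_000])  # made-up prices

model = LinearRegression().fit(X, y)
# The prediction is a continuous dollar value, not a category.
print(model.predict(np.array([[1800, 3]])))
```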

A data scientist has trained a binary classification model to predict whether an email is spam (positive) or not spam (negative). On a test set, the model correctly identifies 90 out of 100 actual spam emails and 80 out of 100 actual non-spam emails. Which metric shows the proportion of actual spam emails that the model correctly predicted?

A

Precision

B

Recall

Recall = true positives / (true positives + false negatives) = 90 / (90 + 10) = 0.9, exactly the proportion of actual spam correctly identified.

C

F1 Score

D

Accuracy

Why: Recall (also called sensitivity) measures the fraction of positive instances correctly identified. For this model, recall = 90 / (90 + 10) = 0.9. Accuracy is overall correctness, precision measures correctness of positive predictions, and F1 combines precision and recall.
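
To make the arithmetic concrete, here is the confusion-matrix calculation in plain Python; the counts come straight from the question (100 actual spam, 100 actual non-spam):

```python
tp = 90   # spam correctly flagged as spam
fn = 10   # spam missed (predicted not spam)
fp = 20   # non-spam wrongly flagged (100 actual non-spam, 80 correct)
tn = 80   # non-spam correctly passed

recall = tp / (tp + fn)                         # 90 / 100 = 0.9
precision = tp / (tp + fp)                      # 90 / 110 ≈ 0.818
accuracy = (tp + tn) / (tp + tn + fp + fn)      # 170 / 200 = 0.85
f1 = 2 * precision * recall / (precision + recall)  # ≈ 0.857

print(f"recall={recall:.3f} precision={precision:.3f} "
      f"accuracy={accuracy:.3f} f1={f1:.3f}")
```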

A retail company wants to predict which customers are likely to stop using their service. They have a dataset with many customer attributes including age, income, purchase history, website activity, and support interactions. They suspect some features are redundant. Which technique should they use to reduce the number of features while preserving as much information as possible?

A

Normalization

B

Principal Component Analysis (PCA)

PCA summarizes data by creating new uncorrelated variables (principal components) that capture most of the variance, effectively reducing dimensionality.

C

One-hot encoding

D

Regression analysis

Why: Principal Component Analysis (PCA) is an unsupervised dimensionality reduction technique that transforms the original correlated features into a smaller set of uncorrelated principal components, retaining as much variance (information) as possible. Normalization (scaling) does not reduce the number of features; it only changes their scale. One-hot encoding increases the number of features by creating binary columns for categorical values. Regression analysis is a predictive modeling technique, not a feature reduction method. Therefore, PCA is the correct choice.
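
A minimal PCA sketch with scikit-learn; the synthetic data below stands in for the customer attributes and is deliberately redundant (five observed features driven by two underlying behaviours):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Five observed features generated from two underlying behaviours, so some are redundant.
base = rng.normal(size=(500, 2))
X = base @ rng.normal(size=(2, 5)) + rng.normal(scale=0.05, size=(500, 5))

# Scale first: PCA is variance-based, so unscaled features would dominate.
X_scaled = StandardScaler().fit_transform(X)

pca = PCA(n_components=0.95)   # keep enough components for 95% of the variance
X_reduced = pca.fit_transform(X_scaled)

print(X_reduced.shape)                  # (500, 2): five features compressed to two
print(pca.explained_variance_ratio_)   # variance captured by each component
```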

A retail company wants to automatically group its customers into distinct segments based on their purchasing patterns, without having pre-defined categories. The goal is to discover natural groupings in the customer data to tailor marketing campaigns. Which type of machine learning task should the company use?

A

Supervised learning - Classification

B

Unsupervised learning - Clustering

Clustering is an unsupervised learning technique that groups similar data points together based on features, without needing labels. This fits the scenario of discovering natural customer segments from purchasing patterns.

C

Reinforcement learning

D

Supervised learning - Regression

Why: Unsupervised learning is used when the data has no labels and the model must find patterns or groupings on its own. Clustering is a common unsupervised technique to segment customers. Supervised learning requires labeled data, which is not available here. Reinforcement learning is for decision-making in environments with rewards. Regression predicts a numeric value, not groups.
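
A minimal clustering sketch with scikit-learn's k-means; the purchasing features are synthetic stand-ins, and note that no labels are supplied at any point:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Synthetic purchasing features (e.g., order frequency, average basket value)
# drawn around three hidden group centres.
X = np.vstack([rng.normal(loc, 0.5, size=(100, 2))
               for loc in ([0, 0], [4, 1], [2, 5])])

X_scaled = StandardScaler().fit_transform(X)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X_scaled)

print(kmeans.labels_[:10])      # discovered segment for each customer
print(kmeans.cluster_centers_)  # segment profiles in scaled feature space
```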

A hospital has a dataset with historical patient records, each labeled as either 'readmitted within 30 days' or 'not readmitted'. The hospital wants to train a model to predict which current patients are likely to be readmitted. Which type of machine learning task is this?

A

Supervised regression

B

Supervised classification

Classification is used when the target variable is a category, and the data is labeled. Here, the output is one of two classes – readmitted or not readmitted.

C

Unsupervised clustering

D

Reinforcement learning

Why: This is a supervised classification task because the model is trained on a dataset with known binary labels (readmitted or not) to predict categorical outcomes. Regression predicts a continuous numeric value, unsupervised clustering groups unlabeled data, and reinforcement learning learns through rewards/punishments from interactions with an environment.
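
A minimal binary classification sketch in scikit-learn; the patient features and the labelling rule are synthetic stand-ins for the hospital's labeled records:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))            # e.g. age, prior stays, lab values...
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # 1 = readmitted, 0 = not (synthetic rule)

model = LogisticRegression().fit(X, y)
print(model.predict(X[:5]))        # predicted class labels
print(model.predict_proba(X[:5]))  # readmission probabilities per patient
```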

A data scientist trains a machine learning model to predict housing prices. On the training data, the model achieves an R-squared value of 0.99, but on a separate validation dataset it achieves an R-squared of only 0.65. What is the most likely issue with this model?

A

Overfitting

Overfitting occurs when the model learns the training data too well, capturing noise and making it perform poorly on new, unseen data, as shown by the large gap between training and validation performance.

B

Underfitting

C

High bias

D

Insufficient training data

Why: When a model performs exceptionally well on training data but poorly on unseen validation data, it is a classic sign of overfitting. The model has memorized the training set, including noise and irrelevant patterns, instead of learning generalizable relationships. Underfitting would show poor performance on both sets. A high bias would also cause poor training performance. This scenario clearly indicates overfitting.
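
The symptom is easy to reproduce. This sketch (synthetic data, an unconstrained decision tree) shows a near-perfect training R-squared alongside a much lower validation R-squared, mirroring the 0.99 vs 0.65 gap in the question:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))
y = 3 * X.ravel() + rng.normal(0, 5, size=200)   # noisy linear relationship

X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# An unconstrained tree memorizes the training set, noise included.
model = DecisionTreeRegressor(random_state=0).fit(X_train, y_train)

print("train R^2:", r2_score(y_train, model.predict(X_train)))  # ~1.0
print("val   R^2:", r2_score(y_val, model.predict(X_val)))      # much lower
```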

Want more Describe fundamental principles of machine learning on Azure practice?

Practice this domain

Domain 3: Describe features of computer vision workloads on Azure

A transportation company wants to automatically identify whether an image contains a car, a truck, or a motorcycle. The system should output a single label for the entire image. Which computer vision capability in Azure should they use?

A

Object detection

B

Image classification

Image classification assigns one or more labels to the entire image, matching the requirement to identify the type of vehicle shown.

C

Optical Character Recognition (OCR)

D

Semantic segmentation

Why: Image classification assigns a label (or multiple labels) to an entire image. Object detection identifies and locates multiple objects within an image with bounding boxes. OCR extracts text from images. Semantic segmentation classifies each pixel. Since the requirement is to give a single label per image, image classification is correct.

A manufacturing company wants to use Azure AI to detect surface defects on metal parts. The team has a small set of labeled images of defective and non-defective parts, and images will be taken under various lighting conditions and angles. They need a solution that can leverage a pre-trained model and adapt it to their specific defect types with minimal new training data. Which approach should they take?

A

Use Custom Vision to train a classification or object detection model with transfer learning

Custom Vision uses transfer learning from pre-trained models, enabling effective training with a small dataset to detect specific defects.

B

Use the Optical Character Recognition (OCR) API

C

Use the Describe Image API (Image Captioning)

D

Use the Face API

Why: Custom Vision in Azure provides transfer learning, where you start with a pre-trained model and fine-tune it on your own small dataset. This is ideal for specialized tasks like defect detection with limited data. The OCR API extracts text, Describe Image generates captions, and Face API recognizes faces—none are suited for detecting custom defects.

A logistics company receives thousands of handwritten shipping labels each day. They want to use Azure AI to automatically read the handwritten addresses and convert them into digital text. Which Azure Cognitive Services capability should they use?

A

Image classification

B

Optical character recognition (OCR)

OCR extracts text from images, including handwritten content, and is ideal for this scenario.

C

Object detection

D

Face detection

Why: Optical character recognition (OCR) is a computer vision capability that extracts printed or handwritten text from images. It is the appropriate choice for converting handwritten addresses into digital text. The other options serve different purposes: Image classification assigns a label to the entire image, object detection identifies objects, and face detection finds faces.

A logistics warehouse uses a conveyor belt system to move packages. They need to automatically read the alphanumeric serial numbers printed on labels attached to each box. The labels may have different fonts and be somewhat dusty. Which Azure Computer Vision feature should they use?

A

Image Classification

B

Optical Character Recognition (OCR) using the Read API

The Read API extracts text from images and is robust to various fonts and image quality issues. It can return the serial number as a string, making it ideal for this use case.

C

Object Detection

D

Image Analysis (captioning and tagging)

Why: The Azure Computer Vision Read API (OCR) is designed to extract printed and handwritten text from images, including different fonts and less-than-perfect conditions. Image classification labels an entire image with a single category. Object detection locates objects within an image. Image analysis provides descriptive captions but does not extract specific characters.
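
A hedged sketch of the Read API call pattern, following the azure-cognitiveservices-vision-computervision quickstart; the endpoint, key, and image URL are placeholders you would supply from your own resource. Read is asynchronous, so you submit the image and then poll:

```python
import time
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from azure.cognitiveservices.vision.computervision.models import OperationStatusCodes
from msrest.authentication import CognitiveServicesCredentials

client = ComputerVisionClient(
    "https://<your-resource>.cognitiveservices.azure.com/",  # placeholder endpoint
    CognitiveServicesCredentials("<your-key>"),              # placeholder key
)

# Submit the image, then poll the returned operation until it finishes.
submit = client.read("https://example.com/label.jpg", raw=True)
operation_id = submit.headers["Operation-Location"].split("/")[-1]

while True:
    result = client.get_read_result(operation_id)
    if result.status not in (OperationStatusCodes.running,
                             OperationStatusCodes.not_started):
        break
    time.sleep(1)

if result.status == OperationStatusCodes.succeeded:
    for page in result.analyze_result.read_results:
        for line in page.lines:
            print(line.text)   # e.g. the serial number as a string
```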

A retail company wants to build a system that can verify the identity of customers by comparing their live photo with an uploaded government-issued ID photo. Which Azure Computer Vision service should they use to perform the face comparison?

A

Azure Computer Vision - Image Analysis

B

Azure Face API

Face API offers face verification, which checks if a live photo matches a reference photo (e.g., the ID photo) by comparing facial features.

C

Azure Custom Vision

D

Azure Form Recognizer

Why: Azure Face API provides face detection, verification, and identification capabilities. Verification checks if two faces belong to the same person by comparing features. Image Analysis extracts generic visual features but does not compare faces. Custom Vision is used to train custom image classifiers. Form Recognizer extracts text from documents, not faces.
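
A hedged sketch of the verification flow with the azure-cognitiveservices-vision-face package; the endpoint, key, and image URLs are placeholders, and note that Azure Face is a Limited Access service requiring an approved use case:

```python
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials

client = FaceClient(
    "https://<your-resource>.cognitiveservices.azure.com/",  # placeholder endpoint
    CognitiveServicesCredentials("<your-key>"),              # placeholder key
)

# Detect one face in each image to obtain face IDs.
id_face = client.face.detect_with_url("https://example.com/id-photo.jpg")[0]
live_face = client.face.detect_with_url("https://example.com/live-photo.jpg")[0]

# Verification: do the two detected faces belong to the same person?
result = client.face.verify_face_to_face(id_face.face_id, live_face.face_id)
print(result.is_identical, result.confidence)
```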

A retail warehouse uses a camera system to locate and count boxes on shelves. The system needs to output the exact positions of each box by drawing a rectangular frame around it in the image. Which Azure Computer Vision capability should they use?

A

Object detection

Object detection finds objects and returns their bounding boxes, which is precisely what is needed to locate and frame each box in an image.

B

Image classification

C

Semantic segmentation

D

Optical Character Recognition (OCR)

Why: Object detection identifies objects within an image and returns their positions as bounding box coordinates. This exactly matches the requirement of drawing a rectangular frame around each box. Image classification only assigns a label to the whole image, not per-object location. Semantic segmentation labels every pixel and produces masks, not bounding boxes. Optical character recognition (OCR) extracts text, not object locations.

Want more Describe features of computer vision workloads on Azure practice?

Practice this domain

Domain 4: Describe features of Natural Language Processing workloads on Azure

A healthcare organization needs to extract specific data elements (such as patient names, medication dosages, and dates) from unstructured doctors' notes. Which Azure Cognitive Service is best suited for this task?

A

Language Understanding (LUIS)

B

Text Analytics

Text Analytics includes Named Entity Recognition, which can extract predefined categories of entities (e.g., Person, Date, Quantity) from unstructured text, making it ideal for this task.

C

Translator Text

D

Speech

Why: Text Analytics provides Named Entity Recognition (NER) to identify and extract entities like people, dates, and quantities from text. LUIS is for intent and entity extraction from conversational utterances. Translator Text translates between languages. Speech handles audio transcription. The scenario requires extracting structured information from free-text notes, so Text Analytics is the correct service.
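
A hedged sketch of NER with the azure-ai-textanalytics package; the endpoint and key are placeholders, and the note text is invented for illustration:

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",  # placeholder
    credential=AzureKeyCredential("<your-key>"),                      # placeholder
)

docs = ["Patient Jane Doe was prescribed 20 mg of lisinopril on 12 March 2024."]

result = client.recognize_entities(docs)[0]
for entity in result.entities:
    # Expect categories such as Person, Quantity, and DateTime.
    print(entity.text, entity.category, entity.confidence_score)
```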

A hospital wants to create a system that can transcribe doctor-patient conversations in real time and also extract medical conditions, medications, and dosages from the transcribed text. Which combination of Azure AI services should they use?

A

Speech to Text and Text Analytics API (standard)

B

Speech to Text and Text Analytics for Health

Speech to Text provides real-time transcription, and Text Analytics for Health is specifically designed to extract medical concepts from clinical text.

C

Translator Text and Language Understanding (LUIS)

D

Speaker Recognition and Question Answering

Why: The correct combination is Azure Speech to Text for real-time transcription and Azure Text Analytics for Health to extract medical entities. Speech to Text converts audio to text. Text Analytics for Health is a specialized version of the Text Analytics service that understands medical terminology and can extract conditions, medications, dosages, and other clinical information. The standard Text Analytics API does not have healthcare-specific entity recognition. Translator Text and LUIS are for translation and language understanding, not transcription or medical extraction. Speaker Recognition identifies speakers, and Question Answering provides answers from knowledge bases.
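
A hedged sketch of the extraction half of the pipeline (Text Analytics for Health via azure-ai-textanalytics); in practice its input would be the Speech to Text transcript. The endpoint, key, and transcript text are placeholders:

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",  # placeholder
    credential=AzureKeyCredential("<your-key>"),                      # placeholder
)

transcript = ["The patient reports hypertension and takes 10 mg of lisinopril daily."]

# Healthcare analysis is a long-running operation, hence the poller.
poller = client.begin_analyze_healthcare_entities(transcript)
for doc in poller.result():
    for entity in doc.entities:
        # Expect healthcare-specific categories such as Diagnosis,
        # MedicationName, and Dosage.
        print(entity.text, entity.category, entity.confidence_score)
```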

A customer service team wants to build an Azure AI-powered bot that can understand the intent behind customer messages. For example, the bot should recognize that 'I want to return my shoes' maps to a 'ReturnItem' intent, and 'Where is my order?' maps to 'TrackOrder'. Which Azure service provides pre-built models specifically for intent recognition?

A

Language Understanding (LUIS)

LUIS (part of Azure Language service) is designed for intent recognition and entity extraction from conversational utterances. It provides pre-built models for common intents.

B

Text Analytics

C

Translator Text

D

Speech-to-text

Why: Language Understanding (LUIS) is the Azure service specialized in extracting intents and entities from natural language, making it suitable for building a conversational bot with intent recognition. Text Analytics provides sentiment analysis and entity recognition, not intent. Translator Text is for translation. Speech-to-text converts audio to text.

An online news platform receives thousands of articles daily. The editors want to automatically identify the most important topics discussed in each article to help with content categorization. Which Azure Text Analytics capability should they use?

A

Sentiment Analysis

B

Key Phrase Extraction

Key phrase extraction returns a list of the most important phrases or topics in the text. This directly matches the requirement to identify important topics from articles.

C

Named Entity Recognition

D

Language Detection

Why: Key phrase extraction identifies the main points or central topics in a text. Sentiment analysis determines if the text is positive, negative, or neutral. Entity recognition extracts named entities like people, places, or dates. Language detection identifies the language of the text. Only key phrase extraction directly gives the key topics of the article.
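
A hedged sketch of key phrase extraction with azure-ai-textanalytics; endpoint, key, and the sample article text are placeholders:

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",  # placeholder
    credential=AzureKeyCredential("<your-key>"),                      # placeholder
)

article = ["The new electric SUV impressed reviewers with its battery range, "
           "fast charging, and competitive price."]

result = client.extract_key_phrases(article)[0]
print(result.key_phrases)  # e.g. ['battery range', 'fast charging', ...]
```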

A company's HR department wants to create a self-service bot that can answer employee questions about company policies. They have a collection of policy documents in PDF format. Which Azure AI Language feature should they use to ingest these documents and enable the bot to provide answers based on them?

A

Sentiment Analysis

B

Key Phrase Extraction

C

Custom Question Answering

Custom Question Answering allows you to build a knowledge base by ingesting documents (e.g., PDFs) and then answers questions by extracting relevant passages from that knowledge base.

D

Language Detection

Why: Custom Question Answering (part of Azure AI Language) allows you to create a knowledge base from existing documents (PDF, Word, etc.) and answer questions in natural language. Sentiment Analysis detects sentiment, Key Phrase Extraction identifies main topics, and Language Detection identifies the language of text – none of these build a question-answering system from documents.
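
A hedged sketch of querying a deployed knowledge base with the azure-ai-language-questionanswering package; the endpoint, key, project name, and question are placeholders, and the project is assumed to have been built from the PDFs beforehand:

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.language.questionanswering import QuestionAnsweringClient

client = QuestionAnsweringClient(
    "https://<your-resource>.cognitiveservices.azure.com/",  # placeholder endpoint
    AzureKeyCredential("<your-key>"),                        # placeholder key
)

result = client.get_answers(
    question="How many vacation days do new employees get?",
    project_name="<your-project>",     # the knowledge base built from the PDFs
    deployment_name="production",
)
for answer in result.answers:
    print(answer.confidence, answer.answer)
```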

A retail company collects thousands of customer reviews. They want to automatically extract frequently mentioned aspects (e.g., 'battery life', 'customer service', 'price') to understand common topics. Which Azure AI Language capability should they use?

A

Sentiment analysis

B

Key phrase extraction

Key phrase extraction identifies the main topics, opinions, and themes in text, making it ideal for extracting frequently mentioned aspects like product features.

C

Named entity recognition

D

Language detection

Why: Key phrase extraction is the Azure AI Language feature designed to identify and extract the main points or topics from text. In this scenario, it directly addresses the need to find frequently mentioned aspects in reviews. Sentiment analysis measures overall positive or negative tone, not specific aspects. Named entity recognition identifies people, places, and organizations, which may not cover general topics. Language detection identifies the language of the text, not the content themes.

Want more Describe features of Natural Language Processing workloads on Azure practice?

Practice this domain

Domain 5: Describe features of generative AI workloads on Azure

A marketing team wants to use Azure AI to automatically generate unique product descriptions for thousands of items in an e-commerce catalog based on a few keywords provided by the inventory team. Which Azure service should they use?

A

Azure OpenAI Service

Azure OpenAI Service offers powerful generative language models (e.g., GPT-4) that can produce text from prompts, perfectly suited for generating product descriptions from keywords.

B

Azure Computer Vision

C

Language Understanding (LUIS)

D

Azure Machine Learning

Why: Azure OpenAI Service provides access to generative language models like GPT-4 that can produce coherent and varied text from simple prompts, making it ideal for generating product descriptions at scale. Other options: Azure Computer Vision is for image analysis, LUIS is for language understanding, and Azure Machine Learning is a broader platform that would require building a custom model.
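
A hedged sketch of the generation call using the openai Python package's Azure client; the endpoint, key, and deployment name are placeholders from your own resource (Azure addresses the model by deployment name, not model name):

```python
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",  # placeholder
    api_key="<your-key>",                                        # placeholder
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="<your-gpt-4-deployment>",   # your deployment name, not "gpt-4"
    messages=[
        {"role": "system",
         "content": "You write concise e-commerce product descriptions."},
        {"role": "user",
         "content": "Keywords: hiking boots, waterproof, lightweight"},
    ],
)
print(response.choices[0].message.content)
```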

A company is developing a chatbot that can both answer customer questions in natural language and create images on demand (e.g., 'Generate a picture of a product prototype'). Which combination of Azure generative AI models should they integrate?

A

GPT-4 for text and DALL-E for images

GPT-4 handles conversational text, and DALL-E generates images from text prompts, making this the ideal combination for the described chatbot.

B

GPT-3 for text and Custom Vision for images

C

BERT for text and OCR for images

D

Language Understanding (LUIS) and Face API

Why: GPT-4 (via Azure OpenAI Service) excels at natural language understanding and generation, while DALL-E (also via Azure OpenAI Service) generates images from text descriptions. Together they enable the described capabilities. Other combinations involve computer vision tasks (Custom Vision, OCR, Face API) that do not generate images from scratch.
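
For the image half, a hedged sketch of a DALL-E call via the same Azure client; the DALL-E deployment name, endpoint, key, and prompt are placeholders:

```python
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",  # placeholder
    api_key="<your-key>",                                        # placeholder
    api_version="2024-02-01",
)

result = client.images.generate(
    model="<your-dalle-deployment>",   # your DALL-E deployment name
    prompt="A concept sketch of a product prototype: a foldable electric scooter",
    n=1,
)
print(result.data[0].url)  # temporary URL of the generated image
```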

A game development company uses Azure OpenAI Service to automatically generate in-game dialog for non-player characters (NPCs) based on character profiles. They need to ensure the generated text does not contain offensive language or harmful suggestions. Which Azure OpenAI Service feature should they configure to prevent this?

A

Content filters

Azure OpenAI Service includes configurable content filters that can block harmful, offensive, or inappropriate content in generated outputs.

B

Model deployment

C

Token limit

D

Prompt engineering

Why: Content filters in Azure OpenAI Service allow users to set safety thresholds to block harmful content. This is a key feature for responsible deployment of generative AI. Model deployment is about hosting, token limit controls length, prompt engineering guides output but does not guarantee safety.

A company uses Azure OpenAI Service to generate marketing copy for social media posts. They want to prevent the model from producing content that contains offensive language, harmful stereotypes, or violent themes that go against their brand guidelines. Which feature should the company configure within Azure OpenAI Service?

A

Fine-tuning the model with a custom dataset

B

Configuring the content filtering (responsible AI filters)

Azure OpenAI’s content filtering system is a built-in safeguard that automatically screens inputs and outputs for categories like hate, violence, sexual content, and self-harm. Companies can configure severity levels to prevent undesirable content from being generated.

C

Increasing the token limit per response

D

Using prompt engineering techniques

Why: Azure OpenAI Service includes a content filtering system that detects and blocks harmful categories of content such as hate, violence, sexual, and self-harm. Fine-tuning adapts a model to a specific task but does not guarantee blocking undesired outputs. Prompt engineering can reduce harmful outputs by careful phrasing, but is not a reliable safety mechanism alone. Token limits restrict the length of output, not the nature of the content.

A company uses Azure OpenAI Service to power a chat-based support assistant. They have extensive knowledge base documents that contain the correct information. The company wants the assistant to answer questions solely based on the provided documents and avoid generating plausible-sounding but incorrect information. Which approach should they implement to minimize the risk of such fabrications?

A

Retrieval Augmented Generation (RAG) — provide relevant document excerpts as context in the prompt

RAG supplies the model with pertinent excerpts from the documents at query time, grounding the answer in the provided content and significantly reducing hallucinations.

B

Increase the temperature parameter to 1.0 to force more creative responses

C

Fine-tune the model on the knowledge base documents using supervised learning

D

Use prompt engineering with a system message that tells the model to never make up facts

Why: Language models can generate text that is factually incorrect but sounds convincing, known as hallucination. Retrieval Augmented Generation (RAG) gives the model relevant document excerpts as context before it generates an answer, grounding the response in the provided information and greatly reducing hallucinations. Increasing temperature makes output more random, which is counterproductive. Fine-tuning on the documents can help the model learn domain language, but it does not ground individual answers in source text and can still hallucinate on unseen queries. Prompt engineering alone, without knowledge retrieval, cannot guarantee factual accuracy.
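
A minimal RAG sketch: retrieve the most relevant excerpt, then ground the prompt with it. The retriever here is naive keyword overlap purely for illustration; production systems typically use a vector index such as Azure AI Search. All names, documents, and credentials are placeholders:

```python
import re
from openai import AzureOpenAI

documents = {
    "returns": "Items may be returned within 30 days with a receipt.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def tokens(text: str) -> set:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question: str) -> str:
    # Toy retriever: pick the document sharing the most words with the question.
    q = tokens(question)
    return max(documents.values(), key=lambda d: len(q & tokens(d)))

question = "How many days do I have to return items?"
context = retrieve(question)

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",  # placeholder
    api_key="<your-key>",                                        # placeholder
    api_version="2024-02-01",
)
response = client.chat.completions.create(
    model="<your-deployment>",
    messages=[
        {"role": "system",
         "content": f"Answer only from this context; say 'I don't know' otherwise.\n{context}"},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```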

A marketing team uses Azure OpenAI Service to generate multiple variations of a product description from a single prompt. They want the generated descriptions to be more creative and diverse, rather than repetitive. Which parameter should they increase to achieve this?

A

Temperature

Increasing temperature makes the model more likely to choose less likely tokens, leading to more creative and diverse outputs.

B

Max tokens

C

Top probability

D

Frequency penalty

Why: Temperature controls the randomness of the model's output. A higher temperature (e.g., 0.9) makes the output more creative and diverse because lower-probability tokens are more likely to be sampled. Max tokens sets the maximum length of the response, not its diversity. Top P (nucleus sampling) also shapes diversity by restricting sampling to the most probable tokens, but temperature is the more direct control for creativity. Frequency penalty reduces repetition by penalizing tokens that have already appeared, which can help variety but is less direct than temperature.
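
A hedged sketch comparing the same prompt at two temperatures; higher temperature broadens token sampling, so the variations diverge more. Endpoint, key, and deployment name are placeholders:

```python
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",  # placeholder
    api_key="<your-key>",                                        # placeholder
    api_version="2024-02-01",
)

for temperature in (0.2, 1.0):
    response = client.chat.completions.create(
        model="<your-deployment>",
        messages=[{"role": "user",
                   "content": "Write a one-line description of a smart mug."}],
        temperature=temperature,
        n=3,   # three variations per setting
    )
    print(f"--- temperature={temperature} ---")
    for choice in response.choices:
        print(choice.message.content)  # more diverse at the higher setting
```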

Want more Describe features of generative AI workloads on Azure practice?

Practice this domain

Frequently asked questions

How many questions are on the AI-900 exam?

The AI-900 exam has up to 60 questions and must be completed in 60 minutes. The passing score is 700/1000.

What types of questions appear on the AI-900 exam?

The AI-900 exam uses multiple-choice, multiple-select, and drag-and-drop questions. Scenario-based items describe a business situation and ask you to choose the appropriate Azure AI service or responsible AI principle, exactly the format Courseiva uses.

How are AI-900 questions organised by domain?

The exam covers 5 domains: Describe Artificial Intelligence workloads and considerations, Describe fundamental principles of machine learning on Azure, Describe features of computer vision workloads on Azure, Describe features of Natural Language Processing workloads on Azure, Describe features of generative AI workloads on Azure. Questions are weighted by domain — higher-weight domains appear more on your actual exam.

Are these the actual AI-900 exam questions?

No. These are original exam-style practice questions written against the official Microsoft AI-900 exam objectives. They are not copied from the real exam. Courseiva focuses on genuine understanding, not memorisation of braindumps.

Ready to practice all 60 AI-900 questions?

Courseiva tracks your accuracy per domain and routes you toward weak areas automatically. Free, no account required.