A data scientist trains a binary classification model to detect a rare disease. The dataset contains 99% negative cases and only 1% positive cases. The model predicts all cases as negative, achieving an accuracy of 99% on the test set. However, the business requires the model to identify as many positive cases as possible. Which metric should the data scientist examine to best reveal that the model is failing to identify any positive cases?
Answer choices
Why each option matters
Good practice is not just finding the correct option. The wrong answers often show the exact trap the exam wants you to fall into.
Distractor review
Precision
Precision is the proportion of positive predictions that are actually positive. Since the model made no positive predictions, precision is undefined (0/0), which does not clearly communicate the failure to detect positives.
Best answer
Recall
Recall is the proportion of actual positives correctly predicted. With no positive predictions, recall is 0%, immediately showing the model misses all positive cases.
Distractor review
F1 score
F1 score is the harmonic mean of precision and recall. Since both precision and recall are 0 (or undefined), F1 is 0, but recall alone more directly indicates the model's inability to catch positives.
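The behaviour of these three metrics on the scenario in the question can be checked with a short numeric sketch (toy data assumed: 1,000 cases, 1% positive, every case predicted negative):

```python
# Assumed toy data: 1,000 cases, 10 positive, model predicts all negative.
y_true = [1] * 10 + [0] * 990
y_pred = [0] * 1000

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives: 0
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives: 0
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives: 10
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # true negatives: 990

accuracy = (tp + tn) / len(y_true)                 # 0.99 — looks impressive
recall = tp / (tp + fn) if (tp + fn) else 0.0      # 0.0 — the model finds no positives
precision = tp / (tp + fp) if (tp + fp) else None  # undefined: no positive predictions
# Guard against division by zero when recall is 0
f1 = 0.0 if not recall else 2 * precision * recall / (precision + recall)

print(accuracy, recall, precision, f1)  # 0.99 0.0 None 0.0
```

Accuracy rewards the majority-class shortcut; recall is the number that immediately exposes it.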
Distractor review
AUC-ROC
AUC-ROC measures the model's ability to distinguish between classes. A model that always predicts negative has an AUC of 0.5, indicating no discriminative ability, but this metric does not directly reveal that no positives are being caught.
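The 0.5 figure can be verified directly. AUC equals the probability that a randomly chosen positive receives a higher score than a randomly chosen negative, with ties counting half; a degenerate classifier that scores everyone identically therefore lands exactly at 0.5. A minimal sketch with assumed constant scores:

```python
# Assumed toy scores: a degenerate classifier outputs the same score for every case.
pos_scores = [0.5] * 10   # scores for the 10 actual positives
neg_scores = [0.5] * 90   # scores for 90 actual negatives

# AUC = P(positive outranks negative), counting ties as half a win.
wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
auc = wins / (len(pos_scores) * len(neg_scores))
print(auc)  # 0.5
```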
Common exam trap
Common exam trap: accuracy hides class imbalance
Accuracy is not only about the overall number of correct predictions. On a 99:1 dataset, a model that predicts only the majority class scores 99% accuracy while being useless for the minority class the business actually cares about.
Technical deep dive
How to think about this question
Metric questions usually test whether you can match the evaluation metric to the business goal. Read the class distribution and the stated objective carefully: "identify as many positive cases as possible" is the language of recall, while "make sure flagged cases are really positive" would point to precision.
Key Concepts to Remember
- Accuracy is the proportion of all predictions that are correct; it is dominated by the majority class.
- Precision = TP / (TP + FP): of the predicted positives, how many are truly positive.
- Recall = TP / (TP + FN): of the actual positives, how many were found.
- F1 is the harmonic mean of precision and recall and balances the two.
Exam Day Tips
- Identify the class balance and the business objective first.
- Check whether the scenario prioritises finding positives (recall) or avoiding false alarms (precision).
- Do not let a high accuracy figure distract you from minority-class performance.
Related practice questions
Related AI-900 practice-question pages
Use these pages to review the topic behind this question. This is how one missed question becomes focused revision.
More questions from this exam
Keep practising from the same exam bank, or move into a focused topic page if this question exposed a weak area.
Question 1
A developer wants to build a virtual assistant that can understand user intents such as 'Book a flight' or 'Check weather' and extract relevant entities like destination and date. The developer has a small set of labeled example utterances. Which Azure AI Language feature should the developer use?
Question 2
A developer is building a customer support chatbot using Azure OpenAI. The chatbot should never reveal its system instructions or internal configuration. The developer wants to add a rule at the beginning of the conversation to prevent prompt injection attacks. Which technique should they use?
Question 3
A developer is using Azure OpenAI Service to generate product descriptions from technical specifications. The generated descriptions sometimes include plausible-sounding but incorrect details (hallucinations). The developer wants to ensure the model's responses are strictly based on the provided product data and do not add any external or invented information. Which approach should the developer use?
Question 4
A developer is using Azure OpenAI with GPT-4 to build a chatbot that answers legal questions based on a company's internal policy documents. The developer wants the model's responses to be maximally deterministic and factual, avoiding any creative or speculative language. Which parameter should the developer set to the lowest possible value in the API call?
Question 5
A developer is using Azure OpenAI to generate creative product descriptions. The outputs are often repetitive and lack variety. The developer wants to increase the diversity of the generated text while still keeping it coherent. Which parameter should the developer increase?
Question 6
A developer is using Azure OpenAI Service to generate product descriptions. They want the output to be highly focused and deterministic, with less randomness. Which parameter should they decrease?
FAQ
Questions learners often ask
What does this AI-900 question test?
It tests understanding of evaluation metrics for imbalanced binary classification: why accuracy is misleading when one class dominates, and why recall directly exposes a model that never predicts the positive class.
What is the correct answer to this question?
The correct answer is: Recall — Accuracy is misleading because it is dominated by the majority class. Recall (also called sensitivity or true positive rate) measures the proportion of actual positives correctly identified. In this case, recall is 0%, directly showing the model fails to catch any positive cases. Precision would be undefined because no positive predictions were made. F1 is also 0 but recall provides a more direct and intuitive measure of the model's failure to detect positives. AUC-ROC near 0.5 indicates random performance but does not pinpoint the problem as clearly as recall.
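As a follow-up to this answer: recall is threshold-dependent, so a model that outputs probabilities can catch more positives by lowering its decision threshold, at the cost of precision. A minimal sketch with hypothetical predicted probabilities (all values below are assumed for illustration):

```python
# Hypothetical predicted probabilities and true labels for six cases.
probs  = [0.9, 0.4, 0.35, 0.2, 0.1, 0.05]
labels = [1,   1,   0,    1,   0,   0]

def recall_at(threshold):
    """Recall when a case is flagged positive at or above the given threshold."""
    preds = [1 if p >= threshold else 0 for p in probs]
    tp = sum(1 for y, yh in zip(labels, preds) if y == 1 and yh == 1)
    fn = sum(1 for y, yh in zip(labels, preds) if y == 1 and yh == 0)
    return tp / (tp + fn)

print(recall_at(0.5))   # 1 of 3 positives found
print(recall_at(0.3))   # 2 of 3
print(recall_at(0.15))  # all 3 — lower threshold, higher recall
```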
What should I do if I get this AI-900 question wrong?
Review why recall is the metric that exposes the failure, then try more questions from the same exam bank and focus on understanding why the wrong options are tempting.