Hard · Multiple choice · Objective-mapped

A data scientist trains a binary classification model to detect fraudulent transactions. The dataset contains only 2% fraudulent transactions. The model achieves 98% overall accuracy, but it fails to detect any fraudulent transactions, classifying all transactions as legitimate. Which metric would most clearly reveal this failure?

Answer choices

Why each option matters

Good practice is not just finding the correct option. The wrong answers often show the exact trap the exam wants you to fall into.

A

Distractor review

Precision

Precision is undefined (0/0) here because the model predicts no positive cases at all; libraries typically report it as 0, but recall more directly shows the inability to find fraud.

B

Best answer

Recall

Recall (true positive rate) is 0 when no fraudulent transactions are identified, exposing the model's failure.

C

Distractor review

F1 score

F1 score is the harmonic mean of precision and recall; it would be 0 but does not pinpoint the issue as clearly as recall alone.

D

Distractor review

Specificity

Specificity measures the true negative rate, which would be a perfect 100% here because every legitimate transaction is classified correctly, so it completely hides the failure to detect fraud.
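
To see how these four metrics behave, here is a minimal sketch that scores the "always legitimate" model by hand. The counts (1,000 transactions, 20 fraudulent) are made up for illustration; any 2% fraud rate gives the same picture.

    # Confusion-matrix counts for a model that labels every transaction "legitimate",
    # assuming a hypothetical batch of 1,000 transactions with a 2% fraud rate.
    tp = 0    # fraud correctly flagged
    fn = 20   # fraud missed (all of it)
    fp = 0    # legitimate wrongly flagged as fraud
    tn = 980  # legitimate correctly passed

    accuracy = (tp + tn) / (tp + tn + fp + fn)          # 0.98 -- looks great
    recall = tp / (tp + fn) if (tp + fn) else 0.0       # 0.0  -- exposes the failure
    precision = tp / (tp + fp) if (tp + fp) else None   # undefined: no positive predictions
    specificity = tn / (tn + fp) if (tn + fp) else 0.0  # 1.0  -- hides the failure
    f1 = (2 * precision * recall / (precision + recall)
          if precision and (precision + recall) else 0.0)

    print(f"accuracy={accuracy:.2f} recall={recall:.2f} "
          f"precision={precision} specificity={specificity:.2f} f1={f1:.2f}")

Only recall (and the metrics built on it) collapses to zero, which is exactly what the question is probing.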

Common exam trap

Common exam trap: high accuracy does not mean the model works

Imbalanced-dataset questions often tempt you into trusting overall accuracy. When only 2% of transactions are fraudulent, a model that predicts "legitimate" for everything still scores 98% accuracy while catching zero fraud.

Technical deep dive

How to think about this question

Classification-metric questions test whether you can map a scenario onto the confusion matrix: decide what counts as a positive (fraud), work out which cells the failure lands in, and pick the metric that summarises those cells. Slow down enough to do that before looking at the options.

Key Concepts to Remember

  • Accuracy = (TP + TN) / all predictions; it is dominated by the majority class.
  • Recall (sensitivity, true positive rate) = TP / (TP + FN); it is 0 when no positives are found.
  • Precision = TP / (TP + FP); it is undefined when no positives are predicted.
  • Specificity (true negative rate) = TN / (TN + FP); it stays high when the model favours the negative class.

Exam Day Tips

  • Write out the confusion matrix before choosing a metric (see the sketch after this list).
  • Check whether the question cares about missed positives (recall) or false alarms (precision).
  • Do not let a high overall accuracy figure distract you on imbalanced datasets.
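
If you want to check a model the same way in practice, this is a rough sketch using scikit-learn (assumptions: the library is installed, and the data here is synthetic rather than taken from the question).

    import numpy as np
    from sklearn.metrics import confusion_matrix, recall_score

    # Hypothetical ground truth: 1,000 transactions, 2% fraudulent (label 1).
    y_true = np.array([1] * 20 + [0] * 980)
    # The failing model from the question predicts "legitimate" (0) for everything.
    y_pred = np.zeros_like(y_true)

    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    print(tn, fp, fn, tp)                           # 980 0 20 0
    print("recall:", recall_score(y_true, y_pred))  # 0.0 -- the metric that reveals the failure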

Related practice questions

Related AI-900 practice-question pages

Use these pages to review the topic behind this question. This is how one missed question becomes focused revision.

More questions from this exam

Keep practising from the same exam bank, or move into a focused topic page if this question exposed a weak area.

Question 1

A developer wants to build a virtual assistant that can understand user intents such as 'Book a flight' or 'Check weather' and extract relevant entities like destination and date. The developer has a small set of labeled example utterances. Which Azure AI Language feature should the developer use?

Question 2

A developer is building a customer support chatbot using Azure OpenAI. The chatbot should never reveal its system instructions or internal configuration. The developer wants to add a rule at the beginning of the conversation to prevent prompt injection attacks. Which technique should they use?

Question 3

A developer is using Azure OpenAI Service to generate product descriptions from technical specifications. The generated descriptions sometimes include plausible-sounding but incorrect details (hallucinations). The developer wants to ensure the model's responses are strictly based on the provided product data and does not add any external or invented information. Which approach should the developer use?

Question 4

A developer is using Azure OpenAI with GPT-4 to build a chatbot that answers legal questions based on a company's internal policy documents. The developer wants the model's responses to be maximally deterministic and factual, avoiding any creative or speculative language. Which parameter should the developer set to the lowest possible value in the API call?

Question 5

A developer is using Azure OpenAI to generate creative product descriptions. The outputs are often repetitive and lack variety. The developer wants to increase the diversity of the generated text while still keeping it coherent. Which parameter should the developer increase?

Question 6

A developer is using Azure OpenAI Service to generate product descriptions. They want the output to be highly focused and deterministic, with less randomness. Which parameter should they decrease?

FAQ

Questions learners often ask

What does this AI-900 question test?

It tests whether you can evaluate a classification model on an imbalanced dataset and recognise that recall, not overall accuracy, exposes a model that never predicts the positive (fraud) class.

What is the correct answer to this question?

The correct answer is: Recall — Overall accuracy is misleading when classes are imbalanced. Recall (also called sensitivity or true positive rate) measures how many actual positive (fraudulent) cases were correctly identified. If the model fails to detect any fraudulent transactions, recall will be 0, clearly showing the problem. Precision would be undefined (0/0) because no fraud is predicted, and the F1 score would also collapse to 0. Specificity measures the true negative rate and would be a perfect 100% here, masking the failure to detect fraud. Therefore, recall is the metric that best reveals the failure.
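
One small caveat implied by the explanation above: because precision is 0/0 here, metric libraries have to pick a convention. As a hedged example, scikit-learn (assuming that is the library in use) warns by default and lets you choose the fallback with the zero_division argument.

    from sklearn.metrics import precision_score

    y_true = [1] * 20 + [0] * 980
    y_pred = [0] * 1000  # no positive predictions at all

    # Precision is TP / (TP + FP) = 0 / 0; zero_division picks the reported value.
    print(precision_score(y_true, y_pred, zero_division=0))  # 0.0, with no warning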

What should I do if I get this AI-900 question wrong?

Review why recall is the right metric here, then try more questions from the same exam bank and focus on understanding why the wrong options are tempting.
