Question 1 (hard · multiple choice · objective-mapped)

A company uses a generative AI model to create blog posts. They want to ensure that the model's output never contains offensive or harmful language before the content is published. They implement a system that checks the generated text against a list of prohibited terms and blocks or edits the content if necessary. Which type of safety measure is this?

Answer choices

Why each option matters

Good practice means more than finding the correct option. The wrong answers often reveal the exact trap the exam wants you to fall into.

A

Distractor review

Pre-training data cleaning

Pre-training data cleaning removes harmful examples from the training data, but it does not filter content generated by the already-trained model during inference.

B

Distractor review

Prompt engineering with safety instructions

Prompt engineering instructs the model to avoid harmful content, but the model may still occasionally generate such content, and it does not guarantee post-generation filtering.
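To make the contrast concrete, here is a minimal sketch of what "prompt engineering with safety instructions" looks like in practice: the safety rule lives in the system message sent to the model, so it shapes generation but cannot inspect or block the output afterwards. The message wording is illustrative, not tied to any specific API.

```python
# Hypothetical chat request: the safety instruction is part of the prompt.
# The model is asked to avoid harmful language, but nothing here checks
# the generated text after the fact - that is what distinguishes this
# approach from post-processing content filtering.
messages = [
    {
        "role": "system",
        "content": (
            "You write company blog posts. Never include offensive or "
            "harmful language. If a request would require it, decline."
        ),
    },
    {"role": "user", "content": "Write a blog post about our new product."},
]
```

Because the instruction only influences generation, a determined prompt injection or an unlucky sample can still produce harmful text, which is why this option does not satisfy the scenario's "never published" requirement on its own.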

C

Best answer

Post-processing content filtering

Post-processing content filtering checks the generated text after it is produced and applies rules or classifiers to block or modify offensive content before it is published.
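A minimal sketch of the mechanism described in the scenario, assuming a simple prohibited-terms list; a production system would more likely use a managed service (such as Azure AI Content Safety) or a trained classifier rather than plain keyword matching. The term list and function name here are invented for illustration.

```python
import re

# Hypothetical prohibited-terms list for illustration only.
PROHIBITED_TERMS = ["badword1", "badword2"]

def filter_output(text: str) -> tuple[bool, str]:
    """Post-processing filter: scan generated text for prohibited terms.

    Returns (allowed, possibly_edited_text). Matched terms are redacted;
    a caller could instead block the whole post when allowed is False.
    """
    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, PROHIBITED_TERMS)) + r")\b",
        re.IGNORECASE,
    )
    edited, num_matches = pattern.subn("[removed]", text)
    return num_matches == 0, edited

allowed, cleaned = filter_output("This post contains badword1.")
# allowed is False; cleaned has the term redacted before publication.
```

The key property, and the reason this is the best answer, is that the check runs on the model's actual output after generation, so it applies to every response regardless of how the text was produced.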

D

Distractor review

Model fine-tuning on safe examples

Fine-tuning the model on safe examples reduces the likelihood of harmful outputs, but it does not provide a real-time filtering mechanism for every generated response.

Common exam trap

Common exam trap: answer the scenario, not the keyword

Many certification questions include familiar terms but test a specific constraint. Read the exact wording before choosing an answer that is generally true but wrong for this case.

Technical deep dive

How to think about this question

This question should be treated as a scenario, not a definition check. Identify the problem, the constraint and the best action. Then compare each option against those facts.

Key Concepts to Remember

  • Read the scenario before looking for a memorised answer.
  • Find the constraint that changes the correct option.
  • Eliminate answers that are true in general but not in this case.
  • Use explanations to understand the rule behind the answer.

Exam Day Tips

  • Underline the problem statement mentally.
  • Watch for words such as best, first, most likely and least administrative effort.
  • Review why wrong options are wrong, not only why the correct option is correct.

Related practice questions

Related AI-900 practice-question pages

Use these pages to review the topic behind this question. This is how one missed question becomes focused revision.

More questions from this exam

Keep practising from the same exam bank, or move into a focused topic page if this question exposed a weak area.

Question 1

A developer wants to build a virtual assistant that can understand user intents such as 'Book a flight' or 'Check weather' and extract relevant entities like destination and date. The developer has a small set of labeled example utterances. Which Azure AI Language feature should the developer use?

Question 2

A developer is building a customer support chatbot using Azure OpenAI. The chatbot should never reveal its system instructions or internal configuration. The developer wants to add a rule at the beginning of the conversation to prevent prompt injection attacks. Which technique should they use?

Question 3

A developer is using Azure OpenAI Service to generate product descriptions from technical specifications. The generated descriptions sometimes include plausible-sounding but incorrect details (hallucinations). The developer wants to ensure the model's responses are strictly based on the provided product data and do not add any external or invented information. Which approach should the developer use?

Question 4

A developer is using Azure OpenAI with GPT-4 to build a chatbot that answers legal questions based on a company's internal policy documents. The developer wants the model's responses to be maximally deterministic and factual, avoiding any creative or speculative language. Which parameter should the developer set to the lowest possible value in the API call?

Question 5

A developer is using Azure OpenAI to generate creative product descriptions. The outputs are often repetitive and lack variety. The developer wants to increase the diversity of the generated text while still keeping it coherent. Which parameter should the developer increase?

Question 6

A developer is using Azure OpenAI Service to generate product descriptions. They want the output to be highly focused and deterministic, with less randomness. Which parameter should they decrease?

FAQ

Questions learners often ask

What does this AI-900 question test?

It tests whether you can distinguish post-processing content filtering from other generative AI safety measures such as pre-training data cleaning, prompt engineering and fine-tuning, based on the constraint that output must be checked before publication.

What is the correct answer to this question?

The correct answer is: Post-processing content filtering — Post-processing content filtering applies safety checks after the model generates output. It scans the text for prohibited content and either blocks, edits, or flags it before delivery. Pre-training data cleaning would improve the model during training but not filter live outputs. Prompt engineering with safety instructions encourages safer outputs but can be bypassed. Model fine-tuning on safe examples reduces harmful tendencies but does not guarantee perfect filtering of all generated content.

What should I do if I get this AI-900 question wrong?

Try more questions from the same exam bank and focus on understanding why the wrong options are tempting.
