A company uses Azure OpenAI Service to generate summaries of long technical documents. They notice that the model sometimes produces summaries that sound plausible but contain factual errors contradicting the source document. Which concept describes this type of error in large language models?
Answer choices
Why each option matters
Good practice is not just finding the correct option. The wrong answers often show the exact trap the exam wants you to fall into.
Distractor review
Overfitting
Overfitting occurs when a model learns training data too well, including noise, and performs poorly on new data. This is not specific to generative AI producing confident falsehoods.
Best answer
Hallucination
Hallucination is the term for a model generating factually incorrect but seemingly plausible content, a common risk in large language models like those used in Azure OpenAI.
Distractor review
Tokenization
Tokenization is the process of breaking text into tokens (words or subwords) for model input. It does not directly cause factual errors in output.
Distractor review
Bias
Bias refers to systematic unfairness or prejudice in model outputs. While bias can cause inaccuracies, it is not the term for generating false but plausible information.
Common exam trap
Common exam trap: confusing hallucination with related terms
Hallucination is specifically about plausible but false output. The exam likes to pair it with overfitting, bias, and tokenization, each of which names a different kind of model problem.
Technical deep dive
How to think about this question
Generative AI questions like this usually test whether you can match a described failure mode to its name. Read the scenario carefully: "plausible but factually incorrect" points to hallucination, "poor performance on new data" points to overfitting, and "systematic unfairness" points to bias.
Key Concepts to Remember
- Hallucination is the generation of plausible-sounding but factually incorrect or unsupported content.
- Grounding the model in source data, for example with retrieval-augmented generation (RAG), reduces hallucination.
- Lowering the temperature parameter makes output more deterministic but does not by itself eliminate hallucination.
- Overfitting, bias and tokenization describe different problems and are common distractors.
Exam Day Tips
- Look for the phrase "plausible but incorrect" — it signals hallucination.
- Do not confuse hallucination with overfitting, which is a training-data generalisation problem.
- Remember the mitigations: grounding responses in source data, RAG, and fine-tuning with domain-specific data.
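The ideas above — grounding the model in the source document and reducing randomness — can be sketched as a Chat Completions request body. This is a minimal sketch, not a live API call: the deployment name is a placeholder, while `temperature` and `messages` are standard Chat Completions parameters.

```python
import json

# Hypothetical deployment name -- replace with your own Azure OpenAI deployment.
DEPLOYMENT = "my-gpt-deployment"

def build_summary_request(source_document: str) -> dict:
    """Build a Chat Completions request body that grounds the model
    in the source document and minimises randomness."""
    return {
        "model": DEPLOYMENT,
        # temperature 0 makes output as deterministic as possible,
        # though it does not by itself prevent hallucination.
        "temperature": 0,
        "messages": [
            {
                "role": "system",
                "content": (
                    "Summarise only the document provided by the user. "
                    "If a fact is not stated in the document, do not state it."
                ),
            },
            {"role": "user", "content": source_document},
        ],
    }

request = build_summary_request("The device supports 802.11ax and weighs 240 g.")
print(json.dumps(request, indent=2))
```

The system message constrains the model to the supplied text; combined with a low temperature this is the standard first line of defence against hallucinated details in summaries.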
Related practice questions
Related AI-900 practice-question pages
Use these pages to review the topic behind this question. This is how one missed question becomes focused revision.
More questions from this exam
Keep practising from the same exam bank, or move into a focused topic page if this question exposed a weak area.
Question 1
A developer wants to build a virtual assistant that can understand user intents such as 'Book a flight' or 'Check weather' and extract relevant entities like destination and date. The developer has a small set of labeled example utterances. Which Azure AI Language feature should the developer use?
Question 2
A developer is building a customer support chatbot using Azure OpenAI. The chatbot should never reveal its system instructions or internal configuration. The developer wants to add a rule at the beginning of the conversation to prevent prompt injection attacks. Which technique should they use?
Question 3
A developer is using Azure OpenAI Service to generate product descriptions from technical specifications. The generated descriptions sometimes include plausible-sounding but incorrect details (hallucinations). The developer wants to ensure the model's responses are strictly based on the provided product data and do not add any external or invented information. Which approach should the developer use?
Question 4
A developer is using Azure OpenAI with GPT-4 to build a chatbot that answers legal questions based on a company's internal policy documents. The developer wants the model's responses to be maximally deterministic and factual, avoiding any creative or speculative language. Which parameter should the developer set to the lowest possible value in the API call?
Question 5
A developer is using Azure OpenAI to generate creative product descriptions. The outputs are often repetitive and lack variety. The developer wants to increase the diversity of the generated text while still keeping it coherent. Which parameter should the developer increase?
Question 6
A developer is using Azure OpenAI Service to generate product descriptions. They want the output to be highly focused and deterministic, with less randomness. Which parameter should they decrease?
FAQ
Questions learners often ask
What does this AI-900 question test?
This question tests whether you can recognise hallucination — a large language model producing plausible but factually incorrect content — and distinguish it from overfitting, tokenization and bias.
What is the correct answer to this question?
The correct answer is: Hallucination — Hallucination in large language models refers to the generation of content that is nonsensical or unfaithful to the provided source, often appearing plausible but factually incorrect. This is a known challenge when using generative AI for tasks requiring high accuracy, such as summarization. Understanding hallucination helps in applying mitigations like retrieval-augmented generation (RAG) or fine-tuning with domain-specific data.
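The RAG mitigation mentioned above can be sketched without any Azure services: retrieve the most relevant passage from a store of source documents, then ground the prompt in it. Everything here is illustrative — the passages are made up, and the retrieval is naive keyword overlap, where a real system would use embedding-based vector search (for example via Azure AI Search).

```python
import re

# Toy document store; a real RAG system would index source documents
# in a vector database and retrieve by embedding similarity.
PASSAGES = [
    "The X100 router supports 802.11ax and has four LAN ports.",
    "The X100 weight is 240 g and it ships with a 12 V power adapter.",
    "Firmware updates for the X100 are released quarterly.",
]

def words(text: str) -> set:
    """Lowercase a string and split it into a set of word tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question: str, passages: list) -> str:
    """Return the passage sharing the most word tokens with the question."""
    q_words = words(question)
    return max(passages, key=lambda p: len(q_words & words(p)))

def build_grounded_prompt(question: str) -> str:
    """Ground the prompt in retrieved text so the model is instructed
    not to invent unsupported details."""
    context = retrieve(question, PASSAGES)
    return (
        f"Answer using ONLY this context:\n{context}\n\n"
        f"Question: {question}\n"
        "If the context does not contain the answer, say so."
    )

prompt = build_grounded_prompt("What is the weight of the X100?")
print(prompt)
```

Because the prompt carries the retrieved source text and an explicit refusal instruction, the model has both the facts it needs and a sanctioned way to decline, which is exactly how RAG reduces hallucination in summarisation and Q&A workloads.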
What should I do if I get this AI-900 question wrong?
Review why the correct answer fits and why each distractor is tempting, then try more questions from the same exam bank.
Discussion
Sign in to join the discussion.