Question 1 · medium · multiple choice · objective-mapped

An order processing workflow uses Amazon SQS as the decoupling layer between a producer and a consumer Lambda function. The consumer intermittently fails due to a downstream dependency. The team has observed that certain “poison” messages keep being retried repeatedly and prevent other messages from being processed efficiently. Which SQS configuration most directly addresses this issue?


Answer choices

Why each option matters

Good practice means more than finding the correct option: the wrong answers often reveal the exact trap the exam wants you to fall into.

A

Distractor review

Set the SQS queue’s retention period to 10 years and rely on application retries to eventually succeed.

Retention affects how long messages remain available, but it doesn’t isolate repeatedly failing messages.

B

Distractor review

Increase visibility timeout to a very large value and avoid dead-letter queues to keep ordering stable.

Long visibility time can delay failures, but it does not provide dead-letter isolation for poison messages.

C

Best answer

Configure a redrive policy with a dead-letter queue (DLQ) and set an appropriate visibility timeout greater than the maximum processing time.

A DLQ isolates poison messages after a receive count threshold, and correct visibility timeout prevents premature retries.

D

Distractor review

Switch the queue to FIFO and remove retries in the Lambda event source mapping entirely.

FIFO and disabling retries do not reliably solve poison message isolation without DLQ-based redriving.
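The winning configuration in option C comes down to two queue attributes. Here is a minimal sketch using boto3, assuming a hypothetical queue and a maxReceiveCount of 5 (the exact threshold is a tuning choice, not given in the question):

```python
import json


def redrive_attributes(dlq_arn: str, max_receive_count: int,
                       visibility_timeout_s: int) -> dict:
    """Build the SQS attribute map for a redrive policy plus visibility timeout.

    SQS expects RedrivePolicy as a JSON string containing deadLetterTargetArn
    and maxReceiveCount; VisibilityTimeout is a number of seconds as a string.
    """
    return {
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": dlq_arn,
            # Receives allowed before the message is moved to the DLQ.
            "maxReceiveCount": max_receive_count,
        }),
        # Must exceed the consumer's maximum processing time.
        "VisibilityTimeout": str(visibility_timeout_s),
    }


# Applying it (hypothetical queue URL/ARN; requires AWS credentials):
# import boto3
# sqs = boto3.client("sqs")
# sqs.set_queue_attributes(
#     QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/orders",
#     Attributes=redrive_attributes(
#         "arn:aws:sqs:us-east-1:123456789012:orders-dlq", 5, 300),
# )
```

The same two attributes can equally be set in CloudFormation or Terraform; the point is that redrive policy and visibility timeout are queue-level settings, not application code.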

Common exam trap

Common exam trap: longer timeouts are not isolation

Extending retention or visibility timeout only changes when a failing message reappears. Only a redrive policy with a dead-letter queue actually quarantines a poison message after repeated failures.

Technical deep dive

How to think about this question

SQS retry questions usually test the message lifecycle: visibility timeout, receive count, redrive policies and retention. Read carefully whether the scenario needs failures delayed, retried or isolated; only a dead-letter queue isolates them.

Key Concepts to Remember

  • A redrive policy moves a message to a dead-letter queue once its receive count exceeds maxReceiveCount.
  • Visibility timeout hides an in-flight message from other consumers; set it longer than the maximum processing time.
  • The retention period controls how long messages are stored, not how failures are handled.
  • With a Lambda event source mapping, a failed invocation returns the message to the queue for retry until the redrive threshold is reached.

Exam Day Tips

  • Decide first whether the scenario needs failure isolation (a DLQ) or just retry tuning.
  • Check that the visibility timeout exceeds the consumer's maximum processing time.
  • Do not confuse message retention with dead-letter redriving.
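With a Lambda consumer, a failed invocation makes the message visible again after the visibility timeout, and the redrive policy eventually moves a repeatedly failing message to the DLQ. One way to sketch the consumer side is the partial-batch-failure handler pattern (process_order and its failure condition are invented for illustration; the event source mapping would need ReportBatchItemFailures enabled):

```python
def process_order(body: str) -> None:
    # Hypothetical downstream dependency; replace with the real call.
    if body == "bad":
        raise ValueError("poison message")


def handler(event, context):
    """Lambda SQS handler: report only the failed messages.

    Successes in the batch are deleted by Lambda; failures stay in the
    queue and, after maxReceiveCount failed receives, the redrive
    policy moves them to the DLQ.
    """
    failures = []
    for record in event["Records"]:
        try:
            process_order(record["body"])
        except Exception:
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}
```

This pattern keeps one bad record from forcing the whole batch to be retried, which is exactly the "poison messages block everything else" symptom in the question.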

Related practice questions

Related SAA-C03 practice-question pages

Use these pages to review the topic behind this question. This is how one missed question becomes focused revision.

More questions from this exam

Keep practising from the same exam bank, or move into a focused topic page if this question exposed a weak area.

FAQ

Questions learners often ask

What does this SAA-C03 question test?

It tests whether you know that an SQS redrive policy with a dead-letter queue isolates poison messages after a configured receive count, and that the visibility timeout must exceed the consumer's maximum processing time to avoid premature retries.

What is the correct answer to this question?

The correct answer is: Configure a redrive policy with a dead-letter queue (DLQ) and set an appropriate visibility timeout greater than the maximum processing time.

Poison messages cause repeated processing attempts because they keep failing before any durable recovery path exists. An SQS redrive policy with a dead-letter queue solves this by moving a message to the DLQ once it exceeds a defined receive count, so one bad payload cannot consume processing capacity indefinitely. Setting the visibility timeout appropriately (longer than the maximum successful processing time) additionally ensures messages aren't prematurely returned to the queue and retried in a tight loop.

Why the others are wrong: extending retention (A) doesn't isolate failures; it just keeps failing messages around longer. Option B may reduce retry frequency temporarily, but it still leaves poison messages circulating instead of quarantined where they can be addressed separately. Option D may change ordering and retry behavior, but without DLQ redriving it lacks a direct mechanism to isolate poison messages after repeated failures.
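The receive-count mechanics behind that answer can be illustrated without AWS at all. Here is a toy model (all names invented for illustration) of how a redrive policy quarantines a poison message instead of retrying it forever:

```python
from collections import deque


class ToyQueue:
    """Minimal model of SQS redrive behaviour.

    A message whose processing fails max_receive_count times is moved
    to the DLQ rather than being retried indefinitely.
    """

    def __init__(self, max_receive_count: int):
        self.max_receive_count = max_receive_count
        self.main = deque()
        self.dlq = []
        self.receives = {}

    def send(self, msg: str) -> None:
        self.main.append(msg)

    def poll(self, processor) -> None:
        """Receive one message; on failure, requeue it or redrive to the DLQ."""
        msg = self.main.popleft()
        self.receives[msg] = self.receives.get(msg, 0) + 1
        try:
            processor(msg)
        except Exception:
            if self.receives[msg] >= self.max_receive_count:
                self.dlq.append(msg)   # quarantined: stops blocking the queue
            else:
                self.main.append(msg)  # visible again for another attempt


def always_fail(msg: str) -> None:
    raise RuntimeError("downstream dependency failure")


q = ToyQueue(max_receive_count=3)
q.send("poison")
for _ in range(3):
    q.poll(always_fail)
# After three failed receives the message sits in q.dlq, not q.main.
```

The real service tracks ApproximateReceiveCount per message the same way; the DLQ is then a place to inspect and redrive messages once the downstream dependency recovers.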

What should I do if I get this SAA-C03 question wrong?

Review why the correct option works, then try more questions from the same exam bank and focus on understanding why the wrong options are tempting.
