Medium · Multi select · Objective-mapped

A public API currently uses API Gateway REST APIs and Lambda. Traffic is low most of the day, but marketing runs a predictable traffic spike every weekday at 09:00 UTC. Users complain about cold starts during the first few minutes of the spike, and the team wants to avoid paying for provisioned concurrency all day. Which two changes should they make? Select two.


Answer choices

Why each option matters

Good practice means more than finding the correct option: the wrong answers often reveal the exact trap the exam wants you to fall into.

A

Best answer

Switch from REST APIs to HTTP APIs if the feature set is sufficient.

HTTP APIs are generally cheaper and lower latency than REST APIs for simple proxy-style use cases, so switching cuts the recurring API Gateway cost without a redesign. The caveat is feature parity: REST-only capabilities such as usage plans, API keys, and request validation must not be required.
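As a rough illustration of the price gap, using published us-east-1 list rates of about $3.50 per million REST API requests versus $1.00 per million HTTP API requests (verify against the current AWS pricing page; the traffic volume below is hypothetical):

```python
# Illustrative monthly API Gateway cost comparison.
# Per-million prices are assumed us-east-1 list rates; check current pricing.
REST_PRICE_PER_MILLION = 3.50   # USD per million REST API requests
HTTP_PRICE_PER_MILLION = 1.00   # USD per million HTTP API requests

def monthly_cost(requests: int, price_per_million: float) -> float:
    """Cost in USD for a given monthly request volume."""
    return requests / 1_000_000 * price_per_million

requests_per_month = 50_000_000  # hypothetical traffic volume
rest_cost = monthly_cost(requests_per_month, REST_PRICE_PER_MILLION)
http_cost = monthly_cost(requests_per_month, HTTP_PRICE_PER_MILLION)
print(f"REST: ${rest_cost:.2f}, HTTP: ${http_cost:.2f}, saved: ${rest_cost - http_cost:.2f}")
# → REST: $175.00, HTTP: $50.00, saved: $125.00
```

The absolute numbers matter less than the shape: the saving recurs every month with no change to the Lambda code behind the API.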

B

Best answer

Schedule Lambda provisioned concurrency shortly before the spike and scale it back afterward.

Scheduled provisioned concurrency targets the predictable burst window instead of paying for warm capacity all day. It balances latency improvement with controlled spend.
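In practice this is done with Application Auto Scaling scheduled actions on the function's alias. A minimal sketch, assuming a hypothetical function alias `my-api-handler:live` and illustrative capacity numbers (the boto3 calls are shown but commented out, since they require AWS credentials and a registered scalable target):

```python
# Sketch: schedule provisioned concurrency around a 09:00 UTC weekday spike
# via Application Auto Scaling scheduled actions. Function name, alias, and
# capacity values are hypothetical.
def scheduled_action(name: str, schedule: str, min_cap: int, max_cap: int) -> dict:
    """Build parameters for application-autoscaling put_scheduled_action."""
    return {
        "ServiceNamespace": "lambda",
        "ResourceId": "function:my-api-handler:live",  # alias or version, not $LATEST
        "ScalableDimension": "lambda:function:ProvisionedConcurrency",
        "ScheduledActionName": name,
        "Schedule": schedule,  # cron expression, evaluated in UTC
        "ScalableTargetAction": {"MinCapacity": min_cap, "MaxCapacity": max_cap},
    }

# Warm up 10 minutes before the weekday spike; scale back to zero afterward.
scale_up = scheduled_action("pre-spike-warmup", "cron(50 8 ? * MON-FRI *)", 100, 100)
scale_down = scheduled_action("post-spike-cooldown", "cron(0 10 ? * MON-FRI *)", 0, 0)

# import boto3
# client = boto3.client("application-autoscaling")
# client.put_scheduled_action(**scale_up)
# client.put_scheduled_action(**scale_down)
```

The scalable target must be registered first (via `register_scalable_target` with the same namespace, resource ID, and dimension) before scheduled actions take effect. Warming up a few minutes early gives the environments time to initialize before the 09:00 burst lands.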

C

Distractor review

Keep provisioned concurrency at the maximum level 24/7.

Always-on provisioned concurrency removes cold starts, but it also creates continuous cost even when traffic is low. That directly conflicts with the goal of avoiding unnecessary spend.
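Since provisioned concurrency is billed for as long as it stays configured, the waste scales directly with enabled hours. A back-of-envelope comparison, assuming a hypothetical 90-minute weekday window versus always-on:

```python
# Relative cost of always-on vs. windowed provisioned concurrency.
# Billing is per GB-second while configured, so cost scales with enabled time.
HOURS_ALWAYS_ON = 24 * 7     # configured the whole week
HOURS_WINDOWED = 1.5 * 5     # 90-minute weekday window (assumption)

ratio = HOURS_ALWAYS_ON / HOURS_WINDOWED
print(f"Always-on keeps capacity warm {ratio:.0f}x longer than a scheduled window")
# → Always-on keeps capacity warm 22x longer than a scheduled window
```

At the same concurrency level and memory size, that time ratio is also the cost ratio, which is exactly the spend the team said it wants to avoid.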

D

Distractor review

Move the API to a single t3.nano EC2 instance.

A single tiny instance creates an availability and scaling bottleneck, and it shifts operational burden back to the team. It is not a cost-optimized or resilient replacement for serverless.

E

Distractor review

Add an S3 gateway endpoint to reduce cold starts.

S3 gateway endpoints help with private S3 access and NAT avoidance, not Lambda cold starts. This option addresses a different problem entirely.

Common exam trap

Common exam trap: cold-start fixes are not all cost-equal

Provisioned concurrency removes cold starts, but the exam cares how long you pay for it. Watch for options that fix latency while ignoring the cost constraint, or that cut cost while reintroducing the latency complaint.

Technical deep dive

How to think about this question

Serverless cost questions usually test whether you can match a pricing model to a traffic pattern. Identify the predictable part of the load (the 09:00 UTC weekday spike), apply targeted warm capacity there, and pick the cheaper service tier (HTTP APIs) for the steady state. Read the cost constraint as carefully as the latency complaint.

Key Concepts to Remember

  • Cold starts happen when Lambda must initialize a new execution environment for a request.
  • Provisioned concurrency keeps environments initialized and is billed for as long as it is configured.
  • Application Auto Scaling scheduled actions can raise and lower provisioned concurrency on a cron schedule.
  • HTTP APIs are a cheaper, lower-latency API Gateway type than REST APIs, with a smaller feature set.

Exam Day Tips

  • Check whether the traffic pattern is predictable; scheduled capacity only helps predictable load.
  • Track every stated constraint: here both cold-start latency and all-day cost matter.
  • Eliminate options that solve an unrelated problem (a VPC endpoint does not affect cold starts).

Related practice questions

Related SAA-C03 practice-question pages

Use these pages to review the topic behind this question. This is how one missed question becomes focused revision.

More questions from this exam

Keep practising from the same exam bank, or move into a focused topic page if this question exposed a weak area.

FAQ

Questions learners often ask

What does this SAA-C03 question test?

It tests cost-aware cold-start mitigation for a serverless API: choosing HTTP APIs over REST APIs when the feature set allows, and scheduling Lambda provisioned concurrency around a predictable traffic spike instead of paying for it all day.

What is the correct answer to this question?

The correct answers are: Switch from REST APIs to HTTP APIs if the feature set is sufficient, and Schedule Lambda provisioned concurrency shortly before the spike and scale it back afterward. The team needs lower steady-state cost and better first-request latency during a predictable spike. HTTP APIs are cheaper than REST APIs when their feature set is enough for the application, and scheduling provisioned concurrency only around the known busy window removes cold starts without paying for idle warm capacity all day. Keeping provisioned concurrency at maximum 24/7 wastes money, moving the service to a single tiny EC2 instance introduces operational and scaling risk, and an S3 gateway endpoint does not address Lambda startup latency at all.

What should I do if I get this SAA-C03 question wrong?

Review why each distractor is tempting, then try more questions from the same exam bank to confirm the concept has stuck.
