Medium · Multi-select · Objective-mapped

A serverless checkout API has predictable traffic spikes every weekday at 09:00 UTC and low traffic the rest of the day. The team wants to reduce cost while keeping response times fast during the recurring spike. Which two actions should they take? Select two.


Answer choices

Why each option matters

Good practice means more than finding the correct option: the wrong answers often show the exact trap the exam wants you to fall into.

A

Best answer

Use provisioned concurrency for the Lambda function during the expected spike window.

Provisioned concurrency keeps Lambda execution environments initialized before traffic arrives, which reduces cold starts during the predictable busy period. Because the spike is scheduled, the team can pay for the performance benefit only when it is actually needed.
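As a rough sketch of what this looks like in practice (the function name `checkout-api`, the alias `live`, and the concurrency of 100 are assumptions, not part of the scenario), provisioned concurrency attaches to a function version or alias via Lambda's `PutProvisionedConcurrencyConfig` API:

```python
def enable_spike_warmup(function_name: str, alias: str, concurrency: int) -> dict:
    """Build the request for Lambda's PutProvisionedConcurrencyConfig API.

    Returning the parameters as a dict keeps the sketch reviewable without
    an AWS session; in practice you would pass them straight to boto3, e.g.
    boto3.client("lambda").put_provisioned_concurrency_config(**params).
    """
    params = {
        "FunctionName": function_name,
        # Provisioned concurrency must target a published version or alias,
        # not $LATEST.
        "Qualifier": alias,
        "ProvisionedConcurrentExecutions": concurrency,
    }
    return params
```

Requests above the provisioned level still run on-demand, so this sets a floor of pre-warmed environments rather than a hard cap.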

B

Best answer

Use Application Auto Scaling or scheduled actions to reduce provisioned concurrency after the spike ends.

Scaling provisioned concurrency back down after the spike avoids paying for idle pre-initialized environments during the low-traffic period. Matching concurrency spend to the business schedule is the cost-optimized part of the design.
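The scheduled-action approach can be sketched as a pair of Application Auto Scaling actions on the `lambda:function:ProvisionedConcurrency` dimension. The resource ID, cron times, and capacities below are illustrative assumptions; each dict maps onto the parameters of the `application-autoscaling` `put_scheduled_action` call:

```python
def scheduled_actions(resource_id: str, peak: int) -> list[dict]:
    """Build scale-up and scale-down scheduled actions for a weekday spike.

    resource_id is the Application Auto Scaling resource ID for the
    function alias, e.g. "function:checkout-api:live" (an assumption here).
    """
    common = {
        "ServiceNamespace": "lambda",
        "ResourceId": resource_id,
        "ScalableDimension": "lambda:function:ProvisionedConcurrency",
    }
    scale_up = {
        **common,
        "ScheduledActionName": "warm-before-spike",
        # Warm environments 15 minutes before the 09:00 UTC spike, weekdays only.
        "Schedule": "cron(45 8 ? * MON-FRI *)",
        "ScalableTargetAction": {"MinCapacity": peak, "MaxCapacity": peak},
    }
    scale_down = {
        **common,
        "ScheduledActionName": "cool-after-spike",
        # Release pre-warmed capacity once the spike has passed.
        "Schedule": "cron(0 11 ? * MON-FRI *)",
        "ScalableTargetAction": {"MinCapacity": 0, "MaxCapacity": 0},
    }
    return [scale_up, scale_down]
```

The scale-down action is the half candidates often forget; without it, the design degenerates into the always-on distractor below.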

C

Distractor review

Replace the API with a single always-on EC2 instance.

A single EC2 instance removes the serverless elasticity and usually increases operational overhead because the team must patch and maintain the host. It also introduces a weaker availability model than the managed serverless design.

D

Distractor review

Keep provisioned concurrency permanently high all day and all week.

This wastes money because the pre-warmed Lambda environments would sit idle during the low-traffic hours. The scenario specifically describes a predictable spike, so scheduled provisioning is the better fit.
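A back-of-the-envelope comparison makes the waste concrete. The rate, memory size, and window lengths below are illustrative assumptions (not real AWS prices); the point is the ratio between paying all week and paying only for a two-hour weekday window:

```python
PRICE_PER_GB_SECOND = 0.000004  # assumed provisioned-concurrency rate, not a real AWS price
MEMORY_GB = 1.0                 # assumed function memory
CONCURRENCY = 100               # assumed provisioned level

def weekly_cost(hours_provisioned: float) -> float:
    """Cost of holding CONCURRENCY pre-warmed environments for the given hours."""
    seconds = hours_provisioned * 3600
    return seconds * MEMORY_GB * CONCURRENCY * PRICE_PER_GB_SECOND

always_on = weekly_cost(24 * 7)  # provisioned around the clock
scheduled = weekly_cost(2 * 5)   # 2-hour window, weekdays only
```

Under these assumptions the always-on configuration costs 16.8x the scheduled one for identical spike-time performance, which is exactly why this option is a distractor.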

E

Distractor review

Disable API Gateway and use direct public internet access to Lambda.

Direct public access is not the normal secure pattern for exposing Lambda and does not address the cost-versus-latency tradeoff. API Gateway provides a managed front door for routing, throttling, and security controls.

Common exam trap

Common exam trap: "always warm" is not the same as "warm when needed"

Provisioned concurrency only saves money when it tracks demand. Options that keep capacity high around the clock give away the cost goal, and options that strip out the managed front door give away security and control. The correct pair addresses both latency and cost.

Technical deep dive

How to think about this question

Serverless cost questions usually test whether you can match capacity to a known schedule. Watch for the words "predictable" and "recurring": they signal scheduled provisioned concurrency plus a scheduled scale-down, not an always-on redesign or a move away from managed services.

Key Concepts to Remember

  • Provisioned concurrency keeps execution environments initialized, eliminating cold starts up to the provisioned level.
  • Provisioned concurrency is billed for the time it is configured, whether or not requests arrive.
  • Application Auto Scaling supports scheduled actions on the lambda:function:ProvisionedConcurrency dimension.
  • Requests above the provisioned level still run on-demand, with possible cold starts.

Exam Day Tips

  • Decide first whether the traffic pattern is predictable (scheduled scaling) or unpredictable (on-demand or target tracking).
  • Check that the "fast" option also addresses cost and the "cheap" option also addresses latency; correct multi-select pairs usually cover both.
  • Eliminate options that abandon managed services, such as a single EC2 host or bypassing API Gateway, unless the scenario explicitly asks for that.

Related practice questions

Related SAA-C03 practice-question pages

Use these pages to review the topic behind this question. This is how one missed question becomes focused revision.

More questions from this exam

Keep practising from the same exam bank, or move into a focused topic page if this question exposed a weak area.

FAQ

Questions learners often ask

What does this SAA-C03 question test?

It tests cost optimization for a serverless workload with a predictable traffic schedule: using provisioned concurrency to avoid cold starts during the spike, and scheduled scaling to avoid paying for idle pre-warmed capacity the rest of the day.

What are the correct answers to this question?

The correct answers are: use provisioned concurrency for the Lambda function during the expected spike window, and use Application Auto Scaling or scheduled actions to reduce provisioned concurrency after the spike ends. Because the traffic spike is predictable, the team should provision Lambda concurrency only during the busy window and then scale it back down afterward. That keeps response times fast when demand rises while avoiding unnecessary spend during the rest of the day. This is a standard cost-versus-latency optimization for scheduled serverless workloads. Replacing the serverless API with a single EC2 instance usually increases operational work and can reduce resilience, keeping provisioned concurrency high all day wastes money, and direct public access to Lambda is not the usual secure deployment pattern and does not solve the cold-start or cost problem. The best design is schedule-based concurrency management.

What should I do if I get this SAA-C03 question wrong?

Review the option explanations above, then try more questions from the same exam bank and focus on understanding why the wrong options are tempting.
