Question 1 (medium · multiple choice · objective-mapped)

A latency-sensitive API is implemented with AWS Lambda. The team enabled provisioned concurrency to avoid cold starts, setting provisioned concurrency to 50 because marketing campaigns occasionally cause spikes. However, during most weekdays the API receives little traffic (near zero), and the team is seeing high monthly Lambda costs from idle provisioned capacity. What is the best cost-optimized strategy that still meets the requirement of fast initial responses during traffic spikes?


Answer choices

Why each option matters

Good practice is not just finding the correct option. The wrong answers often show the exact trap the exam wants you to fall into.

A

Distractor review

Increase provisioned concurrency to 100 so that cold starts never occur, regardless of traffic patterns.

Doubling provisioned concurrency doubles the idle allocation cost and does nothing to address spend during low-traffic periods.

B

Best answer

Use Application Auto Scaling scheduled actions to increase provisioned concurrency on the Lambda alias before campaign windows and reduce it to a minimal baseline afterward.

Provisioned concurrency is billed while allocated, even when idle. Scheduling higher provisioned concurrency only during known spike windows reduces idle cost while preserving fast startup behavior during campaigns.
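To make the mechanism behind the best answer concrete, here is a sketch of the Application Auto Scaling requests involved. The function name, alias, capacities, and cron schedules are hypothetical placeholders; the dicts mirror the boto3 `application-autoscaling` request shapes, but no AWS call is made here.

```python
# Sketch of scheduled scaling for Lambda provisioned concurrency.
# Function name ("campaign-api"), alias ("prod"), capacities, and
# schedules are hypothetical; adjust to your own workload.

# The alias or version (not $LATEST) is the scalable resource.
resource_id = "function:campaign-api:prod"

register_target = {
    "ServiceNamespace": "lambda",
    "ResourceId": resource_id,
    "ScalableDimension": "lambda:function:ProvisionedConcurrency",
    "MinCapacity": 2,    # cheap always-on baseline
    "MaxCapacity": 50,   # campaign-window ceiling
}

# Raise capacity shortly before the campaign window opens...
scale_up = {
    "ServiceNamespace": "lambda",
    "ResourceId": resource_id,
    "ScalableDimension": "lambda:function:ProvisionedConcurrency",
    "ScheduledActionName": "campaign-scale-up",
    "Schedule": "cron(45 7 ? * MON *)",  # 6-field Application Auto Scaling cron
    "ScalableTargetAction": {"MinCapacity": 50, "MaxCapacity": 50},
}

# ...and drop back to the baseline once it closes.
scale_down = {
    **scale_up,
    "ScheduledActionName": "campaign-scale-down",
    "Schedule": "cron(0 20 ? * MON *)",
    "ScalableTargetAction": {"MinCapacity": 2, "MaxCapacity": 2},
}

# With boto3 these would be passed to:
#   client = boto3.client("application-autoscaling")
#   client.register_scalable_target(**register_target)
#   client.put_scheduled_action(**scale_up)
#   client.put_scheduled_action(**scale_down)
print(scale_up["ScheduledActionName"], "->", scale_down["ScheduledActionName"])
```

Note that scheduled actions pin `MinCapacity` and `MaxCapacity` to the same value here; you could instead leave a range and add target-tracking scaling on top for less predictable spikes.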

C

Distractor review

Turn provisioned concurrency off permanently and rely on retries at the client side to mask cold starts.

Disabling provisioned concurrency reintroduces cold starts, and client-side retries add latency rather than hide it, so the fast-first-response requirement would no longer be met during spikes.

D

Distractor review

Replace Lambda with a single always-on EC2 instance sized for peak demand to eliminate cold starts.

An always-on instance sized for peak demand incurs steady cost even when idle, which is typically worse for a mostly-idle workload and forfeits the elasticity benefit of serverless.

Common exam trap

Common exam trap: provisioned capacity is billed while allocated, not while used

Cost questions tempt you into pricing provisioned concurrency like on-demand invocations. Provisioned environments accrue charges for every second they are allocated, even at zero traffic, so a static high setting turns a few predictable spikes into a month of idle spend.

Technical deep dive

How to think about this question

Cost-optimization questions about provisioned concurrency test whether you can separate two billing modes: on-demand invocations are billed per request and per GB-second of execution, while provisioned concurrency is billed per GB-second of allocation regardless of traffic. When spikes are predictable (campaign windows, business hours), Application Auto Scaling can adjust the provisioned level on a schedule against a Lambda alias or version, so you pay for warm capacity only when you expect to need it.
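A rough cost model makes the tradeoff visible. The per-GB-second rate, memory size, and campaign schedule below are illustrative assumptions, not current AWS pricing — check the Lambda pricing page for real rates.

```python
# Rough cost model for provisioned concurrency (illustrative numbers only;
# the rate is an assumed figure, not current AWS pricing).
RATE_PER_GB_SECOND = 0.0000041667  # assumed provisioned-concurrency rate
MEMORY_GB = 1.0
SECONDS_PER_DAY = 86_400
DAYS_PER_MONTH = 30

def provisioned_cost(units: int, seconds: float) -> float:
    """Cost of keeping `units` provisioned environments allocated."""
    return units * MEMORY_GB * seconds * RATE_PER_GB_SECOND

# Static setting: 50 units allocated all month.
always_on = provisioned_cost(50, DAYS_PER_MONTH * SECONDS_PER_DAY)

# Scheduled setting: baseline of 2 units all month, plus 48 extra units
# during four assumed campaign days.
baseline = provisioned_cost(2, DAYS_PER_MONTH * SECONDS_PER_DAY)
campaign = provisioned_cost(48, 4 * SECONDS_PER_DAY)
scheduled = baseline + campaign

print(f"always-on 50: ${always_on:,.2f}/month")
print(f"scheduled:    ${scheduled:,.2f}/month")
```

Even with generous campaign windows, the scheduled variant costs a fraction of the static setting, because the bill is dominated by hours of allocation rather than by invocations.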

Key Concepts to Remember

  • Provisioned concurrency keeps execution environments initialized, eliminating cold starts for the requests it covers.
  • It is billed while allocated, even when there is little or no traffic.
  • Application Auto Scaling supports scheduled and target-tracking scaling of provisioned concurrency on a Lambda alias or version (not $LATEST).
  • Reserved concurrency is a different control: it caps a function's concurrency and does not pre-warm environments.

Exam Day Tips

  • When a question pairs "idle cost" with "predictable spikes", look for scheduled scaling before anything else.
  • Check whether the requirement is to eliminate cold starts entirely or only during known traffic windows.
  • Do not confuse provisioned concurrency (pre-warmed, billed while allocated) with reserved concurrency (a limit, no extra charge).

Related practice questions

Related SAA-C03 practice-question pages

Use these pages to review the topic behind this question. This is how one missed question becomes focused revision.

More questions from this exam

Keep practising from the same exam bank, or move into a focused topic page if this question exposed a weak area.

FAQ

Questions learners often ask

What does this SAA-C03 question test?

It tests cost optimization for AWS Lambda provisioned concurrency: recognising that provisioned capacity is billed while allocated, and that predictable spikes call for scheduled scaling rather than a static high setting.

What is the correct answer to this question?

The correct answer is: Use Application Auto Scaling scheduled actions to increase provisioned concurrency on the Lambda alias before campaign windows and reduce it to a minimal baseline afterward.

Provisioned concurrency is billed while it is allocated, even when there is little or no traffic, so a cost-optimized strategy scales it in line with predictable demand patterns. If campaigns or other spike windows are known, scheduled scaling on the Lambda alias raises provisioned concurrency before the campaign and lowers it afterward. This preserves fast initial response times during spikes while avoiding idle provisioned capacity for the rest of the month.

As for the other options: increasing provisioned concurrency to 100 raises idle cost further and ignores the predictability of demand; turning provisioned concurrency off permanently abandons the cold-start mitigation requirement; and migrating to an always-on EC2 instance adds steady cost, which is typically worse for mostly-idle workloads and gives up serverless elasticity.

What should I do if I get this SAA-C03 question wrong?

Review the explanation until you can say why the correct option wins, then try more questions from the same exam bank and focus on understanding why the wrong options are tempting.
