
A website serves mostly cacheable images, CSS, and JavaScript from an ALB. Users in Europe and Asia report slower page loads, and the ALB receives far more requests than expected. The team also wants text assets compressed automatically. Which change is the best first step?


Answer choices

Why each option matters

Good practice is not just finding the correct option. The wrong answers often show the exact trap the exam wants you to fall into.

A

Distractor review

Increase the ALB size and add more target instances behind it.

Scaling the ALB and application servers may help capacity, but it does not reduce round-trip distance for global users or cache static content closer to them. The origin would still receive most requests, so latency and origin load remain higher than necessary. This treats the symptom rather than the architecture problem.

B

Distractor review

Use Route 53 latency-based routing to send users to the nearest ALB.

Latency-based routing can direct users to different regional endpoints, but it does not provide edge caching for static assets. Every request would still traverse to an origin, which means the ALB continues to handle far more traffic than needed. It also adds complexity that is unnecessary when a caching CDN can solve both latency and load reduction.
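To see why this option falls short, it helps to look at what latency-based routing actually configures. The sketch below (hypothetical record names and ALB DNS names, assuming the boto3 Route 53 `change_resource_record_sets` payload shape) builds two latency records: Route 53 answers each DNS query with the regionally closest record, but nothing here caches content, so every request still travels to an ALB origin.

```python
# Hypothetical sketch: what Route 53 latency-based routing configures.
# Two records share the same name; each is tagged with an AWS Region and a
# unique SetIdentifier. Route 53 returns the lowest-latency record for the
# querying resolver. Note: this is routing only - no content is cached.
def latency_record(region: str, alb_dns_name: str) -> dict:
    """Build one latency-routed record for a ChangeBatch (names are illustrative)."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com",
            "Type": "CNAME",
            "TTL": 60,
            "SetIdentifier": f"web-{region}",   # must be unique per record
            "Region": region,                   # enables latency-based routing
            "ResourceRecords": [{"Value": alb_dns_name}],
        },
    }

change_batch = {
    "Changes": [
        latency_record("eu-west-1", "alb-eu.eu-west-1.elb.amazonaws.com"),
        latency_record("ap-southeast-1", "alb-ap.ap-southeast-1.elb.amazonaws.com"),
    ]
}
# In a real account this would be applied with:
#   route53.change_resource_record_sets(HostedZoneId=..., ChangeBatch=change_batch)
```

Even with both records in place, users are merely sent to the nearer of two origins; the static assets themselves are never served from an edge cache.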

C

Best answer

Place Amazon CloudFront in front of the ALB and enable compression and caching.

CloudFront is the right choice because it caches static content at edge locations close to users, reducing latency and lowering the number of requests that reach the ALB. It also supports compression for text-based assets such as CSS, JavaScript, and HTML. This improves both performance and origin offload without changing the application logic.
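The change described above can be sketched as a CloudFront distribution config. This is a minimal, hypothetical example (the ALB DNS name is a placeholder) showing the two settings the question hinges on: a cache policy on the default behavior, and `Compress: True` for automatic compression of text assets. The cache policy ID shown is AWS's managed "CachingOptimized" policy.

```python
# Minimal sketch of a CloudFront DistributionConfig that fronts an existing ALB,
# caches responses at edge locations, and compresses text assets automatically.
import time

ALB_DNS_NAME = "my-app-alb-123456.eu-west-1.elb.amazonaws.com"  # placeholder ALB

# ID of the AWS-managed "CachingOptimized" cache policy.
CACHING_OPTIMIZED_POLICY_ID = "658327ea-f89d-4fab-a63d-7e88639e58f6"

def build_distribution_config(alb_dns_name: str) -> dict:
    """Build the DistributionConfig payload for cloudfront.create_distribution."""
    return {
        "CallerReference": str(time.time()),  # idempotency token
        "Comment": "CDN in front of the ALB origin",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "alb-origin",
                "DomainName": alb_dns_name,
                "CustomOriginConfig": {
                    "HTTPPort": 80,
                    "HTTPSPort": 443,
                    "OriginProtocolPolicy": "https-only",
                },
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "alb-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            "CachePolicyId": CACHING_OPTIMIZED_POLICY_ID,
            "Compress": True,  # automatic gzip/brotli for CSS, JS, HTML
        },
    }

config = build_distribution_config(ALB_DNS_NAME)
# In a real account you would then call:
#   boto3.client("cloudfront").create_distribution(DistributionConfig=config)
```

Note that the application itself is untouched: the ALB stays in place as the origin, and cache hits at the edge never reach it at all.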

D

Distractor review

Replace the ALB with an NLB to reduce latency for web requests.

An NLB is excellent for TCP and ultra-high throughput use cases, but it is not the best service for HTTP content optimization, path-based behavior, or caching static assets. The workload described is web content delivery, where edge caching and compression matter more than load-balancer protocol flexibility. NLB also does not reduce origin requests the way a CDN can.

Common exam trap

Common exam trap: routing and scaling are not caching

Latency-based routing and bigger load balancers both sound like latency fixes, but neither caches static content at the edge. When the workload is dominated by cacheable assets, the exam expects the answer that reduces requests to the origin, not one that merely adds capacity or routing intelligence.

Technical deep dive

How to think about this question

CDN questions usually test edge caching, compression, and origin offload. Read the workload description carefully: mostly static, cacheable content plus a global user base almost always points to CloudFront in front of the origin, because it improves latency and reduces origin load at the same time.

Key Concepts to Remember

  • CloudFront caches content at edge locations close to users, reducing round-trip latency.
  • Enabling compression shrinks text assets such as CSS, JavaScript, and HTML in transit.
  • A higher cache hit ratio means fewer requests reach the origin behind the distribution.
  • Route 53 latency-based routing directs traffic to an endpoint but caches nothing.

Exam Day Tips

  • Check first whether the content is static and cacheable.
  • For global users with high latency on cacheable assets, think CDN before scaling the origin.
  • Do not confuse routing optimizations (Route 53) with edge caching (CloudFront).

Related practice questions

Related SAA-C03 practice-question pages

Use these pages to review the topic behind this question. This is how one missed question becomes focused revision.

More questions from this exam

Keep practising from the same exam bank, or move into a focused topic page if this question exposed a weak area.

FAQ

Questions learners often ask

What does this SAA-C03 question test?

It tests whether you can recognise when Amazon CloudFront is the right first step: a globally accessed workload dominated by cacheable static assets, where edge caching and automatic compression reduce both user-facing latency and the request load on the ALB origin.

What is the correct answer to this question?

The correct answer is: Place Amazon CloudFront in front of the ALB and enable compression and caching.

CloudFront is the best first step because the workload is dominated by cacheable web assets. Placing CloudFront in front of the ALB lets the team serve content from edge locations that are geographically closer to users, which lowers latency and drastically reduces origin traffic. Enabling compression for text assets further improves page load times and bandwidth efficiency. This is the most effective architectural improvement for the stated goals.

Why the others are wrong: a bigger ALB and more instances add capacity but do not solve global latency or origin overload. Route 53 latency routing only directs traffic; it does not cache static files. An NLB is not the right fit for a web-delivery optimization problem because it lacks the caching and content-acceleration benefits needed here. CloudFront directly addresses both user experience and origin offload.

What should I do if I get this SAA-C03 question wrong?

Revisit the distractor reviews above, then try more questions from the same exam bank and focus on understanding why the wrong options are tempting.
