Amazon Web Services · Free Practice Questions · Last reviewed May 2026

CLF-C02 Exam Questions and Answers

24 exam-style practice questions organised by domain, each with the correct answer highlighted and a plain-English explanation of why it's right and why the others are wrong.

65 exam questions
90 min time limit
Pass at 700 / 1000
4 exam domains

Domain 1: Cloud Concepts


A company is migrating its on-premises applications to the AWS Cloud. The Chief Security Officer wants to confirm the division of security responsibilities. According to the AWS Shared Responsibility Model, which of the following tasks is the customer's responsibility?

A

Ensuring the physical security of AWS data centers

B

Patching the hypervisor layer that runs Amazon EC2 instances

C

Managing network access control lists (ACLs) for the customer's VPC

Network ACLs are stateless firewall rules that control inbound and outbound traffic at the subnet level within a VPC. Configuring and managing these rules is the customer's responsibility as part of managing security in the cloud.

D

Replacing defective hardware components in the AWS global infrastructure

Why: Under the AWS Shared Responsibility Model, AWS manages the security of the cloud (physical data centers, hypervisor, hardware), while the customer manages security in the cloud (network traffic, guest operating systems, applications, and data). Managing network access control lists (ACLs) is a customer responsibility because it involves configuring network traffic within the customer's virtual private cloud (VPC).
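The stateless, numbered-rule evaluation that network ACLs perform can be sketched in a few lines. This is a local illustration only: the rule set and the `evaluate` helper are invented for this example, not an AWS API.

```python
import ipaddress

# Illustrative network ACL rule set; the fields mirror the VPC console layout.
INBOUND_RULES = [
    {"rule": 100, "protocol": "tcp", "ports": (443, 443), "cidr": "0.0.0.0/0", "action": "allow"},
    {"rule": 200, "protocol": "tcp", "ports": (22, 22), "cidr": "203.0.113.0/24", "action": "allow"},
]

def evaluate(rules, protocol, port, source_ip):
    """Check rules in ascending rule-number order; the first match wins,
    and the implicit final rule (*) denies anything unmatched."""
    for r in sorted(rules, key=lambda r: r["rule"]):
        low, high = r["ports"]
        if (r["protocol"] == protocol and low <= port <= high
                and ipaddress.ip_address(source_ip) in ipaddress.ip_network(r["cidr"])):
            return r["action"]
    return "deny"  # implicit deny when no numbered rule matches

print(evaluate(INBOUND_RULES, "tcp", 443, "198.51.100.7"))  # allow: rule 100
print(evaluate(INBOUND_RULES, "tcp", 22, "198.51.100.7"))   # deny: SSH only from 203.0.113.0/24
```

Because NACLs are stateless, return traffic must be allowed by a separate outbound rule; security groups, by contrast, track connection state automatically.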

A retail company runs a legacy application on a single on-premises server. The application experiences unpredictable traffic surges that degrade performance. The company is considering migrating to the AWS Cloud. Which cloud computing characteristic MOST directly addresses the ability to automatically adjust resources to meet changing demand without manual intervention?

A

Elasticity

Correct. Elasticity is the ability to automatically provision and release cloud resources in response to changing demand. This directly addresses the company's need to handle traffic surges without manual intervention.

B

Scalability

C

High availability

D

Durability

Why: The characteristic described is elasticity, which allows cloud resources to scale up or down automatically based on real-time demand. This is a core benefit of cloud computing, letting an application absorb unpredictable traffic without human intervention. Scalability is a broader concept that includes both manual and automatic scaling, but elasticity specifically refers to the automated adjustment. High availability focuses on system uptime, not dynamic resource adjustment. Durability relates to long-term data preservation, not compute capacity.

A startup is deploying a web application on Amazon EC2 instances across multiple Availability Zones (AZs). The architecture must ensure that the application remains fully operational and available to users even if one entire AZ fails. Which cloud computing concept does this requirement MOST directly represent?

A

Elasticity

B

Fault tolerance

Correct. Fault tolerance describes a system that continues operating without interruption despite the failure of one or more components. Distributing workloads across multiple Availability Zones is a key method to achieve fault tolerance in AWS.

C

Scalability

D

Resource pooling

Why: Fault tolerance is the ability of a system to continue functioning without interruption when one or more of its components fail. Deploying resources across multiple AWS Availability Zones is a standard practice to achieve fault tolerance, as it eliminates a single point of failure at the data center level. While high availability is related, fault tolerance specifically implies no service interruption even in the event of a failure. Elasticity and scalability are about dynamically adjusting resources to meet demand, not about surviving infrastructure failures. Resource pooling refers to the multi-tenant sharing of computing resources across multiple customers.

A mid-size company is planning to migrate its IT infrastructure to the AWS Cloud. The Chief Information Officer (CIO) expresses concern that multiple customers' virtual servers might run on the same physical hardware, potentially increasing the risk of data exposure. Which cloud computing characteristic describes this shared infrastructure model, where computing resources are pooled to serve multiple customers using a multi-tenant model?

A

On-demand self-service

B

Resource pooling

Resource pooling is the cloud characteristic where the provider's computing resources are pooled to serve multiple customers using a multi-tenant model. This directly addresses the CIO's concern about shared physical hardware, and AWS implements strong isolation mechanisms to prevent data exposure.

C

Measured service

D

Broad network access

Why: The concern about multiple customers running on the same physical hardware directly relates to the cloud characteristic of resource pooling. Resource pooling means the provider's computing resources are pooled to serve multiple customers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to customer demand. AWS implements strong isolation controls (e.g., hypervisor-level separation) to ensure that customers cannot access each other's data or workloads. Understanding this characteristic helps customers trust the security of shared infrastructure.

A company based in Germany needs to store and process customer data that, by law, must remain within the European Union (EU). The company plans to use AWS services. Which AWS Global Infrastructure element is the MOST important for the company to evaluate when choosing where to deploy its resources?

A

Availability Zones

B

Edge Locations

C

AWS Regions

AWS Regions are distinct geographic areas that are completely isolated from each other. Choosing a Region within the EU (e.g., eu-central-1 in Frankfurt) ensures that the customer's data remains in the EU, satisfying data residency laws. This is the foundational decision before considering other infrastructure components.

D

Local Zones

Why: AWS Regions are geographically isolated areas that consist of multiple Availability Zones. Data residency regulations require that data remain within a specific geographic boundary. By selecting an AWS Region located within the EU (e.g., Frankfurt or Ireland), the company can meet its compliance requirement. Availability Zones, Edge Locations, and Local Zones are all components of the global infrastructure, but the primary decision for data residency starts with choosing an appropriate Region. Availability Zones are isolated locations within a single Region, so the Region choice, not the AZ choice, determines where data resides. Edge Locations are used for content caching and do not host primary compute or storage. Local Zones extend a parent Region to additional metropolitan areas, and data residency still depends on where each zone is physically located.

A company is currently running its IT infrastructure in an on-premises data center. The finance department wants to understand how moving to the AWS Cloud would change the company's cost structure. In particular, they want to avoid large upfront hardware purchases and instead pay only for the resources they consume on a monthly basis. Which key cloud computing concept does this shift represent?

A

Elasticity

B

Economies of scale

C

Pay-as-you-go pricing

Pay-as-you-go is a pricing model where customers pay only for the resources they consume, with no upfront commitments. This directly addresses the finance department's desire to avoid large upfront hardware purchases and shift to a variable monthly expense model.

D

High availability

Why: This scenario describes the transition from a capital expenditure (CapEx) model (large upfront purchases of hardware) to an operational expenditure (OpEx) model (pay-as-you-go for resources consumed). This is a fundamental benefit of cloud computing, often referred to as 'pay-as-you-go' pricing. Elasticity refers to automatic scaling, economies of scale refer to cost advantages from AWS's massive infrastructure, and high availability focuses on system uptime. None of those directly capture the shift from upfront investment to variable monthly spending.
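The CapEx-to-OpEx shift is easy to see with a toy calculation. All dollar figures below are hypothetical, chosen only to illustrate the difference between a fixed amortised cost and a variable consumption-based bill.

```python
# Hypothetical comparison: a large upfront hardware purchase amortised over
# three years versus a pay-as-you-go bill that tracks actual monthly usage.
capex_upfront = 120_000           # one-time hardware purchase (hypothetical)
months = 36                       # three-year amortisation period
capex_monthly_equivalent = capex_upfront / months  # fixed cost, used or not

# Pay-as-you-go: each month's charge follows that month's consumption.
monthly_usage_bills = [2_100, 1_800, 3_400]  # hypothetical variable spend

print(f"CapEx equivalent: ${capex_monthly_equivalent:,.2f}/month regardless of usage")
print(f"OpEx average:     ${sum(monthly_usage_bills) / len(monthly_usage_bills):,.2f}/month, tracks usage")
```

The point is not which number is smaller in a given month, but that the pay-as-you-go line item rises and falls with demand instead of being sunk upfront.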


Domain 2: Security and Compliance


A company is preparing for an annual compliance audit. The auditor requests a copy of the AWS SOC 2 Type II report to review AWS's controls. Which AWS service or tool can the company use to obtain this report?

A

AWS Config

B

AWS Artifact

AWS Artifact is the correct service. It is a self-service portal for on-demand access to AWS compliance reports and agreements. This allows customers to download reports like SOC 2 Type II directly.

C

AWS Trusted Advisor

D

AWS Security Hub

Why: AWS Artifact is the central resource for compliance-related information. It provides on-demand access to AWS compliance reports, such as SOC reports, PCI DSS reports, and ISO certifications, as well as agreements like the Business Associate Addendum (BAA). This allows customers to download AWS compliance documentation directly without needing to file a support ticket.

A company has deployed multiple EC2 instances with different security groups. The compliance team wants to ensure that no security group allows unrestricted SSH access (0.0.0.0/0) and receive alerts if any such rule is created. Which AWS service can they use to continuously monitor and evaluate the security group configurations against this policy?

A

AWS CloudTrail

B

Amazon GuardDuty

C

AWS Config

AWS Config continuously monitors and records AWS resource configurations and allows you to evaluate them against desired configurations using managed or custom rules. It can detect security groups with unrestricted SSH access and trigger notifications or automatic remediation.

D

AWS Security Hub

Why: AWS Config is the correct service for continuously monitoring and evaluating resource configurations against desired policies. It provides managed rules (e.g., restricted-ssh) that can detect security groups allowing unrestricted SSH access and trigger alerts or remediation. AWS CloudTrail records API calls but does not evaluate configuration state. Amazon GuardDuty detects threats using network and account activity, not configuration compliance. AWS Security Hub aggregates security findings from multiple services but does not itself perform configuration evaluation.
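The compliance check that the managed rule performs can be approximated locally. This is illustrative logic only, not the actual rule implementation; the dictionary shapes below mirror the EC2 DescribeSecurityGroups response format.

```python
def allows_unrestricted_ssh(security_group):
    """Flag ingress rules that open port 22 to the world (0.0.0.0/0 or ::/0)."""
    for rule in security_group.get("IpPermissions", []):
        from_p, to_p = rule.get("FromPort"), rule.get("ToPort")
        covers_22 = from_p is None or (from_p <= 22 <= to_p)  # None = all ports
        open_v4 = any(r["CidrIp"] == "0.0.0.0/0" for r in rule.get("IpRanges", []))
        open_v6 = any(r["CidrIpv6"] == "::/0" for r in rule.get("Ipv6Ranges", []))
        if covers_22 and (open_v4 or open_v6):
            return True
    return False

# SSH locked to a corporate CIDR vs. SSH open to the internet.
compliant = {"IpPermissions": [{"FromPort": 22, "ToPort": 22,
                                "IpRanges": [{"CidrIp": "203.0.113.0/24"}]}]}
noncompliant = {"IpPermissions": [{"FromPort": 22, "ToPort": 22,
                                   "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}]}
print(allows_unrestricted_ssh(compliant))     # False
print(allows_unrestricted_ssh(noncompliant))  # True
```

In practice AWS Config runs this kind of evaluation continuously and can notify through Amazon SNS or trigger automatic remediation when a resource drifts out of compliance.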

A company uses an IAM role to allow an application running on Amazon EC2 to decrypt data stored in Amazon S3. The security team wants to enforce that the application can only use the decryption permission when the IAM role has a specific tag (e.g., 'Environment=Production'). Which approach should the security team implement to meet this requirement?

A

Add a condition to the KMS key policy that uses the 'kms:RequestTag/ConditionKey' to require the tag on the caller.

B

Add a condition to the IAM role's trust policy that denies the 'kms:Decrypt' action unless the role has the tag.

C

Add a condition to the IAM policy that grants the 'kms:Decrypt' permission with a condition on 'aws:PrincipalTag' to require the tag.

Correct. IAM policies support the 'aws:PrincipalTag' condition key, which checks the tags attached to the IAM principal (user or role) making the request. By adding a condition like 'StringEquals': {'aws:PrincipalTag/Environment': 'Production'} to the IAM policy that grants 'kms:Decrypt', the decryption action is only allowed when the role has the specified tag. This is a form of attribute-based access control (ABAC).

D

Add a condition to the S3 bucket policy that denies all access unless the IAM role has the required tag.

Why: The question tests the ability to use IAM policy conditions to restrict permissions based on the principal's tags. In AWS, you can use the 'aws:PrincipalTag' condition key in an IAM policy to allow or deny actions only when the requesting principal has a specific tag. This is a common pattern for implementing attribute-based access control (ABAC). Option C is correct because it places the condition directly on the IAM policy granting the kms:Decrypt action. Option A is incorrect because 'kms:RequestTag/ConditionKey' is not a valid condition key; checking the caller's tags is done with 'aws:PrincipalTag', while kms-prefixed keys such as 'kms:EncryptionContext:<context-key>' constrain request parameters, not the principal. Option B is incorrect because a role's trust policy controls who can assume the role, not what actions the role can perform. Option D is incorrect because an S3 bucket policy can restrict access to the bucket itself, but it does not control the KMS decrypt action; KMS permissions are evaluated separately.
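A sketch of the identity-based policy from Option C follows. The account ID and key ID in the ARN are placeholders; 'aws:PrincipalTag' and the policy grammar are real, but this exact document is illustrative.

```python
import json

# ABAC sketch: allow kms:Decrypt only when the calling principal carries the
# tag Environment=Production. Resource ARN uses placeholder account/key IDs.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "kms:Decrypt",
        "Resource": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",
        "Condition": {
            "StringEquals": {"aws:PrincipalTag/Environment": "Production"}
        },
    }],
}
print(json.dumps(policy, indent=2))
```

Attached to the EC2 instance role, this grants decryption only while the role itself is tagged Environment=Production; removing the tag removes the permission without editing the policy.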

A company needs to maintain a secure audit trail of all API calls made against its AWS resources. The audit trail must record the identity of the caller, the time of the call, the source IP address, and the request details. The records must be stored securely with integrity guarantees for a minimum of five years to meet compliance requirements. Which AWS service should the company use to capture and store this information?

A

AWS Config

B

Amazon GuardDuty

C

AWS CloudTrail

AWS CloudTrail is the correct service. It records all API calls made to the AWS environment, including details such as the caller's identity, time of the call, source IP address, and request parameters. The logs can be stored durably in Amazon S3 with integrity validation and can be retained for as long as needed.

D

AWS Trusted Advisor

Why: AWS CloudTrail is the service designed to record API activity across AWS services. It logs every API call, including the caller identity, time, source IP, and request parameters. Logs can be stored in Amazon S3 with optional encryption and integrity validation (via log file validation). This meets the requirement for a secure, long-term audit trail. In contrast, AWS Config tracks resource configuration changes, not API calls; Amazon GuardDuty is a threat detection service; and AWS Trusted Advisor provides best practice recommendations and does not capture API call logs.
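An abridged CloudTrail record shows how the four required attributes appear in one log entry. The field names follow the CloudTrail record format; the values are invented for illustration.

```python
# Abridged CloudTrail record (real field names, invented values).
record = {
    "eventTime": "2026-05-01T12:34:56Z",
    "eventSource": "ec2.amazonaws.com",
    "eventName": "TerminateInstances",
    "sourceIPAddress": "203.0.113.10",
    "userIdentity": {"type": "IAMUser",
                     "arn": "arn:aws:iam::111122223333:user/alice"},
    "requestParameters": {"instancesSet": {"items": [{"instanceId": "i-0abcd1234example"}]}},
}

# Identity, time, source IP, and request: everything the audit trail needs.
audit_row = (record["userIdentity"]["arn"], record["eventTime"],
             record["sourceIPAddress"], record["eventName"])
print(audit_row)
```

For the five-year retention requirement, the trail would be configured to deliver these records to an S3 bucket with log file validation enabled and a lifecycle policy or Object Lock governing retention.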

A financial services company requires all data stored in Amazon S3 to be encrypted at rest. The company has a compliance policy that states encryption keys must be managed entirely by the customer and must never be stored or managed by the cloud provider. Which encryption option should the company use for Amazon S3?

A

Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)

B

Server-Side Encryption with AWS KMS Customer Managed Keys (SSE-KMS)

C

Server-Side Encryption with Customer-Provided Keys (SSE-C)

Correct. SSE-C allows you to provide your own encryption key with each request. AWS uses the key to encrypt/decrypt the data but does not store the key. This meets the compliance requirement that keys are managed entirely by the customer and are never stored by the cloud provider.

D

Client-Side Encryption using an on-premises key management system

Why: This question tests understanding of Amazon S3 encryption options and the shared responsibility model. AWS offers three main server-side encryption options for S3: SSE-S3 (keys managed by AWS), SSE-KMS (keys managed in AWS KMS, with an option for customer managed keys stored in KMS), and SSE-C (customer-provided keys that AWS uses but never stores). SSE-C is the only server-side encryption option where the encryption key is never stored by AWS, meeting the requirement that keys are managed entirely by the customer and not stored by the cloud provider. Client-side encryption encrypts data before it reaches S3 and can also meet key management requirements, but it is not an S3 server-side encryption feature and requires additional application changes. Among the server-side options, SSE-C is the correct choice.
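With SSE-C, the client supplies the key on every request via three headers; the header names below are the real S3 SSE-C request headers, and the snippet only computes them locally without calling S3.

```python
import base64
import hashlib
import os

# The customer generates and keeps the key; AWS uses it per request, then
# discards it. S3 requires the key base64-encoded plus an MD5 integrity check.
key = os.urandom(32)  # 256-bit key held only by the customer

sse_c_headers = {
    "x-amz-server-side-encryption-customer-algorithm": "AES256",
    "x-amz-server-side-encryption-customer-key": base64.b64encode(key).decode(),
    "x-amz-server-side-encryption-customer-key-MD5":
        base64.b64encode(hashlib.md5(key).digest()).decode(),
}
for name, value in sse_c_headers.items():
    print(name, "=", value[:16], "...")
```

The same headers must accompany every GET as well as every PUT: lose the key and the objects are unrecoverable, which is exactly the trade-off the compliance policy accepts.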

A company runs a web application on Amazon EC2 that connects to an Amazon RDS database. The database credentials are currently hardcoded in the application configuration file. The security team requires that the credentials be automatically rotated every 90 days and that the application retrieves them securely from a managed service without storing them in the application code. Which AWS service should the company use to meet these requirements?

A

AWS Key Management Service (AWS KMS)

B

AWS Secrets Manager

AWS Secrets Manager is the correct service because it stores database credentials securely, allows retrieval via API calls, and can automatically rotate credentials for supported services like Amazon RDS on a defined schedule (e.g., every 90 days).

C

AWS Systems Manager Parameter Store

D

AWS Certificate Manager (ACM)

Why: AWS Secrets Manager is designed to securely store, retrieve, and automatically rotate secrets such as database credentials, API keys, and other sensitive information. It integrates with Amazon RDS for automatic rotation of credentials without custom coding. AWS KMS manages encryption keys, not secrets. AWS Systems Manager Parameter Store can store secrets but does not natively support automatic rotation without additional custom solutions. AWS Certificate Manager handles SSL/TLS certificates, not database credentials.


Domain 3: Cloud Technology and Services


A healthcare company needs to store patient medical records that must be retained for 10 years to comply with regulatory requirements. These records are accessed very rarely, only in the event of an audit or legal request. Which Amazon S3 storage class is the MOST cost-effective choice for this data?

A

S3 Standard

B

S3 Intelligent-Tiering

C

S3 One Zone-IA

D

S3 Glacier Deep Archive

S3 Glacier Deep Archive is the lowest-cost S3 storage class, designed for long-term retention of data that is accessed extremely rarely (e.g., once or twice per year). It provides secure and durable storage with retrieval times of 12-48 hours, making it the most cost-effective choice for regulatory archives with a 10-year retention requirement.

Why: Amazon S3 Glacier Deep Archive is designed for long-term retention of data that is accessed extremely rarely, with retrieval times of 12 hours or more. It offers the lowest storage cost among S3 storage classes, making it ideal for compliance archives where data must be kept for years and accessed only a few times per year. S3 Standard is for frequently accessed data and would be far more expensive. S3 Intelligent-Tiering automatically moves data between tiers but is not as cost-effective as Deep Archive for long-term archival with rare access. S3 One Zone-IA stores data in a single Availability Zone, so it lacks the multi-AZ resilience expected for critical compliance data.
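The cost gap is the whole argument, so it is worth putting numbers on it. The per-GB prices below are approximate published us-east-1 figures at the time of writing and change over time; treat them, and the 10 TB archive size, as illustrative only.

```python
# Approximate us-east-1 list prices in USD per GB-month (illustrative; check
# current S3 pricing before relying on these figures).
PRICE_PER_GB_MONTH = {
    "S3 Standard": 0.023,
    "S3 Glacier Deep Archive": 0.00099,
}

data_gb = 10_000  # hypothetical 10 TB compliance archive
years = 10        # regulatory retention period

totals = {cls: data_gb * price * 12 * years
          for cls, price in PRICE_PER_GB_MONTH.items()}
for cls, total in totals.items():
    print(f"{cls}: ${total:,.0f} over {years} years")
```

Even before retrieval and request fees, storing a rarely touched archive in S3 Standard costs more than twenty times as much as Deep Archive at these rates.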

A company hosts a static website on Amazon S3. The website serves product images and documents to customers around the world. Users in distant regions report slow load times. The company wants to reduce latency for all users without changing the existing S3 bucket configuration. Which AWS service should the company use?

A

Amazon CloudFront

Correct. CloudFront is a CDN that caches static content at edge locations, reducing latency for global users.

B

AWS Direct Connect

C

Amazon Route 53

D

AWS Global Accelerator

Why: Amazon CloudFront is a content delivery network (CDN) that caches static content at edge locations worldwide. This reduces latency by serving content from locations closer to the user, without requiring any changes to the origin S3 bucket. Direct Connect provides dedicated network connectivity but does not cache content. Route 53 is a DNS service that directs traffic but does not reduce content retrieval latency. Global Accelerator improves performance for TCP/UDP traffic by using edge locations, but it does not cache content; it optimizes the network path for dynamic requests. Therefore, CloudFront is the correct choice for caching static assets globally.

A company is developing a microservices application on AWS. The application includes a front-end web tier and a backend order processing service. The front-end sends order requests to the backend, which may take several seconds to process. The company wants to ensure that the front-end does not wait for the backend to complete, and that no orders are lost if the backend service is temporarily unavailable. Which AWS service should the company use to decouple the front-end and backend?

A

Amazon ElastiCache

B

Amazon Simple Queue Service (SQS)

Amazon SQS is a message queuing service that decouples application components. It allows the front-end to send messages to a queue, which are then processed by the backend independently, ensuring no data loss and asynchronous processing.

C

Amazon Route 53

D

Amazon CloudWatch

Why: Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables decoupling of application components. By placing order requests into an SQS queue, the front-end can continue operating without waiting for the backend. The backend processes messages from the queue at its own pace. If the backend is temporarily unavailable, messages remain safely in the queue until they can be processed. This ensures no orders are lost and the front-end remains responsive. Other services like ElastiCache, Route 53, and CloudWatch do not provide the queue-based decoupling needed for this asynchronous workload.
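The decoupling pattern can be sketched in-process with Python's standard-library queue. This is a stand-in for SQS, which additionally persists messages durably across servers and retains them while consumers are offline.

```python
from queue import Queue

# In-process stand-in for an SQS queue between front-end and backend.
orders = Queue()

# Front-end: enqueue each order and return immediately, even though the
# backend is not consuming yet (simulating a backend outage).
for order_id in ("order-1", "order-2", "order-3"):
    orders.put(order_id)
print("front-end done; queued:", orders.qsize())

# Backend comes back later and drains the queue at its own pace; nothing
# enqueued during the outage was lost.
processed = []
while not orders.empty():
    processed.append(orders.get())
print("processed:", processed)
```

With real SQS, the backend would also delete each message after successful processing, and the queue's visibility timeout would return unacknowledged messages for retry.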

A development team is building a serverless application that processes image uploads to Amazon S3. The application needs to automatically generate a thumbnail version of each uploaded image and store it in a separate S3 bucket. The team wants to minimize operational overhead and only pay for the compute time used during thumbnail generation. Which AWS service should the team use to execute the thumbnail generation code in response to S3 upload events?

A

Amazon EC2 Auto Scaling group

B

AWS Lambda

AWS Lambda is a serverless compute service that can be triggered directly by S3 events. It runs code only when invoked, scales automatically, and bills only for the compute time used, meeting all the stated requirements.

C

Amazon ECS with Fargate

D

Amazon Elastic Beanstalk

Why: The requirement is for an event-driven, serverless compute service that triggers on S3 upload events and runs only when needed. AWS Lambda is designed for exactly this use case: it runs code in response to events such as S3 object creation, and charges only for the compute time consumed. No servers to manage, and it scales automatically. Amazon EC2 Auto Scaling requires managing instances and is not event-triggered in this direct way. Amazon ECS with Fargate reduces server management but still requires container orchestration and is less straightforward for simple S3 event triggers. Amazon Elastic Beanstalk provides a PaaS environment but is not primarily event-driven and includes ongoing costs even when idle.
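The shape of such a function can be sketched as below. The event structure matches the S3 "ObjectCreated" notification format; the handler name, destination bucket, and the stubbed-out resize step are placeholders, since a real function would use an image library and upload with boto3.

```python
import os

DEST_BUCKET = "my-thumbnails-bucket"  # placeholder bucket name

def handler(event, context):
    """Lambda-style handler for S3 ObjectCreated events (resize step stubbed)."""
    results = []
    for rec in event["Records"]:
        bucket = rec["s3"]["bucket"]["name"]
        key = rec["s3"]["object"]["key"]
        # ...download the object, generate the thumbnail, upload it here...
        base, ext = os.path.splitext(key)
        results.append({"source": f"{bucket}/{key}",
                        "thumbnail": f"{DEST_BUCKET}/{base}-thumb{ext}"})
    return results

# Invoke locally with a trimmed-down sample event.
sample_event = {"Records": [{"s3": {"bucket": {"name": "uploads"},
                                    "object": {"key": "cat.jpg"}}}]}
print(handler(sample_event, None))
```

Wiring is then one configuration step: the S3 bucket's event notification invokes this function on each upload, and billing accrues only for the milliseconds each invocation runs.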

A company runs a web application on multiple Amazon EC2 instances that are behind an Application Load Balancer. The operations team wants to ensure that if any EC2 instance fails, a new instance is automatically launched to replace it and maintain a minimum number of running instances. Which AWS service should the company use to meet this requirement?

A

AWS Elastic Load Balancing

B

Amazon EC2 Auto Scaling

Amazon EC2 Auto Scaling is the correct service. It automatically adds or removes EC2 instances based on defined policies or to maintain a desired capacity. If an instance fails, Auto Scaling detects the decrease in healthy capacity and launches a new instance to replace it, ensuring the application remains available.

C

AWS Lambda

D

AWS Auto Scaling

Why: Amazon EC2 Auto Scaling is the AWS service that automatically launches or terminates EC2 instances to maintain a specified number of healthy instances. It can react to instance failures by launching replacements to meet the desired capacity. Elastic Load Balancing distributes traffic but does not manage instance health or replacements. AWS Lambda is a serverless compute service, not for managing EC2 instances. AWS Auto Scaling is a related but broader service; the specific service for EC2 instances is Amazon EC2 Auto Scaling.
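The self-healing behaviour is a reconciliation loop: compare healthy capacity against desired capacity and launch replacements until they match. The toy model below is illustrative only; the real service drives this from EC2 and ELB health checks.

```python
import itertools

# Toy model of an Auto Scaling group maintaining its desired capacity.
DESIRED_CAPACITY = 3
_ids = itertools.count(1)

def launch():
    return f"i-{next(_ids):04d}"  # fake instance IDs for the sketch

instances = {launch(): "healthy" for _ in range(DESIRED_CAPACITY)}

# One instance fails its health check...
failed = next(iter(instances))
instances[failed] = "unhealthy"

# ...so the group drops it and launches replacements until healthy capacity
# matches the desired capacity again.
healthy = {i: s for i, s in instances.items() if s == "healthy"}
while len(healthy) < DESIRED_CAPACITY:
    healthy[launch()] = "healthy"

print(len(healthy), "healthy instances")  # back at desired capacity
```

Paired with the Application Load Balancer, new instances are registered as targets automatically, so replacement is invisible to users.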

A company runs a multiplayer gaming application on Amazon EC2 instances in the us-east-1 Region. The application uses the UDP protocol for real-time communication between players and game servers. Players in Asia and Europe report high latency and packet loss. The company wants to improve performance by directing player traffic from the nearest edge location to the application over the AWS global network, without modifying the application code. Which AWS service should the company use?

A

Amazon CloudFront

B

AWS Global Accelerator

AWS Global Accelerator uses the AWS global network to improve the performance of TCP/UDP applications. It directs user traffic to the nearest edge location and then routes it over the AWS global backbone to the optimal regional endpoint, reducing latency and packet loss. This matches the requirement for a UDP-based gaming application with no application changes.

C

Amazon Route 53 latency routing

D

AWS Site-to-Site VPN

Why: AWS Global Accelerator is designed to improve the performance of applications that use TCP or UDP by directing traffic to the nearest AWS edge location and then routing it over the AWS global network to the optimal endpoint. This reduces latency and jitter for real-time applications like multiplayer games. Amazon CloudFront is a content delivery network (CDN) optimized for caching HTTP/HTTPS content and does not natively support UDP acceleration. Amazon Route 53 latency routing directs DNS queries to the region with the lowest latency, but it does not use edge locations to accelerate traffic. AWS Site-to-Site VPN is used for connecting on-premises networks to the AWS cloud, not for globally accelerating end-user traffic.


Domain 4: Billing, Pricing, and Support


A company runs multiple workloads on Amazon EC2 instances. They expect consistent usage for the next three years but want the flexibility to change instance families (for example, from M5 to C5) if performance requirements shift. Which AWS pricing model meets these requirements while providing a significant discount over On-Demand pricing?

A

Reserved Instances (Standard)

B

Compute Savings Plans

Compute Savings Plans apply to any EC2 instance family, any size, in any region, and also cover Fargate and Lambda usage. This gives the company the flexibility to change instance families while still receiving a significant discount over On-Demand rates, making it the correct choice.

C

EC2 Instance Savings Plans

D

Spot Instances

Why: Compute Savings Plans are the most flexible discount model for EC2. They apply to any instance family, size, and Region, so the company can move from M5 to C5 without losing the discount. EC2 Instance Savings Plans offer a deeper discount but commit to a specific instance family in a specific Region, so changing families would forfeit the benefit. Standard Reserved Instances likewise lock in an instance family and Region, and Spot Instances can be interrupted and do not guarantee capacity, making them unsuitable for a consistent three-year workload.

A company wants to proactively monitor its AWS spending and receive email notifications when actual or forecasted costs exceed a defined threshold. The company has a monthly budget of $10,000 and wants to be alerted when costs reach 80% of the budget. Which AWS service should the company use to meet these requirements?

A

AWS Cost Explorer

B

AWS Budgets

AWS Budgets is the correct service because it enables you to set custom cost and usage budgets and define threshold alerts that trigger email notifications (or actions via Amazon SNS) when actual or forecasted costs exceed specified percentages of the budget. This directly meets the requirement for proactive monitoring and alerts at the 80% threshold.

C

AWS Trusted Advisor

D

AWS Consolidated Billing

Why: AWS Budgets allows you to set custom cost and usage budgets and receive alerts when actual or forecasted costs exceed your defined thresholds. In this scenario, the company can create a monthly cost budget of $10,000 and set an alert at 80% ($8,000) to receive email notifications. Cost Explorer visualises historical spend but does not send threshold alerts; Trusted Advisor and Consolidated Billing do not provide budget alerting at all.
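The threshold arithmetic behind such an alert is simple enough to sketch directly (an illustration of the logic, not the AWS Budgets implementation):

```python
# Sketch of the threshold logic behind a budget alert.
BUDGET = 10_000   # monthly budget in USD
ALERT_PCT = 80    # alert when spend reaches 80% of budget

def alert_triggered(actual_spend, budget=BUDGET, pct=ALERT_PCT):
    """True once spend reaches the alert threshold (here, $8,000)."""
    return actual_spend >= budget * pct / 100

print(alert_triggered(7_999))  # False: still under the $8,000 threshold
print(alert_triggered(8_000))  # True: threshold reached, notification fires
```

AWS Budgets applies the same comparison to both actual and forecasted spend, so the alert can fire before the money is spent if the forecast projects an overrun.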

A company operates five separate AWS accounts for different business units. The finance team wants to aggregate the usage across all accounts to benefit from volume pricing discounts and to receive a single monthly bill. The company does not need to centrally manage permissions or apply service control policies at this time. Which AWS feature should the company use to meet these requirements?

A

Consolidated Billing through AWS Organizations

Consolidated Billing aggregates usage across multiple accounts, enabling volume discounts and a single monthly invoice. This is the correct feature for the requirement.

B

AWS Cost Explorer

C

AWS Budgets

D

AWS Trusted Advisor

Why: Consolidated Billing is a feature of AWS Organizations that allows you to combine the usage across multiple accounts to receive volume pricing discounts and a single monthly bill. It does not require the use of advanced management features like service control policies or centralized permission management. Cost Explorer is a tool for visualizing and analyzing costs, not for aggregating billing. AWS Budgets allows you to set spending limits and alerts. AWS Trusted Advisor provides best practice checks and recommendations.

A company wants to review its AWS spending for the past six months to identify which services and business units are driving costs. The finance team needs to interactively examine cost trends, filter by service and account, and visualize the data without setting up complex reports. Which AWS service or tool should the company use to meet these requirements?

A

AWS Cost Explorer

AWS Cost Explorer offers a ready-to-use graphical interface to explore and analyze your AWS costs and usage over custom time periods, with filters by service, account, region, and tags. It directly meets the need for interactive trend analysis without requiring additional setup.

B

AWS Budgets

C

AWS Trusted Advisor

D

AWS Cost and Usage Reports

Why: AWS Cost Explorer is the correct choice because it provides a pre-built, interactive dashboard for visualizing, understanding, and managing AWS costs and usage over time. Users can filter by service, linked account, tags, and other dimensions, and view trends without additional setup. AWS Budgets is used for proactive alerts, not historical analysis. AWS Trusted Advisor offers cost optimization checks but lacks the detailed historical trend analysis and interactive visualization. AWS Cost and Usage Reports provides raw data for deep analysis but requires loading into a tool like Amazon Athena or QuickSight, making it less suitable for quick interactive exploration.

A company is launching a critical production application on AWS. The operations team requires technical support with a response time of less than 1 hour for urgent system issues. They also need access to AWS Trusted Advisor best practice checks for cost optimization and security. Which AWS Support plan meets these requirements at the lowest cost?

A

AWS Developer Support

B

AWS Business Support

The Business Support plan offers a 1-hour response for urgent (severity 1) cases and full access to Trusted Advisor best practice checks. This meets the requirements at the lowest cost among plans that satisfy these needs.

C

AWS Enterprise On-Ramp Support

D

AWS Enterprise Support

Why: The AWS Business Support plan provides a response time of less than 1 hour for production-system-down cases and includes full access to AWS Trusted Advisor best practice checks. The Developer plan's fastest response target is 12 hours (impaired system) and it includes only the core Trusted Advisor checks. The Enterprise On-Ramp plan offers a 30-minute response for business-critical cases but costs more than Business. The Enterprise plan offers a 15-minute response and is more costly still. Therefore, the Business plan is the most cost-effective choice that meets both the response time and Trusted Advisor requirements.

A company operates separate AWS accounts for its engineering, marketing, and finance departments. The CFO wants to consolidate billing to receive a single monthly invoice and to benefit from volume pricing discounts. The security team also requires a centralized mechanism to prevent users in any department from launching Amazon EC2 instances outside of the us-east-1 and eu-west-1 Regions to meet data residency compliance. Which AWS service or feature should the company use to meet both requirements?

A

AWS Budgets

B

AWS Organizations with Service Control Policies (SCPs)

AWS Organizations provides consolidated billing for a single invoice and volume discounts, and SCPs allow you to centrally define and enforce permission guardrails (e.g., restricting Regions) across all member accounts. This directly meets both requirements.

C

AWS Identity and Access Management (IAM) cross-account roles

D

AWS Cost and Usage Reports

Why: AWS Organizations allows you to create a hierarchy of AWS accounts, enabling consolidated billing for volume discounts and a single invoice. Additionally, Organizations provides Service Control Policies (SCPs), which can centrally restrict what services and actions are allowed across member accounts, including limiting the Regions where EC2 instances can be launched. This meets both the billing consolidation and the regional compliance requirement without needing to configure permissions in each account individually.
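A sketch of the SCP for the Region restriction follows. 'aws:RequestedRegion' is a real global condition key; the policy document itself is illustrative, and a production version would typically exempt global services (IAM, CloudFront, Route 53, etc.), a refinement omitted here.

```python
import json

# Illustrative SCP: deny EC2 actions requested outside the approved Regions.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyEC2OutsideApprovedRegions",
        "Effect": "Deny",
        "Action": "ec2:*",
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {"aws:RequestedRegion": ["us-east-1", "eu-west-1"]}
        },
    }],
}
print(json.dumps(scp, indent=2))
```

Attached at the organisation root, this guardrail applies to every member account at once, so no per-account IAM changes are needed, while consolidated billing handles the single-invoice requirement.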


Frequently asked questions

How many questions are on the CLF-C02 exam?

The CLF-C02 exam has 65 questions (50 scored and 15 unscored) and must be completed in 90 minutes. The passing score is 700 out of 1000.

What types of questions appear on the CLF-C02 exam?

The CLF-C02 exam uses multiple-choice questions (one correct response) and multiple-response questions (two or more correct responses out of five or more options). Courseiva's practice questions follow the same formats.

How are CLF-C02 questions organised by domain?

The exam covers 4 domains: Cloud Concepts (24%), Security and Compliance (30%), Cloud Technology and Services (34%), and Billing, Pricing, and Support (12%). Questions are weighted by domain, so higher-weight domains contribute more questions to your actual exam.

Are these the actual CLF-C02 exam questions?

No. These are original exam-style practice questions written against the official Amazon Web Services CLF-C02 exam objectives. They are not copied from the real exam. Courseiva focuses on genuine understanding, not memorisation of braindumps.

Ready to practice all 65 CLF-C02 questions?

Courseiva tracks your accuracy per domain and routes you toward weak areas automatically. Free, no account required.