Amazon Web Services · Free Practice Questions · Last reviewed May 2026

SOA-C02 Exam Questions and Answers

36 real exam-style questions organised by domain, each with the correct answer highlighted and a plain-English explanation of why it's right — and why the others are wrong.

65 exam questions
180 min time limit
Pass at 720 / 1000
6 exam domains

Domain 1: Monitoring, Logging, and Remediation


A company uses AWS CloudTrail to log API calls across all regions. The SysOps administrator notices that logs for a specific region are missing from the centralized S3 bucket. What is the most likely cause?

A

The CloudTrail trail is not enabled for that region.

Correct. CloudTrail must be explicitly enabled for each region or a multi-region trail must be used. Missing logs for a specific region strongly suggests the trail is not applied there.

B

The S3 bucket policy denies write access from CloudTrail for that region.

C

CloudTrail log file validation is disabled.

D

The IAM role for CloudTrail does not have permissions to write logs from that region.

Why: CloudTrail can be configured as a single-region trail or a multi-region trail. A multi-region trail applies to all regions, while a single-region trail only captures events in the region where it was created, so logs missing for one specific region indicate the trail is not applied there. A restrictive S3 bucket policy would more likely block delivery from all regions at once, disabling log file validation affects integrity verification rather than delivery, and CloudTrail delivers to S3 through its service principal and the bucket policy, not a customer-managed IAM role.
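As a rough sketch of the fix, these are the request parameters for a multi-region trail (the trail and bucket names below are placeholders, not taken from the question):

```python
def multi_region_trail_params(trail_name: str, bucket: str) -> dict:
    """Build kwargs for cloudtrail.create_trail that cover every region."""
    return {
        "Name": trail_name,
        "S3BucketName": bucket,
        "IsMultiRegionTrail": True,      # capture events from all regions
        "EnableLogFileValidation": True, # optional integrity checking
    }

params = multi_region_trail_params("org-trail", "central-log-bucket")
```

Passing `params` to `boto3.client("cloudtrail").create_trail(**params)` would create the trail; the key point is `IsMultiRegionTrail=True`, which closes the per-region gap described above.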

A SysOps team needs to monitor application logs in Amazon CloudWatch Logs for specific error codes and automatically invoke an AWS Lambda function for remediation within 5 minutes of an error occurring. Which solution involves the least operational overhead?

A

Create a CloudWatch Logs subscription filter to stream logs directly to an AWS Lambda function.

B

Create a CloudWatch metric filter on the log group, create a CloudWatch alarm on the metric, and configure the alarm to post to an SNS topic that triggers the Lambda function.

Correct. This uses native CloudWatch features with minimal overhead, meeting the 5-minute requirement through alarm evaluation intervals.

C

Use a third-party log aggregation tool that sends webhook notifications to an API Gateway endpoint to invoke the Lambda function.

D

Write a custom script that runs on an EC2 instance to poll CloudWatch Logs every minute and invoke the Lambda function.

Why: CloudWatch Logs can use a metric filter to extract error codes and create a custom metric, then an alarm based on that metric triggers an SNS topic that invokes a Lambda function. This native pattern integrates fully within CloudWatch and requires no custom code for log streaming. A subscription filter streams all log events in real time to Lambda, which would require the function to parse each event and could result in higher costs and more complexity. Polling with a custom script adds overhead. Third-party tools introduce external dependencies.
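A minimal sketch of the two CloudWatch API payloads this pattern needs (the log group name, namespace, filter pattern, and SNS topic ARN are illustrative placeholders):

```python
def metric_filter_params(log_group: str) -> dict:
    """Kwargs for logs.put_metric_filter: count matching error lines."""
    return {
        "logGroupName": log_group,
        "filterName": "Error500Filter",
        "filterPattern": '"ERROR 500"',  # match lines containing the code
        "metricTransformations": [{
            "metricName": "Error500Count",
            "metricNamespace": "App/Errors",
            "metricValue": "1",          # emit 1 per matching log line
        }],
    }

def alarm_params(topic_arn: str) -> dict:
    """Kwargs for cloudwatch.put_metric_alarm on the custom metric."""
    return {
        "AlarmName": "Error500Alarm",
        "Namespace": "App/Errors",
        "MetricName": "Error500Count",
        "Statistic": "Sum",
        "Period": 60,                    # 1-minute evaluation meets the 5-minute SLA
        "EvaluationPeriods": 1,
        "Threshold": 1,
        "ComparisonOperator": "GreaterThanOrEqualToThreshold",
        "AlarmActions": [topic_arn],     # SNS topic subscribed by the Lambda
    }

mf = metric_filter_params("/app/prod")
al = alarm_params("arn:aws:sns:us-east-1:111122223333:ops-alerts")
```

The Lambda function simply subscribes to the SNS topic, so no log-streaming code has to be written or maintained.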

A company uses an Amazon S3 bucket to store sensitive data. The SysOps administrator needs to be notified within 15 minutes if any object in the bucket becomes publicly accessible. Which solution will meet this requirement with the least operational overhead?

A

Configure an S3 event notification for all object creation events and publish to an Amazon SNS topic that sends an email alert.

B

Use an AWS Config managed rule to detect 's3-bucket-public-read-prohibited' and trigger an SNS notification via Amazon EventBridge.

C

Enable Amazon CloudTrail data events for the S3 bucket and create a CloudWatch Logs metric filter for PutObjectAcl (or PutObject with public ACL) and set an alarm.

D

Configure S3 event notifications for 's3:ObjectCreated:Put' and 's3:ObjectCreated:PutObjectAcl' with a suffix/prefix filter for public grants, sending to an SNS topic.

Correct. This allows real-time notification specifically when objects are created with public ACLs, meeting the requirement with minimal overhead.

Why: Amazon S3 Event Notifications are delivered within seconds of the triggering API call, so notifying on object-creation and ACL-change events meets the 15-minute window with minimal setup. Option A notifies on every object creation, not only on objects that become public. AWS Config rules evaluate compliance on a delay and are not guaranteed to report within 15 minutes. CloudTrail data events with a CloudWatch Logs metric filter can work, but require more components to build and maintain than native S3 notifications.

A SysOps administrator is troubleshooting an application that runs on AWS Lambda. The application occasionally fails with timeout errors. The administrator needs to identify the exact lines of code that are causing the delays. Which AWS service or feature should be used to gather this information?

A

Enable detailed CloudWatch Logs and search for 'timeout' strings.

B

Use AWS X-Ray to trace the Lambda function and view segment details.

Correct. X-Ray traces function executions and can be instrumented to capture subsegments for individual calls, helping identify which code paths or API calls are slow.

C

Set a CloudWatch Metric Filter for 'Duration' and create an alarm.

D

Enable AWS CloudTrail data events for the Lambda function.

Why: AWS X-Ray provides end-to-end tracing that captures function invocations, including subsegments for instrumented code blocks and downstream calls, and shows where time is spent. CloudWatch Logs can show log output but cannot pinpoint slow code without additional instrumentation. CloudWatch metrics show aggregated durations, not a per-invocation breakdown. CloudTrail records API calls, not code-level timing.

A SysOps administrator needs to monitor the CPU utilization of an Amazon EC2 instance fleet and send an alert when the average CPU utilization exceeds 80% for 10 consecutive minutes. The administrator also wants to automatically stop the instance if the CPU utilization remains above 90% for 30 minutes to prevent runaway costs. Which combination of AWS services should be used?

A

Amazon CloudWatch alarm + AWS Lambda + AWS Systems Manager Automation

B

Amazon CloudWatch alarm + Amazon Simple Notification Service (SNS) + AWS Lambda

Correct. A CloudWatch alarm monitors the CPU metric and publishes to an SNS topic when the threshold is breached. The SNS topic triggers a Lambda function that calls the EC2 StopInstances API to stop the instance. This is a clean, low-overhead solution.

C

Amazon CloudWatch Logs + Amazon EventBridge + AWS Step Functions

D

AWS CloudTrail + Amazon EventBridge + AWS CodePipeline

Why: Amazon CloudWatch alarms can monitor the CPUUtilization metric. When the alarm state is triggered, it can publish to an Amazon SNS topic, which in turn invokes an AWS Lambda function. The Lambda function can then perform the stop action on the instance. This provides a simple and effective automated remediation path.
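A minimal sketch of the remediation Lambda. For testability the EC2 client is injected as a parameter (real Lambda handlers take `(event, context)` and create the client inside), and the message schema with an `instance_id` key is a hypothetical convention, not an AWS-defined format:

```python
import json

def handler(event, ec2_client):
    """Stop the EC2 instance named in the SNS-delivered alarm message."""
    message = json.loads(event["Records"][0]["Sns"]["Message"])
    instance_id = message["instance_id"]          # placeholder message schema
    ec2_client.stop_instances(InstanceIds=[instance_id])
    return instance_id

class FakeEC2:
    """Stand-in for boto3's EC2 client, used only to exercise the handler."""
    def __init__(self):
        self.stopped = []
    def stop_instances(self, InstanceIds):
        self.stopped.extend(InstanceIds)

# Simulate the SNS event envelope that Lambda receives.
event = {"Records": [{"Sns": {"Message": json.dumps({"instance_id": "i-0abc"})}}]}
fake = FakeEC2()
handler(event, fake)
```

In production the function's execution role would need `ec2:StopInstances` permission on the target instances.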

A SysOps administrator manages an application that runs on Amazon EC2 instances and stores critical data in Amazon Elastic Block Store (EBS) volumes. The administrator needs to monitor the EBS volumes for any performance bottlenecks. The key metric of interest is the average number of I/O operations per second (IOPS) that are waiting to be completed. Which Amazon CloudWatch metric should the administrator examine?

A

VolumeQueueLength

Correct. This metric shows the number of pending I/O operations waiting to be serviced. A high value indicates a bottleneck.

B

VolumeReadOps

C

VolumeIdleTime

D

VolumeTotalReadTime

Why: Amazon CloudWatch provides detailed metrics for EBS volumes. The 'VolumeQueueLength' metric measures the number of pending I/O operations (read and write requests) waiting to be completed. A sustained high queue length typically indicates a performance bottleneck where the volume's provisioned IOPS limit is being reached. VolumeReadOps and VolumeWriteOps count completed operations over the reporting period, not pending ones. VolumeIdleTime indicates when no I/O is occurring. VolumeTotalReadTime is the total seconds spent on read operations, not a measure of the wait queue.


Domain 2: Reliability and Business Continuity


An application uses an Amazon DynamoDB table with on-demand capacity. The SysOps administrator needs to ensure the table remains available during an AWS regional outage. Which strategy should be used?

A

Enable DynamoDB Accelerator (DAX).

B

Create a read replica in another region.

C

Use DynamoDB global tables.

Correct. Global tables replicate data across multiple AWS Regions in an active-active configuration, so the application can keep working against a replica in another Region during an outage.

D

Increase read and write capacity units.

Why: DynamoDB global tables provide multi-Region, active-active replication, so the application can redirect traffic to a replica table in another Region during a regional outage. On-demand capacity handles traffic fluctuations but does not provide cross-region resilience. DAX is a caching layer for read performance. Increasing read/write capacity units does not protect against regional failures. There is no native cross-region 'read replica' feature in DynamoDB; global tables are the correct approach.

A SysOps administrator is testing the failover of an Amazon RDS for PostgreSQL Multi-AZ DB instance. The application currently writes to the primary instance in us-east-1a. Which action will manually trigger a failover to the standby instance in us-east-1b?

A

Reboot the DB instance and select 'Reboot with failover'.

Correct. This explicitly triggers a failover to the standby instance.

B

Modify the DB instance to Single-AZ and then back to Multi-AZ.

C

Reboot the DB instance without selecting any failover option.

D

Promote the standby instance using the Amazon RDS console.

Why: Rebooting the DB instance with the 'Reboot with failover' option selected forces a failover to the standby. Modifying the instance to Single-AZ and back causes downtime and removes the standby. Rebooting without the failover option restarts the primary in place without switching. The standby in a Multi-AZ deployment cannot be manually promoted; promotion applies to read replicas, which is a different feature.
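The same action via the API is a single call; as a sketch (the DB identifier is a placeholder), these are the kwargs for `rds.reboot_db_instance`:

```python
def reboot_with_failover_params(db_id: str) -> dict:
    """Kwargs for rds.reboot_db_instance forcing a Multi-AZ failover."""
    return {
        "DBInstanceIdentifier": db_id,
        "ForceFailover": True,  # promote the standby instead of rebooting in place
    }

params = reboot_with_failover_params("prod-postgres")
```

The equivalent CLI form is `aws rds reboot-db-instance --db-instance-identifier prod-postgres --force-failover`.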

A company runs a web application on Amazon EC2 instances in a single Availability Zone. The SysOps administrator wants to increase the availability of the application so that it can survive an Availability Zone failure. Which action is the most effective?

A

Deploy an additional EC2 instance in the same Availability Zone.

B

Launch EC2 instances in two different Availability Zones and place them behind an Application Load Balancer.

Correct. Spreading instances across AZs with a load balancer ensures continued availability if one AZ becomes unavailable.

C

Enable termination protection on all EC2 instances.

D

Use an Amazon RDS Multi-AZ deployment for the database tier.

Why: To survive an AZ failure, the application must be deployed across multiple Availability Zones. Placing instances in multiple AZs behind an Elastic Load Balancer ensures that if one AZ fails, traffic is routed to healthy instances in the other AZs. Adding more instances in the same AZ does not help if that AZ fails, termination protection does not address AZ outages, and an RDS Multi-AZ deployment protects only the database tier, not the web application. A multi-Region deployment provides more resiliency but is more complex and costly; for AZ failure, multiple AZs suffice.

A company runs a stateful web application on a single Amazon EC2 instance with an Elastic IP address. The SysOps administrator needs to increase availability so that if the instance fails, a new instance can be launched quickly with the same configuration and the same IP address. The administrator also needs to ensure data is not lost. Which solution meets these requirements with the least operational overhead?

A

Use an Application Load Balancer with an Auto Scaling group and a launch configuration that includes the Elastic IP

B

Create an AMI from the instance, store data on an Amazon EFS file system, and use an Auto Scaling group with a lifecycle hook to associate the Elastic IP

Correct. The AMI provides a pre-configured launch template. EFS provides durable, shared storage for application data. The Auto Scaling group automatically launches a new instance if the current one fails, and the lifecycle hook script associates the Elastic IP to the new instance, ensuring continuity with the same IP.

C

Create a CloudFormation template that launches a new instance and associates the Elastic IP

D

Place the instance in an Auto Scaling group with a minimum of 1 and a maximum of 1, and set the health check to replace unhealthy instances

Why: Using an AMI captures the instance configuration. Amazon EFS provides a shared, persistent file system that survives instance termination. An Auto Scaling group with a lifecycle hook can associate the Elastic IP to the new instance during launch. This combination ensures automated recovery with the same IP and no data loss.

A company runs a critical production database on Amazon RDS for MySQL with a Multi-AZ deployment. The database experiences a primary instance failure. The SysOps administrator needs to understand exactly how the failover process worked and why the application experienced a longer-than-expected downtime. Which AWS service or feature should the administrator use to review detailed events and actions during the failover?

A

AWS Personal Health Dashboard

Correct. The Personal Health Dashboard shows relevant events and notifications specific to the customer's RDS Multi-AZ failover, including timing and causes.

B

Amazon RDS Performance Insights

C

Amazon CloudWatch Logs

D

AWS CloudTrail

Why: The AWS Health Dashboard (formerly the Personal Health Dashboard) provides detailed events and notifications for AWS service issues affecting the customer's resources. For an RDS Multi-AZ failover, it shows specific events such as the failover start and completion and, where available, the underlying cause. RDS event subscriptions record database events and CloudTrail records API calls, but the Health Dashboard gives a unified view of service health events, including automatic failovers. RDS Performance Insights focuses on database performance, not failover history. CloudWatch Logs captures application logs, not infrastructure failover events, unless custom logging is set up.

A company runs a stateless web application on Amazon EC2 instances in an Auto Scaling group with a minimum of 2 and maximum of 10 instances. The instances are behind an Application Load Balancer (ALB). The SysOps administrator needs to ensure that the application can survive the failure of an entire AWS Availability Zone (AZ) in the region. Which configuration is necessary?

A

Configure the Auto Scaling group with subnets in at least two Availability Zones and ensure the ALB has subnets in the same AZs.

Correct. This distributes instances across multiple AZs, so if one AZ fails, the other AZ continues serving traffic.

B

Increase the Auto Scaling group minimum to 10 instances to absorb the failure.

C

Use larger instance types to handle the load of a failed AZ.

D

Use multiple Application Load Balancers in different AZs.

Why: To survive an AZ failure, the application must be deployed across at least two AZs. The Auto Scaling group needs subnets in multiple AZs so that instances can launch in different AZs, and the ALB must also have subnets in those AZs to distribute traffic. A minimum group size of at least 2 maintains capacity during a failure. Raising the minimum to 10 adds cost without guaranteeing AZ spread, larger instance types do not help if every instance is in one AZ, and multiple ALBs are unnecessary because a single ALB already spans multiple AZs.
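A rough sketch of the Auto Scaling group request this implies (the group name, subnet IDs, and target group ARN are placeholders):

```python
def asg_params(name: str, tg_arn: str, subnet_a: str, subnet_b: str) -> dict:
    """Kwargs for autoscaling.create_auto_scaling_group spanning two AZs."""
    return {
        "AutoScalingGroupName": name,
        "MinSize": 2,                    # at least one instance survives an AZ loss
        "MaxSize": 10,
        "VPCZoneIdentifier": f"{subnet_a},{subnet_b}",  # one subnet per AZ
        "HealthCheckType": "ELB",        # replace instances the ALB marks unhealthy
        "HealthCheckGracePeriod": 300,
        "TargetGroupARNs": [tg_arn],     # ALB target group spanning the same AZs
    }

params = asg_params("web-asg", "tg-arn-placeholder", "subnet-aaa", "subnet-bbb")
```

`VPCZoneIdentifier` is a comma-separated string of subnet IDs; because each subnet lives in exactly one AZ, listing subnets from two AZs is what distributes the group.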


Domain 3: Deployment, Provisioning, and Automation


A team uses AWS CodeDeploy with a deployment configuration of CodeDeployDefault.OneAtATime to deploy a web application to an Auto Scaling group. Instances are behind an Application Load Balancer. The deployment fails with 'The overall deployment failed because too many individual instances failed deployment.' What is the most likely cause?

A

The health check grace period on the Auto Scaling group is too short.

Correct. A short grace period causes instances to be considered unhealthy before the deployment finishes, triggering Auto Scaling to replace them and causing repeated failures.

B

The target group deregistration delay is too long.

C

The CodeDeploy agent is not installed on the instances.

D

The deployment group is configured to skip the ELB health check.

Why: In a OneAtATime deployment, only one instance is updated at a time. If the health check grace period on the Auto Scaling group is too short, an instance may be marked unhealthy before the deployment completes, causing the Auto Scaling group to terminate and replace it mid-deployment. This can cascade if each replacement instance also fails. The deregistration delay controls how long the ELB drains connections from a deregistering instance; it does not directly fail deployments. If the CodeDeploy agent were missing, the very first instance would fail immediately on every deployment. If the deployment group skipped the ELB health check, this failure mode would not occur, so that cannot be the cause.

A development team uses AWS CloudFormation to deploy infrastructure. They want to update a stack but first need to review how the changes will impact existing resources before applying them. Which CloudFormation feature should they use?

A

Change sets

Correct. Change sets provide a preview of the changes that will be made to the stack, enabling review before execution.

B

Stack policies

C

Condition functions

D

Custom resources

Why: Change sets allow you to preview proposed changes to a CloudFormation stack, including a summary of resources that will be added, modified, or deleted. This helps assess the impact before execution. Stack policies are used to protect resources during stack updates, not to preview changes. Condition functions control resource creation based on parameters. Custom resources handle non-AWS resources.

A company uses AWS CodeDeploy to deploy a new version of an application to EC2 instances in an Auto Scaling group behind an Application Load Balancer. The company requires zero downtime during the deployment. Which deployment configuration should be used?

A

CodeDeployDefault.AllAtOnce

B

CodeDeployDefault.OneAtATime

C

CodeDeployDefault.EC2/OnPremises: BlueGreenDeployment

D

Create a blue/green deployment by configuring CodeDeploy to launch new instances and shift traffic after validation.

Correct. Blue/green deployment with CodeDeploy allows routing traffic to new instances and cutting over after validation, ensuring zero downtime.

Why: A blue/green deployment creates a new set of instances (green), registers them with the ALB, and deregisters the old instances (blue) only after the new ones pass health checks, ensuring zero downtime. AllAtOnce takes every instance out of service simultaneously. OneAtATime limits the blast radius but still briefly reduces capacity as each instance is updated, so some disruption is possible. Blue/green is the deployment style designed for zero downtime.

A company uses AWS CloudFormation to deploy a three-tier web application. The SysOps administrator wants to update a critical parameter, such as the instance type, and ensure that the change is applied without recreating the EC2 instance, if possible. Which CloudFormation stack update feature should be used to achieve this?

A

Change sets

Correct. Change sets allow you to preview how changes will affect your resources, including whether an update will cause replacement or in-place modification, giving you control to avoid unnecessary recreation.

B

Stack policy

C

Update with drift detection

D

Directly edit the stack template and use the update stack action

Why: Change sets allow you to preview the effects of changes before applying them, including whether resources will be replaced or updated. This helps ensure that critical parameters like instance type are updated without unintended recreation. The other options do not provide the ability to preview the impact of changes before they are applied.

A company uses AWS CodeDeploy to deploy an application to an Auto Scaling group. The deployment strategy is set to CodeDeployDefault.HalfAtATime. The lifecycle hooks for the Auto Scaling group include a test hook that runs during instance launch. During a recent deployment, the deployment failed because the new instances failed the test hook and were not marked as healthy. The SysOps administrator needs to ensure that failed instances are automatically terminated and replaced with new ones from the Auto Scaling group. Which configuration change should the administrator make?

A

Modify the Auto Scaling group's health check type to ELB

Correct. When the health check type is set to ELB, the Auto Scaling group uses the Application Load Balancer's health checks. If the test hook fails, the instance will be marked unhealthy by the ALB, and the Auto Scaling group will terminate and replace it, ensuring only healthy instances remain.

B

Modify the CodeDeploy deployment configuration to use an increased minimum healthy instance count

C

Modify the Auto Scaling group's health check grace period to a lower value

D

Modify the CodeDeploy deployment to ignore the lifecycle hook failure

Why: Setting the Auto Scaling group's health check type to ELB ensures that the group uses the load balancer's health checks to determine instance health. If the test hook fails, the ALB health check will fail, and the Auto Scaling group will terminate the unhealthy instance and launch a new one automatically. This is the simplest way to ensure replacement of instances that fail custom health checks.

A company uses AWS CloudFormation to deploy a web application. The template currently hard-codes the EC2 instance type (e.g., t3.medium). The SysOps administrator wants to make the instance type configurable so that different environments (dev, test, prod) can use different instance types without modifying the template each time. Which CloudFormation feature enables this?

A

Parameters

Correct. Parameters allow users to input values when creating or updating a stack, making the template reusable for different environments.

B

Mappings

C

Conditions

D

Outputs

Why: CloudFormation parameters allow you to pass custom values into a template at stack creation or update time. By defining a parameter for the instance type, the administrator can specify different values for each environment without editing the template. Mappings are static lookups keyed on values such as region, not runtime input. Conditions control whether resources are created based on evaluated expressions. Outputs return values from the stack.
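A minimal sketch of such a template, built as a Python dict for readability (the logical names, allowed values, and AMI ID are illustrative placeholders):

```python
import json

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Parameters": {
        "InstanceType": {
            "Type": "String",
            "Default": "t3.medium",
            "AllowedValues": ["t3.medium", "m5.large", "m5.xlarge"],
            "Description": "Per-environment instance size",
        }
    },
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                # Ref resolves to whatever value the caller supplies at
                # stack creation or update time.
                "InstanceType": {"Ref": "InstanceType"},
                "ImageId": "ami-EXAMPLE",  # placeholder
            },
        }
    },
}
body = json.dumps(template)
```

Each environment then passes its own value, e.g. `aws cloudformation create-stack --parameters ParameterKey=InstanceType,ParameterValue=m5.large ...`, with no template edits.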


Domain 4: Security and Compliance


An organization requires that all Amazon S3 buckets be encrypted at rest by default. A SysOps administrator needs to enforce this using AWS Config. Which AWS Config managed rule should be used?

A

s3-bucket-encryption-enabled

Correct. This rule evaluates whether default encryption is configured on the bucket, meeting the requirement for encryption at rest.

B

s3-bucket-ssl-requests-only

C

s3-bucket-public-read-prohibited

D

s3-bucket-logging-enabled

Why: AWS Config provides managed rules to evaluate resource compliance. The rule 's3-bucket-encryption-enabled' checks whether S3 buckets have default encryption enabled (SSE-S3 or SSE-KMS; SSE-C cannot be set as a bucket default). Other rules address different concerns: 's3-bucket-ssl-requests-only' enforces encrypted connections, 's3-bucket-public-read-prohibited' prevents public read access, and 's3-bucket-logging-enabled' checks server access logging.
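Deploying the managed rule is a single `config.put_config_rule` call; a sketch of the kwargs (the rule name is a free choice, the SourceIdentifier is the managed rule's identifier):

```python
def encryption_rule_params() -> dict:
    """Kwargs for config.put_config_rule enabling the managed encryption check."""
    return {
        "ConfigRule": {
            "ConfigRuleName": "s3-bucket-encryption-enabled",
            "Source": {
                "Owner": "AWS",  # AWS-managed rule, no custom Lambda needed
                "SourceIdentifier": "S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED",
            },
            "Scope": {"ComplianceResourceTypes": ["AWS::S3::Bucket"]},
        }
    }

rule = encryption_rule_params()
```

Because the rule is AWS-managed (`Owner: "AWS"`), Config evaluates every bucket in the account automatically as buckets are created or changed.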

A SysOps administrator needs to ensure that all traffic to an Application Load Balancer (ALB) uses encryption. How can this be enforced?

A

Configure the security group to allow only HTTPS traffic (port 443).

B

Create a listener that redirects HTTP requests (port 80) to HTTPS (port 443).

Correct. An ALB listener rule can redirect HTTP to HTTPS, ensuring clients use encrypted connections.

C

Use AWS WAF to block HTTP requests.

D

Configure the ALB to use a custom SSL certificate.

Why: The most effective way to enforce encrypted connections is to redirect HTTP traffic to HTTPS at the ALB listener level. This ensures clients that connect via HTTP are automatically redirected to HTTPS. While security groups can block port 80, this does not redirect traffic and may cause client errors. AWS WAF can block HTTP requests, but a redirect is simpler and provides a better user experience. Using a custom SSL certificate enables HTTPS but does not prevent access via HTTP if the listener is also open for HTTP.
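A sketch of the listener definition that implements the redirect, as kwargs for `elbv2.create_listener` (the load balancer ARN is a placeholder):

```python
def http_redirect_listener_params(alb_arn: str) -> dict:
    """Kwargs for elbv2.create_listener: redirect all HTTP to HTTPS."""
    return {
        "LoadBalancerArn": alb_arn,
        "Protocol": "HTTP",
        "Port": 80,
        "DefaultActions": [{
            "Type": "redirect",
            "RedirectConfig": {
                "Protocol": "HTTPS",
                "Port": "443",            # port is a string in the elbv2 API
                "StatusCode": "HTTP_301", # permanent redirect
            },
        }],
    }

listener = http_redirect_listener_params("alb-arn-placeholder")
```

A separate HTTPS listener on port 443 with an ACM certificate then serves the actual traffic; this listener only bounces plaintext clients over to it.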

An organization requires that all Amazon S3 buckets block public access entirely. A SysOps administrator needs to ensure that no bucket can be made public, even accidentally. Which approach enforces this control at the organizational level?

A

Apply an S3 Bucket Policy on each bucket that denies public access.

B

Use an AWS Config managed rule 's3-bucket-public-read-prohibited' to detect and remediate public buckets.

C

Enable S3 Block Public Access at the account level and attach an SCP to deny changes to it.

Correct. Account-level block public access prevents all public access, and an SCP prevents users from disabling it.

D

Create an IAM policy that denies s3:PutBucketPolicy for all users.

Why: Enabling S3 Block Public Access at the account level prevents public access across all buckets in the account, and a service control policy (SCP) at the AWS Organizations level that denies changes to the setting prevents anyone from overriding it. AWS Config rules only detect noncompliance; they do not prevent it. Bucket policies can be changed by bucket owners, and denying s3:PutBucketPolicy alone does not cover ACL-based public access. Combining account-level Block Public Access with an SCP enforces the control across all accounts in the organization.
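A minimal sketch of the SCP half of the solution; the statement Sid is illustrative, and the single action shown assumes the goal is to lock the account-level setting (a production SCP might deny the bucket-level actions too):

```python
import json

# SCP denying any change to the account-level Block Public Access settings.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyPublicAccessBlockChanges",   # illustrative Sid
        "Effect": "Deny",
        "Action": "s3:PutAccountPublicAccessBlock",
        "Resource": "*",
    }],
}
policy_text = json.dumps(scp)
```

Attached to the organization root or an OU, this denies even account administrators the ability to relax the setting, which is what makes the control "cannot be made public, even accidentally".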

A company's security team requires that all Amazon EC2 instances in a specific AWS account must have the tag 'Environment' set to either 'Production' or 'Test'. Any instance that is launched without this tag or with an invalid value must be automatically terminated within five minutes. Which combination of AWS services can enforce this requirement with minimal manual intervention?

A

AWS Config with a custom rule and AWS Lambda

Correct. A custom AWS Config rule can evaluate EC2 instances when they are created (configuration change trigger) and invoke an AWS Lambda function to terminate instances lacking the required tag or having an invalid value. This provides continuous compliance enforcement.

B

AWS CloudTrail and Amazon CloudWatch Events

C

AWS Service Catalog and AWS Organizations

D

Amazon Inspector and AWS Systems Manager

Why: AWS Config can be used to continuously monitor resources for compliance. A custom AWS Config rule with a configuration-change trigger can invoke an AWS Lambda function that evaluates EC2 instances on creation and terminates noncompliant ones. This automated approach ensures enforcement without manual intervention. CloudTrail with CloudWatch Events (now Amazon EventBridge) could also react to instance launches, but AWS Config provides ongoing compliance evaluation and is designed for exactly this purpose.

A company has an AWS account that contains multiple Amazon S3 buckets with sensitive data. A SysOps administrator needs to ensure that all S3 buckets in the account have versioning enabled to protect against accidental deletions. The administrator wants to automatically remediate any bucket that is created without versioning enabled. Which solution should be used?

A

Use AWS Config with a managed rule (s3-bucket-versioning-enabled) and an automatic remediation action that uses an AWS Systems Manager Automation document to enable versioning

Correct. AWS Config evaluates resources against the rule. When a noncompliant bucket is detected (whether newly created or changed), the automatic remediation using Systems Manager Automation enables versioning on the bucket, ensuring continuous compliance.

B

Use Amazon CloudWatch Events to detect CreateBucket API calls and trigger an AWS Lambda function to enable versioning

C

Use AWS CloudTrail to monitor CreateBucket events and send an alert to the SysOps administrator for manual action

D

Use AWS Service Catalog to enforce versioning on all buckets provisioned through it

Why: AWS Config with the managed rule 's3-bucket-versioning-enabled' continuously evaluates whether S3 buckets have versioning enabled. When a bucket is found noncompliant, an automatic remediation action using an AWS Systems Manager Automation document can enable versioning. This approach provides ongoing compliance monitoring and immediate remediation for new and existing buckets.
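A sketch of the remediation wiring, as kwargs for `config.put_remediation_configurations`. The Automation document name `AWS-ConfigureS3BucketVersioning` is the AWS-managed document this pattern typically uses (verify its availability and parameter names in your region); the role ARN is a placeholder:

```python
def versioning_remediation_params(role_arn: str) -> dict:
    """Kwargs pairing the versioning rule with automatic SSM remediation."""
    return {
        "RemediationConfigurations": [{
            "ConfigRuleName": "s3-bucket-versioning-enabled",
            "TargetType": "SSM_DOCUMENT",
            "TargetId": "AWS-ConfigureS3BucketVersioning",  # assumed managed doc
            "Automatic": True,  # remediate without a human in the loop
            "Parameters": {
                "AutomationAssumeRole": {"StaticValue": {"Values": [role_arn]}},
                # RESOURCE_ID is substituted with the noncompliant bucket name.
                "BucketName": {"ResourceValue": {"Value": "RESOURCE_ID"}},
                "VersioningState": {"StaticValue": {"Values": ["Enabled"]}},
            },
            "MaximumAutomaticAttempts": 3,
            "RetryAttemptSeconds": 60,
        }]
    }

remediation = versioning_remediation_params("role-arn-placeholder")
```

`Automatic: True` plus the `ResourceValue` mapping is what makes the fix hands-off: Config passes each noncompliant bucket's name straight into the Automation run.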

An organization requires that all Amazon EC2 instances must be launched only with approved Amazon Machine Images (AMIs) that have been pre-approved by the security team. The SysOps administrator needs to enforce this policy for all current and future instances in the AWS account. Unapproved AMIs should be prevented from launching. Which solution meets these requirements with the least operational overhead?

A

Use AWS Config with the 'approved-amis-by-id' managed rule to evaluate and automatically remediate noncompliant instances.

Correct. AWS Config can continuously monitor and automatically remediate instances launched with unapproved AMIs, requiring minimal manual effort.

B

Use an AWS Service Control Policy (SCP) to deny ec2:RunInstances if the AMI ID is not in an approved list.

C

Create an IAM policy that denies ec2:RunInstances for any AMI not on an approved list and attach it to all IAM users and roles.

D

Use AWS Systems Manager Patch Manager to approve AMIs and configure the fleet to use only approved images.

Why: AWS Config provides a managed rule named 'approved-amis-by-id' that evaluates whether EC2 instances were launched from a specified list of approved AMI IDs and can automatically remediate noncompliant instances (for example, terminate them) using AWS Systems Manager Automation or Lambda. An SCP using the ec2:ImageId condition key on ec2:RunInstances could block launches outright, but it requires AWS Organizations and ongoing maintenance of the AMI list inside the policy. An IAM policy with the same condition key would have to be attached to every user and role, which is more overhead. AWS Systems Manager Patch Manager handles OS patching, not AMI approval. Given the emphasis on least operational overhead, the Config managed rule is the best fit.
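A sketch of enabling the managed rule via `config.put_config_rule` (the AMI IDs are placeholders; the rule's `amiIds` input takes a comma-separated list):

```python
import json

def approved_ami_rule_params(ami_ids: list) -> dict:
    """Kwargs for config.put_config_rule with the approved-AMIs managed rule."""
    return {
        "ConfigRule": {
            "ConfigRuleName": "approved-amis-by-id",
            "Source": {
                "Owner": "AWS",
                "SourceIdentifier": "APPROVED_AMIS_BY_ID",
            },
            # Managed-rule inputs are passed as a JSON-encoded string.
            "InputParameters": json.dumps({"amiIds": ",".join(ami_ids)}),
            "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Instance"]},
        }
    }

rule = approved_ami_rule_params(["ami-0abc123", "ami-0def456"])
```

Instances launched from any other AMI are flagged NON_COMPLIANT; a remediation action can then act on them.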


Domain 5: Networking and Content Delivery


A company wants to establish a dedicated, low-latency, private connection between its on-premises data center and an AWS VPC. The company does not want to use the public internet. Which AWS service should be used to meet this requirement?

A

AWS Direct Connect

Correct. AWS Direct Connect provides a dedicated private connection between on-premises and AWS, avoiding the public internet.

B

AWS Virtual Private Gateway

C

AWS Transit Gateway

D

VPC Peering

Why: AWS Direct Connect is the service that provides a dedicated private connection from an on-premises data center to AWS, bypassing the public internet for lower latency and increased security. A virtual private gateway is used for VPN connections over the internet. AWS Transit Gateway and VPC peering are used for connecting VPCs, not for on-premises connections.

A company has two VPCs in different AWS regions (us-east-1 and eu-west-1) that are peered. Applications in both VPCs need to communicate using private IP addresses. The ping tests are successful, but the latency is significantly higher than expected. Which change is most likely to improve the latency between the VPCs?

A

Enable DNS resolution for the VPC peering connection.

Correct. Without DNS resolution enabled on the peering connection, instances that resolve the peered VPC's public DNS hostnames receive public IP addresses, so traffic can leave the peering connection. Enabling DNS resolution makes those hostnames resolve to private IPs, keeping traffic on the AWS backbone and avoiding unnecessary hops or public internet routing.

B

Use a Transit Gateway instead of VPC Peering for cross-region connectivity.

C

Increase the MTU on the instances' network interfaces to 9001.

D

Configure ECMP (Equal-Cost Multi-Path) routing on the VPC peering connection.

Why: VPC peering connections do not resolve public DNS hostnames to private IP addresses by default. By enabling DNS resolution on the requester side and the corresponding option on the accepter side, instances resolve the peered VPC's hostnames to private IPs, so traffic uses the peering connection over the AWS backbone rather than routing via public IP addresses. Increasing the MTU can improve throughput but does not address latency caused by routing; note also that cross-region peering does not support jumbo frames. A Transit Gateway does not inherently reduce cross-region latency. ECMP (equal-cost multi-path) routing is not applicable to VPC peering connections.
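Both sides of the peering must opt in to DNS resolution for their own VPC. A minimal sketch of the parameters for the EC2 ModifyVpcPeeringConnectionOptions API (the peering connection ID below is hypothetical):

```python
# Illustrative request parameters for enabling DNS resolution over a VPC
# peering connection. The requester account sets its own side's options;
# the accepter account sets the other side's. The ID is a placeholder.
PEERING_ID = "pcx-0123456789abcdef0"  # hypothetical

requester_side = {
    "VpcPeeringConnectionId": PEERING_ID,
    "RequesterPeeringConnectionOptions": {
        "AllowDnsResolutionFromRemoteVpc": True,
    },
}

accepter_side = {
    "VpcPeeringConnectionId": PEERING_ID,
    "AccepterPeeringConnectionOptions": {
        "AllowDnsResolutionFromRemoteVpc": True,
    },
}

print(requester_side)
print(accepter_side)
```

Each dict would be passed to a separate `modify-vpc-peering-connection-options` call, one per account.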

A company has deployed a web application on Amazon EC2 instances behind an Application Load Balancer (ALB). The application's IP addresses are used by a third-party service to allowlist traffic. The EC2 instances are part of an Auto Scaling group that may scale up and down. The SysOps administrator needs to ensure that the third-party service always has the current IP addresses of the ALB without requiring manual updates. Which solution should the administrator implement?

A

Use AWS Global Accelerator and provide the static IP addresses to the third party

Global Accelerator provides two static IP addresses that serve as a fixed entry point. You can add the ALB as an endpoint; traffic is routed to the ALB (and on to its healthy instances) while the static IPs remain unchanged.

B

Use Amazon Route 53 with a simple routing policy pointing to the ALB DNS name

C

Use an Amazon CloudFront distribution with the ALB as the origin and provide the CloudFront IP addresses

D

Use an AWS Network Load Balancer (NLB) with static IP addresses in front of the ALB

Why: AWS Global Accelerator provides two static anycast IP addresses that can be assigned to an Application Load Balancer as an endpoint. These static IP addresses remain constant, and traffic is routed to the ALB through the AWS global network. This eliminates the need for a third party to update allowlists based on changing ALB IP addresses.
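An ALB is attached to a Global Accelerator listener as an endpoint, identified by its ARN. A minimal sketch of CreateEndpointGroup parameters (all ARNs here are hypothetical placeholders):

```python
# Illustrative CreateEndpointGroup request attaching an ALB to a Global
# Accelerator listener. The accelerator's two static IPs stay fixed no
# matter how the ALB's own IP addresses change behind the scenes.
endpoint_group = {
    "ListenerArn": (
        "arn:aws:globalaccelerator::123456789012:"
        "accelerator/abcd1234/listener/efgh5678"  # hypothetical
    ),
    "EndpointGroupRegion": "us-east-1",
    "EndpointConfigurations": [
        {
            "EndpointId": (
                "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                "loadbalancer/app/my-alb/1234567890abcdef"  # hypothetical
            ),
            "Weight": 128,  # relative traffic share within the group
        }
    ],
}

print(endpoint_group["EndpointGroupRegion"])
```

The third party allowlists only the accelerator's two static IPs; Auto Scaling events behind the ALB never require an allowlist update.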

A company has an on-premises data center connected to an AWS VPC via an AWS Direct Connect connection. The company's SysOps administrator wants to ensure that traffic from the VPC destined for the on-premises network uses the Direct Connect connection instead of the internet. Which configuration should be used?

A

Add a route in the VPC route table pointing to the on-premises network via a virtual private gateway (VGW)

The VGW is attached to the VPC and is the entry/exit point for Direct Connect. By adding a route with the on-premises destination and the VGW as the target, traffic is forced through the Direct Connect connection.

B

Add a route in the VPC route table pointing to the on-premises network via a NAT gateway

C

Add a route in the VPC route table pointing to the on-premises network via an internet gateway

D

Add a route in the VPC route table pointing to the on-premises network via a VPC peering connection

Why: A virtual private gateway (VGW) is the AWS side of a VPN or Direct Connect connection. To route traffic from the VPC to the on-premises network via Direct Connect, you add a route in the VPC route table that points the on-premises CIDR block to the VGW. This ensures traffic uses the Direct Connect link.
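The route itself is a single entry in the VPC route table. A minimal sketch of CreateRoute parameters (the route table ID, gateway ID, and on-premises CIDR are hypothetical):

```python
# Illustrative CreateRoute request sending on-premises-bound traffic to
# the virtual private gateway, which is the VPC-side termination point
# for the Direct Connect connection. All identifiers are placeholders.
route = {
    "RouteTableId": "rtb-0123456789abcdef0",   # hypothetical
    "DestinationCidrBlock": "10.20.0.0/16",    # on-premises CIDR (example)
    "GatewayId": "vgw-0123456789abcdef0",      # virtual private gateway
}

print(route)
```

With route propagation enabled on the VGW, this route can instead be learned automatically via BGP rather than added statically.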

A company has two VPCs in the same AWS region. VPC A hosts a web application, and VPC B hosts a database. The SysOps administrator needs to enable private IP communication between the two VPCs without using the public internet. The administrator wants a simple, low-cost solution that uses the AWS network backbone. Which AWS service should be used?

A

VPC Peering

VPC Peering directly connects two VPCs using private IPs over the AWS network, simple and cost-effective for a pair of VPCs.

B

AWS Transit Gateway

C

AWS Direct Connect

D

AWS Site-to-Site VPN

Why: VPC Peering connects two VPCs privately using the AWS network. It is simple, low-cost (no additional charge, only data transfer costs), and allows routing between private IPs. AWS Transit Gateway is more complex and designed for many VPCs. Direct Connect is for on-premises connectivity. VPN is also for hybrid connectivity and incurs hourly costs.

A company hosts a web application behind an Application Load Balancer (ALB) in us-east-1. Users in Europe report high latency. The SysOps administrator decides to use AWS Global Accelerator to improve performance by directing traffic to the closest edge location. However, the application logs require the original client IP addresses of users. The ALB currently provides the client IP via the X-Forwarded-For header, but the development team warns that Global Accelerator may change the source IP. Which configuration should the administrator choose to meet both performance and logging requirements?

A

Configure Global Accelerator with an endpoint group that points directly to the ALB. The ALB will continue to receive the original client IP in the X-Forwarded-For header.

B

Place a Network Load Balancer (NLB) in front of the ALB, and configure Global Accelerator to point to the NLB. The NLB preserves the client IP, and the ALB can still see it in the X-Forwarded-For header.

Global Accelerator preserves the client source IP when the endpoint is an NLB. The NLB passes traffic to the ALB, which can see the original client IP in the X-Forwarded-For header. This satisfies both performance (using Global Accelerator) and logging requirements.

C

Enable Proxy Protocol v2 on the ALB to ensure client IP addresses are preserved through Global Accelerator.

D

Use Amazon CloudFront instead of Global Accelerator and configure it to forward the client IP in a custom header.

Why: AWS Global Accelerator preserves the original client IP address when the endpoint is a Network Load Balancer (NLB) or an EC2 instance, but in this scenario not when the endpoint is the ALB directly. By adding an NLB in front of the ALB, the client IP is preserved through the NLB and forwarded to the ALB, which then passes it in the X-Forwarded-For header. Option A is incorrect because Global Accelerator will not preserve the client IP when the endpoint is the ALB itself. Option C is incorrect because Proxy Protocol v2 is an NLB feature, not an ALB feature, so enabling it on the ALB does not help. Option D is incorrect because CloudFront is a CDN, does not necessarily preserve the client IP in the same manner, and adds another layer of complexity.

Want more Networking and Content Delivery practice?

Practice this domain
6

Domain 6: Cost and Performance Optimization

All Cost and Performance Optimization questions

A SysOps administrator manages an Amazon RDS for MySQL instance that experiences high CPU utilization during business hours. The application is read-heavy. Which action will most effectively improve performance and reduce cost?

A

Enable Multi-AZ deployment.

B

Scale up the instance size to a larger instance class.

C

Add a read replica.

Correct. Read replicas handle read traffic, reducing load on the primary instance and improving performance cost-effectively.

D

Enable automated backups.

Why: For read-heavy workloads, adding read replicas offloads read queries from the primary instance, reducing CPU utilization and improving performance. Read replicas are relatively low cost compared to scaling up the instance. Multi-AZ improves availability, not performance. Automated backups do not affect CPU usage. Scaling up increases cost and may only temporarily help.
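Creating the replica is a single API call against the primary. A minimal sketch of CreateDBInstanceReadReplica parameters (the instance identifiers and class are hypothetical):

```python
# Illustrative CreateDBInstanceReadReplica request. The replica inherits
# the engine and data from the source instance; it can use a smaller
# instance class than the primary because it only serves reads.
# All identifiers below are placeholders.
read_replica = {
    "DBInstanceIdentifier": "myapp-mysql-replica-1",      # hypothetical
    "SourceDBInstanceIdentifier": "myapp-mysql-primary",  # hypothetical
    "DBInstanceClass": "db.r6g.large",
    "AvailabilityZone": "us-east-1b",
}

print(read_replica)
```

The application then routes read-only queries to the replica's endpoint while writes continue to go to the primary.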

A SysOps administrator manages a web application running on Amazon EC2 instances that run 24/7 for the next 12 months. The workload is steady and predictable. Which EC2 purchasing option provides the highest cost savings for this use case?

A

Standard Reserved Instances

Correct. Standard Reserved Instances offer the highest discount for a steady, predictable 24/7 workload with a 1-year or 3-year term.

B

Spot Instances

C

On-Demand Instances

D

Savings Plans (Compute)

Why: Standard Reserved Instances (RIs) provide a significant discount over On-Demand pricing for a steady-state workload committed to a 1-year or 3-year term. Spot Instances are suitable for fault-tolerant or interruptible workloads, not for 24/7 steady state. On-Demand is flexible but more expensive. Savings Plans offer flexibility across EC2, Fargate, and Lambda but Standard RIs typically provide the highest discount for a specific EC2 instance family commitment.

A company runs a large number of EC2 instances across multiple accounts and regions. The finance team needs to track costs per project and department. Each EC2 instance must be tagged with a ProjectID and Department tag. A SysOps administrator needs to ensure that all newly launched EC2 instances are tagged automatically before they can be used, and that existing untagged instances are retroactively tagged. The tags must be propagated to cost reports in AWS Cost Explorer. Which combination of steps will achieve this with the least operational overhead?

A

Use AWS Config with auto-remediation to tag new instances, and activate the tags as cost allocation tags. For existing instances, run the Tag Editor with a CSV import.

Correct. AWS Config auto-remediation tags non-compliant new instances; Tag Editor bulk-tags existing instances; activation in Billing console propagates tags to cost reports.

B

Create an AWS Lambda function that tags instances at launch via CloudTrail events, and use AWS Budgets to enforce tagging.

C

Use AWS Cost Categories to automatically group costs based on resource tags.

D

Ensure all AMIs used have tags that propagate to instances, and enable cost allocation tags.

Why: The AWS Config managed rule 'required-tags' can automatically remediate noncompliant resources via AWS Systems Manager Automation documents (e.g., AWSConfigRemediation-RequiredTags). For instances that are already running, a one-time bulk operation with Tag Editor (or a script) applies the missing tags. Propagating the tags to cost reports requires activating them as cost allocation tags in the Billing and Cost Management console. AWS Budgets does not enforce tagging; Cost Categories group costs but do not enforce tagging; and a CloudTrail-triggered Lambda function deployed in every region carries more operational overhead.
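The 'required-tags' managed rule takes the tag keys to enforce as input parameters. A minimal sketch of the rule payload for the two tags in this scenario (the rule name is hypothetical):

```python
import json

# Illustrative payload for the AWS Config managed rule "required-tags",
# checking that every EC2 instance carries ProjectID and Department tags.
# The rule name is a placeholder.
required_tags_rule = {
    "ConfigRuleName": "ec2-required-tags",  # hypothetical name
    "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Instance"]},
    "Source": {"Owner": "AWS", "SourceIdentifier": "REQUIRED_TAGS"},
    # The rule supports tag1Key..tag6Key (with optional tagNValue
    # parameters to also constrain allowed values).
    "InputParameters": json.dumps(
        {"tag1Key": "ProjectID", "tag2Key": "Department"}
    ),
}

print(json.dumps(required_tags_rule, indent=2))
```

Auto-remediation is then attached to the rule, and the same two tag keys are activated as cost allocation tags so they appear in Cost Explorer.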

A company runs a batch processing application on Amazon EC2 that runs for 2 hours every night. The workload can tolerate interruptions. Which EC2 purchasing option provides the lowest cost for this use case?

A

On-Demand Instances

B

Reserved Instances

C

Spot Instances

Spot Instances allow you to use spare EC2 capacity at up to 90% discount compared to On-Demand. The workload is interruption-tolerant and fits the nightly batch window well, making this the most cost-effective option.

D

Dedicated Hosts

Why: Spot Instances are designed for workloads that are fault-tolerant, flexible, and can handle interruptions. They offer significant cost savings compared to On-Demand and Reserved Instances, making them the ideal choice for a short, interruptible batch job.
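A back-of-the-envelope calculation shows why Spot wins for a short nightly window. The prices below are hypothetical; real Spot discounts vary by instance type, region, and time:

```python
# Hypothetical pricing for a 2-hour nightly batch job (60 hours/month).
ON_DEMAND_HOURLY = 0.10   # $/hour, placeholder figure
SPOT_DISCOUNT = 0.70      # Spot often runs 60-90% below On-Demand
HOURS_PER_NIGHT = 2
NIGHTS_PER_MONTH = 30

on_demand_monthly = ON_DEMAND_HOURLY * HOURS_PER_NIGHT * NIGHTS_PER_MONTH
spot_monthly = on_demand_monthly * (1 - SPOT_DISCOUNT)

# A Reserved Instance bills for the full month (720 hours) even though
# the job only needs 60, so it would cost far more here despite its
# per-hour discount.
print(f"On-Demand: ${on_demand_monthly:.2f}/month")  # $6.00
print(f"Spot:      ${spot_monthly:.2f}/month")       # $1.80
```

The pattern generalizes: Spot is cheapest whenever the workload tolerates interruption and runs only a fraction of the month.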

A company runs a web application on Amazon EC2 instances behind an Application Load Balancer (ALB). The application reads data from an Amazon RDS for MySQL database. During peak hours, the database CPU utilization is consistently high, and the application experiences increased latency. The SysOps administrator observes that 90% of database queries are read-only. Which combination of actions will both improve performance and optimize costs?

A

Enable Multi-AZ for the RDS instance and scale up the instance size

B

Implement a read replica for the RDS instance and modify the application to route read queries to the read replica

Read replicas can handle read traffic, reducing load on the primary instance. This improves performance and, by avoiding unnecessary scale-up, can be more cost-effective. Only the primary instance needs to be sized for writes.

C

Enable Amazon RDS Performance Insights and increase the storage allocation

D

Implement Amazon ElastiCache for Memcached in front of the database and migrate read-heavy queries to cache

Why: Implementing an RDS read replica offloads read queries from the primary database instance. By modifying the application to direct read queries to the read replica, the primary instance's CPU utilization decreases, reducing latency. This can also delay or eliminate the need to scale up the instance size, thus optimizing costs as read replicas are charged separately but can be of a smaller instance type.

A company runs a web application on Amazon EC2 instances that are part of an Auto Scaling group. The application's traffic is predictable with regular peaks during business hours and low traffic at night. The SysOps administrator wants to optimize costs while ensuring that performance meets demand. The administrator also needs to minimize manual intervention. Which scaling policy should be used?

A

Scheduled scaling

Scheduled scaling adjusts capacity at predefined times, matching predictable patterns with no manual effort after setup.

B

Target tracking scaling

C

Simple scaling

D

Manual scaling

Why: A scheduled scaling policy is ideal for predictable traffic patterns because it allows the administrator to plan capacity changes at specific times, such as increasing instances before business hours and decreasing after. This minimizes manual intervention and is cost-effective because it aligns capacity with demand. Target tracking is reactive and adjusts based on real-time metrics, which can cause lag. Simple scaling is also reactive. Manual scaling requires ongoing administrator action.
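Scheduled scaling is configured as a pair of recurring actions: scale out before business hours, scale in after. A minimal sketch of PutScheduledUpdateGroupAction parameters (the group name, action names, and capacities are hypothetical; recurrence uses cron syntax in UTC):

```python
# Illustrative scheduled actions for an Auto Scaling group with
# predictable daytime peaks. All names and sizes are placeholders.
scale_out = {
    "AutoScalingGroupName": "web-asg",                  # hypothetical
    "ScheduledActionName": "business-hours-scale-out",
    "Recurrence": "0 8 * * MON-FRI",  # 08:00 UTC, weekdays
    "MinSize": 4,
    "MaxSize": 10,
    "DesiredCapacity": 6,
}

scale_in = {
    "AutoScalingGroupName": "web-asg",
    "ScheduledActionName": "night-scale-in",
    "Recurrence": "0 20 * * *",       # 20:00 UTC, daily
    "MinSize": 1,
    "MaxSize": 10,
    "DesiredCapacity": 2,
}

print(scale_out["Recurrence"], "->", scale_out["DesiredCapacity"])
print(scale_in["Recurrence"], "->", scale_in["DesiredCapacity"])
```

Once both actions exist, capacity tracks the known traffic pattern every day with no further manual intervention.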

Want more Cost and Performance Optimization practice?

Practice this domain

Frequently asked questions

How many questions are on the SOA-C02 exam?

The SOA-C02 exam has up to 65 questions and must be completed in 180 minutes. The passing score is 720/1000.

What types of questions appear on the SOA-C02 exam?

The SOA-C02 exam uses multiple-choice, multiple-select, drag-and-drop, and exhibit-based questions. Exhibit questions show CLI output, network diagrams, or routing tables and ask you to interpret them — exactly the format Courseiva uses.

How are SOA-C02 questions organised by domain?

The exam covers 6 domains: Monitoring, Logging, and Remediation, Reliability and Business Continuity, Deployment, Provisioning, and Automation, Security and Compliance, Networking and Content Delivery, Cost and Performance Optimization. Questions are weighted by domain — higher-weight domains appear more on your actual exam.

Are these the actual SOA-C02 exam questions?

No. These are original exam-style practice questions written against the official Amazon Web Services SOA-C02 exam objectives. They are not copied from the real exam. Courseiva focuses on genuine understanding, not memorisation of braindumps.

Ready to practice all 65 SOA-C02 questions?

Courseiva tracks your accuracy per domain and routes you toward weak areas automatically. Free, no account required.