Amazon Web Services · Free Practice Questions · Last reviewed May 2026
24 real exam-style questions organised by domain. In each question, the correct answer is the option followed by a plain-English explanation of why it's right — and why the others are wrong.
A developer has an AWS Lambda function that processes messages from an Amazon SQS queue. The function is configured with a batch size of 10, reserved concurrency of 5, and a timeout of 5 minutes. The SQS queue has a large backlog, and CloudWatch metrics show high throttling (Throttles) for the Lambda function. The function is idempotent and can process up to 100 messages in a single invocation. What is the MOST effective way to increase throughput without increasing the reserved concurrency?
Increase the batch size to 100.
Increasing the batch size allows each invocation to process more messages, reducing the number of invocations and the likelihood of throttling without increasing reserved concurrency.
Increase the reserved concurrency to 10.
Reduce the batch size to 1.
Enable the SQS queue to use long polling.
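A rough throughput model shows why the larger batch helps. The numbers below (invocation durations, message rates) are hypothetical, chosen only to illustrate the trade-off:

```python
# Rough SQS -> Lambda throughput model (hypothetical numbers, not an AWS API).
# With fixed reserved concurrency, throughput scales with batch size as long as
# per-invocation duration grows more slowly than the batch does.

def max_messages_per_second(concurrency: int, batch_size: int,
                            avg_invocation_seconds: float) -> float:
    """Upper bound on messages drained per second by the event source mapping."""
    return concurrency * batch_size / avg_invocation_seconds

# Batch of 10, 5 concurrent workers, ~2 s per invocation -> 25 msg/s.
before = max_messages_per_second(concurrency=5, batch_size=10,
                                 avg_invocation_seconds=2.0)

# Batch of 100: even if each invocation takes 5x longer, throughput doubles.
after = max_messages_per_second(concurrency=5, batch_size=100,
                                avg_invocation_seconds=10.0)

print(before, after)  # 25.0 50.0
```

The key point: with concurrency capped at 5, the only lever left is how much work each of those 5 invocations does.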
A developer has an AWS Lambda function that processes messages from an Amazon SQS standard queue. The function is idempotent and currently has a batch size of 10. The developer wants to increase throughput and increases the batch size to 100. After the change, CloudWatch metrics show a significant increase in throttles and the queue backlog is growing. The function's reserved concurrency is set to 10. What is the most effective action to resolve the throttling and improve throughput?
Increase the reserved concurrency of the Lambda function
Higher concurrency allows more invocations to run simultaneously, reducing throttling and enabling the function to consume the larger batch size effectively.
Increase the memory allocation of the Lambda function
Switch the SQS queue to a FIFO queue
Decrease the batch size back to 10
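A minimal sketch of raising reserved concurrency with boto3. The function name and the new concurrency value (50) are placeholders; the real API call is left commented out so the snippet runs offline:

```python
# Sketch: raising reserved concurrency for a Lambda function.
# "order-processor" and the value 50 are hypothetical.

def reserved_concurrency_request(function_name: str, concurrency: int) -> dict:
    """Build the parameters for lambda:PutFunctionConcurrency."""
    if concurrency < 1:
        raise ValueError("reserved concurrency must be >= 1")
    return {"FunctionName": function_name,
            "ReservedConcurrentExecutions": concurrency}

params = reserved_concurrency_request("order-processor", 50)
# import boto3
# boto3.client("lambda").put_function_concurrency(**params)
print(params)
```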
A developer is using AWS X-Ray to trace a serverless application. The application uses an AWS Lambda function to query a DynamoDB table. The trace shows that the DynamoDB subsegment takes a significant portion of the total response time. The developer wants to reduce the DynamoDB query latency. Which service should the developer integrate with the Lambda function to achieve the lowest latency for repeated read queries?
DynamoDB Accelerator (DAX)
Correct. DAX is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to 10x read performance improvement by caching frequently accessed data.
Amazon ElastiCache for Redis
DynamoDB Global Tables
DynamoDB Streams
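The reason DAX wins for repeated reads is the read-through cache pattern: the first read hits the table, later reads are served from memory. The toy class below illustrates that idea only — it is not the DAX client (DAX does this transparently behind a DynamoDB-compatible API):

```python
import time

class ReadThroughCache:
    """Toy in-memory read-through cache (illustrates the DAX idea, not the DAX client)."""

    def __init__(self, loader, ttl_seconds=300):
        self._loader = loader          # e.g. a DynamoDB GetItem call
        self._ttl = ttl_seconds
        self._store = {}               # key -> (value, expiry)

    def get(self, key):
        hit = self._store.get(key)
        if hit and hit[1] > time.monotonic():
            return hit[0]              # cache hit: no table read at all
        value = self._loader(key)      # cache miss: one real table read
        self._store[key] = (value, time.monotonic() + self._ttl)
        return value

calls = []
def fake_get_item(key):               # stand-in for the DynamoDB query
    calls.append(key)
    return {"pk": key}

cache = ReadThroughCache(fake_get_item)
cache.get("user#1"); cache.get("user#1"); cache.get("user#1")
print(len(calls))  # 1 -- only the first read hit the "table"
```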
A developer is building a serverless application using AWS Step Functions to orchestrate multiple AWS Lambda functions. One of the Lambda functions occasionally fails due to a transient error. The developer wants the Step Functions execution to automatically retry the failed task up to three times with exponential backoff. Which configuration should the developer set in the Step Functions state machine definition?
Add a Retry clause in the Lambda function's configuration with a maximum retry count of 3.
Use the Amazon States Language (ASL) Retry field in the Task state definition.
The ASL Retry field allows defining retry policies, including exponential backoff and maximum retry attempts.
Wrap the Lambda function invocation in a custom while loop within the function code.
Use the Amazon States Language Catch field in the Task state to redirect to a retry logic.
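The ASL `Retry` field from the correct answer looks like this in a Task state. The state is shown as a Python dict for readability; the Lambda ARN is a placeholder:

```python
import json

# A Task state with a Retry policy: three attempts with exponential backoff.
task_state = {
    "Type": "Task",
    "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ProcessOrder",
    "Retry": [
        {
            "ErrorEquals": ["States.TaskFailed"],  # match transient task failures
            "IntervalSeconds": 2,   # wait before the first retry
            "BackoffRate": 2.0,     # double the wait each attempt: 2s, 4s, 8s
            "MaxAttempts": 3        # up to three retries, as required
        }
    ],
    "End": True
}
print(json.dumps(task_state, indent=2))
```

`Catch` would come into play only after all retries are exhausted, which is why it is not the answer here.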
A developer is building a serverless application that processes orders. An order is placed and an event is published to an Amazon SNS topic. The SNS topic has multiple subscribers, including an SQS queue for order processing and a Lambda function for sending notifications. The developer wants to ensure that the SQS queue receives all messages reliably, even if the processing Lambda function fails temporarily. Which configuration should the developer set?
Enable a dead-letter queue on the SQS queue
Correct. A DLQ captures messages that cannot be processed after retries, ensuring no messages are lost.
Enable SNS delivery retries for HTTP endpoints
Set the SQS queue's visibility timeout to a value greater than the Lambda function's processing time
Configure the SNS topic to use server-side encryption
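A DLQ is attached to an SQS queue through a redrive policy. The sketch below builds that attribute; the DLQ ARN is a placeholder, `maxReceiveCount=5` is an assumed value, and the boto3 call is commented out so the snippet runs offline:

```python
import json

def redrive_attributes(dlq_arn: str, max_receive_count: int = 5) -> dict:
    """Build the Attributes map for sqs:SetQueueAttributes with a redrive policy."""
    return {
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": dlq_arn,
            # Failed receives allowed before a message moves to the DLQ.
            "maxReceiveCount": max_receive_count,
        })
    }

attrs = redrive_attributes("arn:aws:sqs:us-east-1:123456789012:orders-dlq")
# import boto3
# boto3.client("sqs").set_queue_attributes(QueueUrl=queue_url, Attributes=attrs)
print(attrs["RedrivePolicy"])
```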
A developer is building a REST API using Amazon API Gateway and AWS Lambda. The API receives a large number of requests with duplicate payloads from the same client within a short time window. To reduce Lambda invocations and improve performance, the developer wants to return the previously computed response for identical requests based on a unique client ID in the header. How can the developer achieve this using API Gateway features?
Enable API Gateway caching on the stage and configure the client ID header as a cache key parameter. Set a cache TTL of 5 minutes.
API Gateway caching uses cache key parameters to index responses. By including the client ID header in the cache key, different clients get separate cached responses. The TTL controls how long the response is cached.
Configure a usage plan with a quota and throttle settings to limit requests per client ID.
Use request validation to reject requests that have the same client ID within 5 minutes.
Reduce the Lambda function's batch size to 1 and implement caching logic inside the function using an external cache like ElastiCache.
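Stage caching takes two steps: mark the header as a cache key on the integration, and enable the cache with a TTL on the stage. A sketch under assumed names (`x-client-id`, the API/stage IDs) — the boto3 calls are commented out so it runs offline:

```python
# 1) Mark the client ID header as a cache key on the integration.
integration_cache_keys = ["method.request.header.x-client-id"]

# 2) Enable the stage cache and set the TTL via update_stage patch operations
#    (the /*/*/ wildcard applies the setting to every method on the stage).
def stage_cache_patch(ttl_seconds: int = 300) -> list:
    return [
        {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
        {"op": "replace", "path": "/*/*/caching/enabled", "value": "true"},
        {"op": "replace", "path": "/*/*/caching/ttlInSeconds",
         "value": str(ttl_seconds)},
    ]

ops = stage_cache_patch(300)  # 5-minute TTL, as in the answer
# import boto3
# apigw = boto3.client("apigateway")
# apigw.put_integration(..., cacheKeyParameters=integration_cache_keys)
# apigw.update_stage(restApiId=api_id, stageName="prod", patchOperations=ops)
print(ops[-1])
```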
Want more Development with AWS Services practice?
A developer has an AWS Lambda function that needs to read objects from an S3 bucket in another account. The Lambda function's execution role includes an IAM policy that allows s3:GetObject on the bucket. The bucket owner has added a bucket policy that grants s3:GetObject to the Lambda execution role. However, the Lambda function receives Access Denied errors. The S3 bucket uses SSE-KMS for encryption. What is the most likely cause?
The S3 bucket does not have versioning enabled.
The Lambda function's execution role does not have an explicit allow for s3:GetObject.
The Lambda function is not in the same AWS region as the S3 bucket.
The Lambda function does not have kms:Decrypt permission on the KMS key used by the bucket.
SSE-KMS requires both S3 read permissions and KMS decrypt permission. The bucket policy does not grant KMS permissions; the KMS key policy must allow the Lambda execution role.
A company has multiple AWS accounts managed under AWS Organizations. The security team requires that all Amazon S3 buckets with bucket names containing 'logs' must be encrypted with a specific KMS key (key ID: alias/logs-key) at rest. A developer must enforce this using an SCP (Service Control Policy). Which SCP effect and condition key should be used to deny any PutObject request that does not use the required KMS key?
Deny effect with a Condition: StringNotEquals on s3:x-amz-server-side-encryption-aws-kms-key-id
This SCP will deny any PutObject request that specifies a KMS key that is not the required key. The StringNotEquals condition ensures that if the request does not use the specific key ID, the request is denied. This is the standard way to enforce encryption with a specific KMS key using SCPs.
Deny effect with a Condition: StringEquals on s3:x-amz-server-side-encryption
Allow effect with a Condition: StringEquals on kms:RequestTag/key-id
Deny effect with a Condition: IpAddress on aws:SourceIp
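The SCP shape described above, as a Python dict. Note one practical wrinkle: this condition key compares against the key ARN supplied in the request header, so the policy should reference the key's ARN rather than the `alias/logs-key` alias. The ARN below is a placeholder:

```python
import json

scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyWrongKmsKeyOnLogsBuckets",
        "Effect": "Deny",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::*logs*/*",  # buckets with 'logs' in the name
        "Condition": {
            "StringNotEquals": {
                # Deny unless the request specifies exactly this key.
                "s3:x-amz-server-side-encryption-aws-kms-key-id":
                    "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"
            }
        }
    }]
}
print(json.dumps(scp["Statement"][0]["Effect"]))
```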
A developer needs to grant a user in another AWS account (Account B) read-only access to objects in an Amazon S3 bucket owned by Account A. The developer has already added a bucket policy that grants s3:GetObject access to the IAM user in Account B. However, the user in Account B still gets Access Denied when trying to read objects. What additional configuration is required?
The user in Account B must have an IAM policy that allows s3:GetObject on the bucket ARN
Cross-account access requires both a bucket policy that grants the user permissions and an IAM policy in the user's account that allows the action. The IAM policy is necessary because the default is to deny all actions.
The bucket must be made public by unchecking 'Block all public access'
The developer must create a new IAM role in Account A and have the user in Account B assume that role
The user in Account B must use the S3 console instead of the AWS CLI
A developer needs to ensure that every cryptographic operation performed on an AWS KMS customer master key (CMK) used for server-side encryption in Amazon S3 is recorded in AWS CloudTrail for auditing. The developer has already enabled CloudTrail and is logging management events. However, the security team wants to see all calls to the KMS Decrypt and Encrypt APIs for this specific key. What must the developer do?
Enable CloudTrail data events for the S3 bucket containing the encrypted objects.
Create an additional CloudTrail trail that logs all management events for the KMS key.
Enable CloudTrail data events for the specific KMS key ARN.
CloudTrail data events for KMS record every call to Decrypt, Encrypt, GenerateDataKey, etc. By specifying the key ARN in the data event selector, only operations on that key are logged, meeting the audit requirement without excessive logging.
Enable CloudTrail Insights events on the existing trail.
A developer is building a mobile application that uses Amazon Cognito for user authentication. After a user signs in, the application needs to access an Amazon DynamoDB table. The developer has set up an identity pool with an authenticated role. The IAM role attached to the authenticated identity has a policy allowing the required DynamoDB actions. However, users report that they cannot perform DynamoDB operations. What is the MOST likely cause of this issue?
The identity pool is not configured to use the authenticated role.
The app is not passing the correct identity ID.
The IAM role's trust policy does not allow Cognito to assume it.
The trust policy of the IAM role must grant the Cognito Identity service principal the sts:AssumeRoleWithWebIdentity permission. Without it, Cognito cannot issue credentials, resulting in denied actions.
The DynamoDB table is encrypted with a different KMS key.
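The standard trust policy for an identity pool's authenticated role looks like this (the identity pool ID is a placeholder). The action is `sts:AssumeRoleWithWebIdentity`, not `sts:AssumeRole`:

```python
# Trust policy for a Cognito identity pool authenticated role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Federated": "cognito-identity.amazonaws.com"},
        "Action": "sts:AssumeRoleWithWebIdentity",
        "Condition": {
            # Restrict to this identity pool...
            "StringEquals": {
                "cognito-identity.amazonaws.com:aud":
                    "us-east-1:EXAMPLE-identity-pool-id"
            },
            # ...and to signed-in (authenticated) identities only.
            "ForAnyValue:StringLike": {
                "cognito-identity.amazonaws.com:amr": "authenticated"
            }
        }
    }]
}
print(trust_policy["Statement"][0]["Action"])
```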
A company uses a customer managed AWS KMS key to encrypt sensitive data stored in DynamoDB. A Lambda function reads from the DynamoDB table and needs to decrypt the data. The Lambda function's execution role has an IAM policy that allows kms:Decrypt on the key. However, access is denied. What must the developer add to the KMS key policy to resolve the issue?
Add a statement granting kms:Decrypt to the Lambda function's execution role.
Correct. The key policy must explicitly allow the IAM role to perform kms:Decrypt.
Add a statement granting kms:Decrypt to the Lambda function's resource-based policy.
Add a statement granting kms:Decrypt to the Lambda service principal.
Add a statement granting kms:Decrypt to the account root user with a condition for the Lambda function.
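The key policy statement that unblocks the role looks like this (role ARN is a placeholder). Unlike an identity policy, a key policy statement names the principal explicitly:

```python
# Key policy statement allowing the Lambda execution role to decrypt.
key_policy_statement = {
    "Sid": "AllowLambdaRoleDecrypt",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/order-reader-lambda-role"
    },
    "Action": "kms:Decrypt",
    "Resource": "*"   # in a key policy, "*" means this key itself
}
print(key_policy_statement["Sid"])
```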
Want more Security practice?
A developer is using AWS CodeDeploy with a blue/green deployment strategy to update an application running on Amazon ECS with the Fargate launch type. After the new (green) task set is created and traffic is shifted to it, users immediately report errors when trying to write data. The developer discovers that the green task set is connecting to a different database than the blue task set. The database endpoints are configured in the ECS task definition. What is the simplest way to prevent this issue in future deployments?
Modify the blue/green deployment configuration to use the same database endpoint for both task sets by updating the environment variables in the task definition before deployment.
Environment variables in the task definition can be changed without modifying the container image. Set the database endpoint to the same value for both blue and green task sets. This is the simplest solution.
Create two separate Amazon RDS databases and use an Amazon Route 53 weighted routing policy to distribute traffic.
Use an Application Load Balancer (ALB) with stickiness to route each user to the correct task set.
Use AWS CloudFormation to create a new database stack for each deployment and update the task definition dynamically.
A developer is using AWS CodeDeploy with a blue/green deployment on an Amazon ECS service running on Fargate. The developer wants to ensure that the new (green) task set is fully healthy and serving traffic before the old (blue) task set is terminated. The deployment should automatically roll back to the blue task set if the green task set fails health checks. Which configuration should the developer set in the CodeDeploy deployment group?
Deployment type: blue/green, with rollback configuration enabled to trigger automatic rollback and reroute traffic to the original task set
Correct. This configuration ensures that if the new task set fails, CodeDeploy rolls back to the previous version.
Deployment type: blue/green, Deployment configuration: CodeDeployDefault.ECSAllAtOnce
Deployment type: blue/green, Deployment configuration: CodeDeployDefault.ECSLinear10PercentEvery1Minutes
Deployment type: blue/green, with an Application Load Balancer
A developer is deploying a serverless application using the AWS Serverless Application Model (SAM). The application includes an Amazon API Gateway HTTP API and several AWS Lambda functions. The developer wants to implement a canary deployment for the API Gateway stage so that 10% of traffic is shifted to the new version for 30 minutes before the remaining 90% is shifted. Which SAM resource attribute should the developer configure on the API Gateway resource?
AutoPublishAlias
DeploymentPreference
DeploymentPreference with a Canary setting enables gradual traffic shifting for the API Gateway stage.
ProvisionedConcurrencyConfig
EventInvokeConfig
A developer is deploying a multi-container Docker application on Amazon ECS using the Fargate launch type. The application consists of a web server and a background worker. The web server must be scaled independently and must be accessible from the internet via an Application Load Balancer. The worker should not be accessible from the internet. Which ECS configuration should the developer use?
Create one ECS service with both containers in the same task definition, but only expose the web server port.
Create two separate ECS services, each with its own task definition, and place the web server in a public subnet with the worker in a private subnet.
Correct. Separate services allow independent scaling. Placing the web server in a public subnet with an ALB provides internet access, while the worker in a private subnet remains isolated.
Create one ECS service with two tasks, each containing one container.
Create one ECS service with two containers in the same task, and use a service discovery to expose the worker.
A developer is using AWS CodeDeploy to deploy an application to an EC2 Auto Scaling group. The application must remain fully available; only one instance should be taken offline at a time. The developer wants to configure the deployment to update instances one by one, ensuring that the deployment fails fast if any instance fails to deploy. Which deployment configuration should the developer choose?
CodeDeployDefault.AllAtOnce
CodeDeployDefault.HalfAtATime
CodeDeployDefault.OneAtATime
This deploys to one instance at a time, minimizing impact and providing fast failure detection.
CodeDeployDefault.BlueGreen
A developer is deploying an application to Amazon ECS using AWS CodeDeploy with a blue/green deployment strategy. After the new task set is created, it fails health checks. The developer wants to immediately route traffic back to the original task set without waiting for CodeDeploy to complete the rollback process. Which action should the developer take?
Update the ECS service to set the desired count of the new task set to zero.
Use the CodeDeploy console to stop the deployment and then choose to reroute traffic.
Correct. CodeDeploy allows you to stop the deployment and reroute traffic to the original task set.
Delete the new task set.
Update the Application Load Balancer listener rule to forward traffic to the original target group.
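A minimal sketch of that action with boto3. The deployment ID is a placeholder, and the call itself is commented out so the snippet runs offline:

```python
# Stop a CodeDeploy deployment and roll traffic back to the original task set.
def stop_request(deployment_id: str) -> dict:
    """Build the parameters for codedeploy:StopDeployment."""
    return {
        "deploymentId": deployment_id,
        "autoRollbackEnabled": True,  # reroute traffic to the original task set
    }

params = stop_request("d-EXAMPLE123")
# import boto3
# boto3.client("codedeploy").stop_deployment(**params)
print(params["autoRollbackEnabled"])
```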
Want more Deployment practice?
A developer deployed a new version of an AWS Lambda function that is part of a serverless application. The function uses an Amazon DynamoDB table as a data store. After deployment, the developer notices that the function's latency has increased significantly for some requests. CloudWatch metrics show that the increase is due to DynamoDB throttle events. The function is configured with a reserved concurrency of 100 and the DynamoDB table has 5 read capacity units (RCUs) and 5 write capacity units (WCUs). What is the most effective way to reduce the throttling while maintaining application performance?
Decrease the reserved concurrency of the Lambda function to 10
Increase the read and write capacity units on the DynamoDB table
Increasing RCU and WCU directly increases the number of operations the table can handle, reducing throttling.
Enable DynamoDB Accelerator (DAX) for caching reads
Enable auto scaling on the DynamoDB table
A developer is running an AWS Lambda function that is triggered by Amazon S3 events. The function writes processed data to an Amazon DynamoDB table. Over time, the function's execution time has increased significantly. CloudWatch Logs show many DynamoDBProvisionedThroughputExceededException errors. The table is configured with 5 read capacity units (RCUs) and 5 write capacity units (WCUs). The function performs both reads and writes. Which optimization will MOST effectively reduce throttling errors while maintaining performance?
Increase the RCUs and WCUs of the table to 50 each
Switch the DynamoDB table to on-demand capacity mode
On-demand mode automatically scales read and write capacity based on traffic. This eliminates throttling caused by insufficient provisioned capacity and requires no capacity planning.
Implement a DynamoDB Accelerator (DAX) cluster for caching reads
Increase Lambda function memory to 1024 MB
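Switching to on-demand capacity is a single `UpdateTable` call. The table name is a placeholder and the boto3 call is commented out so the snippet runs offline:

```python
# Switch a DynamoDB table to on-demand (pay-per-request) capacity.
def on_demand_request(table_name: str) -> dict:
    """Build the parameters for dynamodb:UpdateTable."""
    return {"TableName": table_name, "BillingMode": "PAY_PER_REQUEST"}

params = on_demand_request("processed-events")
# import boto3
# boto3.client("dynamodb").update_table(**params)
# Staying provisioned instead? Pass BillingMode="PROVISIONED" plus
# ProvisionedThroughput={"ReadCapacityUnits": 50, "WriteCapacityUnits": 50}.
print(params["BillingMode"])
```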
A web application runs on Amazon EC2 instances behind an Application Load Balancer (ALB). During peak hours, users report receiving HTTP 503 (Service Unavailable) errors. The developer checks Amazon CloudWatch metrics and finds that the ALB's request count is high but below the limit, and the target group's healthy host count drops to zero intermittently. The Auto Scaling group for the instances is configured with a minimum of 2, maximum of 10, and a simple scaling policy to add 2 instances when CPU utilization exceeds 70% for 5 consecutive minutes. What is the most likely cause of the 503 errors?
The Auto Scaling group's cooldown period prevents new instances from being added quickly enough during rapid traffic spikes
After a scaling activity, the cooldown period (300s by default) pauses further scaling, causing delays that can result in all instances becoming unhealthy and returning 503 errors.
The ALB's idle timeout is set too low, causing dropped connections
The Auto Scaling group's maximum capacity of 10 is insufficient
The health check grace period is preventing instances from being marked healthy
A developer is troubleshooting an AWS Lambda function that processes large CSV files (up to 1 GB) uploaded to an Amazon S3 bucket. The function uses Python and the pandas library to perform data transformations. Recently, the function started timing out on large files. CloudWatch Logs show that the function's execution time is close to the 15-minute Lambda timeout, and memory utilization peaks at around 80% of the configured 3,008 MB. The function has not been modified in months. Which action will most likely resolve the timeout issue without requiring code changes?
Increase the memory allocation of the Lambda function to the maximum available (10,240 MB)
More memory provides more CPU, speeding up the CPU-intensive pandas processing and reducing execution time below the timeout.
Increase the function timeout to the maximum allowed (900 seconds is already the max)
Use S3 Select to filter columns and rows before invoking the Lambda function
Increase the batch size of the S3 event notification to invoke the function with multiple files
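Raising memory (and with it, the proportional CPU share) is one configuration call. The function name is a placeholder; the boto3 call is commented out so the sketch runs offline:

```python
# Raise Lambda memory to the maximum; CPU scales proportionally with memory.
MAX_LAMBDA_MEMORY_MB = 10240

def memory_update_request(function_name: str, memory_mb: int) -> dict:
    """Build the parameters for lambda:UpdateFunctionConfiguration."""
    if not 128 <= memory_mb <= MAX_LAMBDA_MEMORY_MB:
        raise ValueError("Lambda memory must be between 128 and 10240 MB")
    return {"FunctionName": function_name, "MemorySize": memory_mb}

params = memory_update_request("csv-transformer", MAX_LAMBDA_MEMORY_MB)
# import boto3
# boto3.client("lambda").update_function_configuration(**params)
print(params["MemorySize"])
```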
A developer is troubleshooting an AWS Lambda function that processes records from an Amazon Kinesis Data Stream. The function is configured with a batch size of 100 and a parallelization factor of 1. The developer notices that the iterator age is increasing, indicating that the function is not keeping up with the stream. CloudWatch Logs show that the function is not experiencing errors or throttling, but the execution time per invocation is close to the 5-minute timeout. The stream has 10 shards. Which action will most likely increase processing throughput?
Increase the batch size to 500.
Increase the parallelization factor to 10.
Increase the Lambda function memory and CPU allocation.
Increasing memory increases CPU allocation proportionally, which can make each invocation faster. This reduces the per-batch processing time, allowing the function to keep up with the stream and decrease the iterator age.
Split the stream into more shards.
A developer is troubleshooting an AWS Lambda function that is invoked from an Amazon S3 bucket via event notifications. The function processes images and stores metadata in Amazon DynamoDB. The developer notices that some images are being processed multiple times, resulting in duplicate entries in DynamoDB. The S3 event notification is configured to send events to the Lambda function with the 's3:ObjectCreated:*' event type. The function uses the 'uuid' library to generate a unique ID for each image upon processing. What is the most likely cause of the duplicate processing?
S3 event notifications are delivered at least once, and the Lambda function is not idempotent.
S3 can send the same event multiple times. Without idempotency checks (e.g., using the S3 object key as the DynamoDB primary key), each event creates a new item, causing duplicates.
The Lambda function's concurrency is set too high, causing race conditions.
The DynamoDB table does not have a primary key that prevents duplicates.
The S3 bucket is configured with versioning, causing multiple object creation events.
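The idempotency fix suggested in the explanation can be sketched as a conditional put keyed on the S3 object key instead of a fresh UUID. Table and attribute names are placeholders; the boto3 call is commented out so the snippet runs offline:

```python
# Idempotent DynamoDB write: the S3 object key is the primary key, and a
# condition expression rejects duplicate deliveries of the same event.
def idempotent_put_request(table: str, s3_key: str, metadata: str) -> dict:
    """Build the parameters for dynamodb:PutItem."""
    return {
        "TableName": table,
        "Item": {
            "pk": {"S": s3_key},      # S3 key, not uuid(), as the primary key
            "meta": {"S": metadata},
        },
        # A redelivered event fails this condition instead of writing again.
        "ConditionExpression": "attribute_not_exists(pk)",
    }

params = idempotent_put_request("image-metadata", "uploads/cat.jpg", "800x600")
# import boto3
# boto3.client("dynamodb").put_item(**params)
# Repeats raise ConditionalCheckFailedException, which the function can ignore.
print(params["ConditionExpression"])
```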
Want more Troubleshooting and Optimization practice?
The DVA-C02 exam has up to 65 questions and must be completed in 130 minutes. The passing score is 720/1000.
The DVA-C02 exam uses multiple-choice, multiple-select, drag-and-drop, and exhibit-based questions. Exhibit questions show CLI output, network diagrams, or routing tables and ask you to interpret them — exactly the format Courseiva uses.
The exam covers 4 domains: Development with AWS Services, Security, Deployment, Troubleshooting and Optimization. Questions are weighted by domain — higher-weight domains appear more on your actual exam.
These are original exam-style practice questions written against the official Amazon Web Services DVA-C02 exam objectives, not copies of real exam content. Courseiva focuses on genuine understanding, not memorisation of braindumps.
Courseiva tracks your accuracy per domain and routes you toward weak areas automatically. Free, no account required.