Amazon Web Services · Free Practice Questions · Last reviewed May 2026

DVA-C02 Exam Questions and Answers

24 real exam-style questions organised by domain, each with the correct answer highlighted and a plain-English explanation of why it's right — and why the others are wrong.

65 exam questions
130 min time limit
Pass at 720 / 1000
4 exam domains

Domain 1: Development with AWS Services


A developer has an AWS Lambda function that processes messages from an Amazon SQS queue. The function is configured with a batch size of 10, reserved concurrency of 5, and a timeout of 5 minutes. The SQS queue has a large backlog, and CloudWatch metrics show high throttling (Throttles) for the Lambda function. The function is idempotent and can process up to 100 messages in a single invocation. What is the MOST effective way to increase throughput without increasing the reserved concurrency?

A

Increase the batch size to 100.

Increasing the batch size allows each invocation to process more messages, reducing the number of invocations and the likelihood of throttling without increasing reserved concurrency.

B

Increase the reserved concurrency to 10.

C

Reduce the batch size to 1.

D

Enable the SQS queue to use long polling.

Why: With reserved concurrency of 5, at most 5 invocations run concurrently, and each one processes only 10 messages, so the event source mapping's polling outpaces the function and invocations are throttled. Increasing the batch size to 100 lets each invocation process ten times as many messages, reducing the number of invocations needed and the likelihood of throttling without touching reserved concurrency. The 5-minute timeout is sufficient for 100 messages, and the function is idempotent, so larger batches are safe. Increasing reserved concurrency (B) is ruled out by the question, reducing the batch size to 1 (C) would multiply the number of invocations and worsen throttling, and long polling (D) reduces empty receives but does not raise processing throughput.
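The arithmetic behind this answer can be sketched in a few lines. This is a back-of-the-envelope bound, assuming each invocation takes roughly the same wall-clock time regardless of batch size:

```python
# Upper bound on messages being processed at any instant for an SQS -> Lambda
# event source mapping; numbers match the scenario and are illustrative.

def max_messages_in_flight(reserved_concurrency: int, batch_size: int) -> int:
    """At most reserved_concurrency invocations run at once, each holding one batch."""
    return reserved_concurrency * batch_size

before = max_messages_in_flight(reserved_concurrency=5, batch_size=10)    # 50
after = max_messages_in_flight(reserved_concurrency=5, batch_size=100)    # 500
```

A tenfold increase in batch size gives a tenfold increase in the in-flight ceiling at the same concurrency, which is why option A works without touching reserved concurrency.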

A developer has an AWS Lambda function that processes messages from an Amazon SQS standard queue. The function is idempotent and currently has a batch size of 10. The developer wants to increase throughput and increases the batch size to 100. After the change, CloudWatch metrics show a significant increase in throttles and the queue backlog is growing. The function's reserved concurrency is set to 10. What is the most effective action to resolve the throttling and improve throughput?

A

Increase the reserved concurrency of the Lambda function

Higher concurrency allows more invocations to run simultaneously, reducing throttling and enabling the function to consume the larger batch size effectively.

B

Increase the memory allocation of the Lambda function

C

Switch the SQS queue to a FIFO queue

D

Decrease the batch size back to 10

Why: Increasing the batch size makes each invocation process more messages, but if the function's reserved concurrency is too low, the event source mapping's invocations are throttled and the backlog grows. Raising reserved concurrency lets more invocations run concurrently, so the function can drain the queue faster. The larger batch size alone does not help if the function cannot run enough concurrent executions. Increasing memory (B) speeds up per-message processing but the bottleneck here is concurrency. Switching to a FIFO queue (C) does not address the throttling and adds ordering constraints. Decreasing the batch size (D) would only return to the original, slower configuration.

A developer is using AWS X-Ray to trace a serverless application. The application uses an AWS Lambda function to query a DynamoDB table. The trace shows that the DynamoDB subsegment takes a significant portion of the total response time. The developer wants to reduce the DynamoDB query latency. Which service should the developer integrate with the Lambda function to achieve the lowest latency for repeated read queries?

A

DynamoDB Accelerator (DAX)

Correct. DAX is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to 10x read performance improvement by caching frequently accessed data.

B

Amazon ElastiCache for Redis

C

DynamoDB Global Tables

D

DynamoDB Streams

Why: DynamoDB Accelerator (DAX) is an in-memory cache that provides microsecond latency for repeated read queries. It is ideal for applications that need to reduce DynamoDB response times, especially when the same items are frequently read.

A developer is building a serverless application using AWS Step Functions to orchestrate multiple AWS Lambda functions. One of the Lambda functions occasionally fails due to a transient error. The developer wants the Step Functions execution to automatically retry the failed task up to three times with exponential backoff. Which configuration should the developer set in the Step Functions state machine definition?

A

Add a Retry clause in the Lambda function's configuration with a maximum retry count of 3.

B

Use the Amazon States Language (ASL) Retry field in the Task state definition.

The ASL Retry field allows defining retry policies, including exponential backoff and maximum retry attempts.

C

Wrap the Lambda function invocation in a custom while loop within the function code.

D

Use the Amazon States Language Catch field in the Task state to redirect to a retry logic.

Why: Step Functions uses the Amazon States Language (ASL) to define state machines. The Retry field in a Task state specifies error handling, including the errors to match, the maximum number of attempts, and the backoff rate. The Lambda function's own configuration cannot control retries for synchronous invocations from Step Functions; retry logic belongs in the state machine. The Catch field only handles errors after retries are exhausted (or when no Retry is defined), so it does not by itself provide retries with backoff. A while loop inside the function code (C) reinvents what ASL already provides and consumes function execution time.
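A minimal Task state with the Retry field described above might look like the following, built here as a Python dict for illustration. The state name and function ARN are placeholders:

```python
import json

# ASL Task state with a Retry policy: up to 3 retry attempts with
# exponential backoff (2s, 4s, 8s between attempts).
task_state = {
    "ProcessOrder": {
        "Type": "Task",
        "Resource": "arn:aws:lambda:us-east-1:123456789012:function:process-order",
        "Retry": [
            {
                "ErrorEquals": ["States.TaskFailed"],  # match transient task failures
                "IntervalSeconds": 2,                  # wait before first retry
                "MaxAttempts": 3,                      # up to three retries
                "BackoffRate": 2.0,                    # double the interval each time
            }
        ],
        "End": True,
    }
}
asl_json = json.dumps(task_state, indent=2)
```

The same structure is what would appear under `States` in the state machine definition; `ErrorEquals` can also name specific Lambda error types instead of the generic `States.TaskFailed`.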

A developer is building a serverless application that processes orders. An order is placed and an event is published to an Amazon SNS topic. The SNS topic has multiple subscribers, including an SQS queue for order processing and a Lambda function for sending notifications. The developer wants to ensure that the SQS queue receives all messages reliably, even if the processing Lambda function fails temporarily. Which configuration should the developer set?

A

Enable a dead-letter queue on the SQS queue

Correct. A DLQ captures messages that cannot be processed after retries, ensuring no messages are lost.

B

Enable SNS delivery retries for HTTP endpoints

C

Set the SQS queue's visibility timeout to a value greater than the Lambda function's processing time

D

Configure the SNS topic to use server-side encryption

Why: To handle messages that cannot be processed after a maximum number of retries, an SQS dead-letter queue (DLQ) should be configured on the SQS queue. Messages that are not processed successfully are moved to the DLQ for later analysis. SNS delivery retries are for HTTP/S endpoints, not for SQS. Visibility timeout controls how long a message is invisible after being picked up. Server-side encryption protects data at rest but does not affect reliability.
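The DLQ configuration above is attached to the source queue as a redrive policy. A sketch of the attribute payload, with a placeholder queue ARN and an illustrative `maxReceiveCount` (with boto3 this dict would be passed to `sqs.set_queue_attributes`):

```python
import json

# RedrivePolicy is a JSON string inside the SQS attributes map.
redrive_policy = {
    "deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:orders-dlq",
    "maxReceiveCount": "5",  # move a message to the DLQ after 5 failed receives
}
attributes = {"RedrivePolicy": json.dumps(redrive_policy)}
```

Note that the redrive policy lives on the source queue, not the DLQ itself, and `maxReceiveCount` is a string in the API.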

A developer is building a REST API using Amazon API Gateway and AWS Lambda. The API receives a large number of requests with duplicate payloads from the same client within a short time window. To reduce Lambda invocations and improve performance, the developer wants to return the previously computed response for identical requests based on a unique client ID in the header. How can the developer achieve this using API Gateway features?

A

Enable API Gateway caching on the stage and configure the client ID header as a cache key parameter. Set a cache TTL of 5 minutes.

API Gateway caching uses cache key parameters to index responses. By including the client ID header in the cache key, different clients get separate cached responses. The TTL controls how long the response is cached.

B

Configure a usage plan with a quota and throttle settings to limit requests per client ID.

C

Use request validation to reject requests that have the same client ID within 5 minutes.

D

Reduce the Lambda function's batch size to 1 and implement caching logic inside the function using an external cache like ElastiCache.

Why: API Gateway supports response caching based on cache key parameters. By enabling caching on the stage and configuring the client ID header as a cache key parameter, API Gateway will cache responses per client ID. This avoids invoking the Lambda function for duplicate requests within the cache TTL. Request validation and usage plans do not provide per-client caching. Although reducing batch size might help in other scenarios, it is not relevant to caching.
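A sketch of the settings involved, assuming a header named `Client-Id` (the header name and TTL are illustrative; in the API these values are applied via stage method settings and the method's cache key parameters):

```python
# Stage-level caching settings: cache on, responses kept for 5 minutes.
cache_settings = {
    "cachingEnabled": True,
    "cacheTtlInSeconds": 300,  # 5-minute TTL
}

# Including the client ID header in the cache key gives each client its own
# cached response instead of one shared entry per resource/method.
cache_key_parameters = ["method.request.header.Client-Id"]
```

Identical requests from the same client within the TTL are then served from the cache without invoking the Lambda integration.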


Domain 2: Security


A developer has an AWS Lambda function that needs to read objects from an S3 bucket in another account. The Lambda function's execution role includes an IAM policy that allows s3:GetObject on the bucket. The bucket owner has added a bucket policy that grants s3:GetObject to the Lambda execution role. However, the Lambda function receives Access Denied errors. The S3 bucket uses SSE-KMS for encryption. What is the most likely cause?

A

The S3 bucket does not have versioning enabled.

B

The Lambda function's execution role does not have an explicit allow for s3:GetObject.

C

The Lambda function is not in the same AWS region as the S3 bucket.

D

The Lambda function does not have kms:Decrypt permission on the KMS key used by the bucket.

SSE-KMS requires both S3 read permissions and KMS decrypt permission. The bucket policy does not grant KMS permissions; the KMS key policy must allow the Lambda execution role.

Why: When SSE-KMS is used, the calling principal must have kms:Decrypt permission on the KMS key. The bucket policy cannot grant KMS permissions; the key policy must allow the Lambda execution role. The IAM policy for the role already allows s3:GetObject, so that is not the issue. Versioning and region are not relevant to access denial in this scenario.

A company has multiple AWS accounts managed under AWS Organizations. The security team requires that all Amazon S3 buckets with bucket names containing 'logs' must be encrypted with a specific KMS key (key ID: alias/logs-key) at rest. A developer must enforce this using an SCP (Service Control Policy). Which SCP effect and condition key should be used to deny any PutObject request that does not use the required KMS key?

A

Deny effect with a Condition: StringNotEquals on s3:x-amz-server-side-encryption-aws-kms-key-id

This SCP will deny any PutObject request that specifies a KMS key that is not the required key. The StringNotEquals condition ensures that if the request does not use the specific key ID, the request is denied. This is the standard way to enforce encryption with a specific KMS key using SCPs.

B

Deny effect with a Condition: StringEquals on s3:x-amz-server-side-encryption

C

Allow effect with a Condition: StringEquals on kms:RequestTag/key-id

D

Deny effect with a Condition: IpAddress on aws:SourceIp

Why: Service Control Policies (SCPs) can enforce conditions on API calls. To deny PutObject requests that do not use the specified KMS key, the SCP should have a Deny effect with a condition that matches requests where the encryption key is not the required key. The condition key s3:x-amz-server-side-encryption-aws-kms-key-id is used to check the KMS key ID. The condition should be a StringNotEquals to deny if the key ID does not match.
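A sketch of the SCP described above, built as a Python dict for illustration. The key ARN and bucket pattern are placeholders; note that this condition key is compared against the full key ARN, not the `alias/logs-key` alias:

```python
# Deny s3:PutObject on buckets whose names contain 'logs' unless the request
# specifies the required KMS key.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::*logs*/*",
            "Condition": {
                "StringNotEquals": {
                    # Denies when the request's SSE-KMS key is not this key.
                    "s3:x-amz-server-side-encryption-aws-kms-key-id":
                        "arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab"
                }
            },
        }
    ],
}
```

Because SCPs only filter permissions, the Deny applies on top of whatever the member accounts' own IAM policies allow.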

A developer needs to grant a user in another AWS account (Account B) read-only access to objects in an Amazon S3 bucket owned by Account A. The developer has already added a bucket policy that grants s3:GetObject access to the IAM user in Account B. However, the user in Account B still gets Access Denied when trying to read objects. What additional configuration is required?

A

The user in Account B must have an IAM policy that allows s3:GetObject on the bucket ARN

Cross-account access requires both a bucket policy that grants the user permissions and an IAM policy in the user's account that allows the action. The IAM policy is necessary because the default is to deny all actions.

B

The bucket must be made public by unchecking 'Block all public access'

C

The developer must create a new IAM role in Account A and have the user in Account B assume that role

D

The user in Account B must use the S3 console instead of the AWS CLI

Why: For cross-account access to S3, the bucket policy (resource-based policy) grants access to the principal in the other account. However, the user's own account must also explicitly permit the action via an IAM policy attached to the user. Without that, the user is not allowed to make the request even if the bucket allows it.
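The missing half of the cross-account pair is the identity policy in Account B. A minimal sketch, with a placeholder bucket name (this complements the bucket policy in Account A; it does not replace it):

```python
# IAM policy attached to the Account B user: explicitly allow the same
# action the Account A bucket policy grants.
identity_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-bucket/*",  # objects, not the bucket itself
        }
    ],
}
```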

A developer needs to ensure that every cryptographic operation performed on an AWS KMS customer master key (CMK) used for server-side encryption in Amazon S3 is recorded in AWS CloudTrail for auditing. The developer has already enabled CloudTrail and is logging management events. However, the security team wants to see all calls to the KMS Decrypt and Encrypt APIs for this specific key. What must the developer do?

A

Enable CloudTrail data events for the S3 bucket containing the encrypted objects.

B

Create an additional CloudTrail trail that logs all management events for the KMS key.

C

Enable CloudTrail data events for the specific KMS key ARN.

CloudTrail data events for KMS record every call to Decrypt, Encrypt, GenerateDataKey, etc. By specifying the key ARN in the data event selector, only operations on that key are logged, meeting the audit requirement without excessive logging.

D

Enable CloudTrail Insights events on the existing trail.

Why: By default, CloudTrail logs management events for KMS, but to capture data plane operations such as Decrypt and Encrypt, you must enable CloudTrail data events for KMS. This is done by adding a data event selector for the specific key ARN in the trail configuration. Management events only cover KMS key management operations (e.g., CreateKey, ScheduleKeyDeletion).
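A sketch of an advanced event selector scoped to one key; the selector name and key ARN are placeholders (with boto3 this would be passed in the `AdvancedEventSelectors` list of `cloudtrail.put_event_selectors`):

```python
# Advanced event selector: data events, KMS key resource type, one key ARN.
kms_data_events_selector = {
    "Name": "Log Decrypt/Encrypt on the logs key",
    "FieldSelectors": [
        {"Field": "eventCategory", "Equals": ["Data"]},
        {"Field": "resources.type", "Equals": ["AWS::KMS::Key"]},
        {"Field": "resources.ARN", "Equals": [
            "arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab"
        ]},
    ],
}
```

Scoping to the key ARN keeps the trail from logging every KMS data event in the account, which matters because KMS data events can be extremely high volume.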

A developer is building a mobile application that uses Amazon Cognito for user authentication. After a user signs in, the application needs to access an Amazon DynamoDB table. The developer has set up an identity pool with an authenticated role. The IAM role attached to the authenticated identity has a policy allowing the required DynamoDB actions. However, users report that they cannot perform DynamoDB operations. What is the MOST likely cause of this issue?

A

The identity pool is not configured to use the authenticated role.

B

The app is not passing the correct identity ID.

C

The IAM role's trust policy does not allow Cognito to assume it.

The trust policy of the IAM role must allow the cognito-identity.amazonaws.com federated principal to call sts:AssumeRoleWithWebIdentity. Without it, Cognito cannot issue credentials, resulting in denied actions.

D

The DynamoDB table is encrypted with a different KMS key.

Why: For Cognito identity pools to grant permissions to authenticated users, the IAM role's trust policy must allow the Cognito Identity service to assume the role. Without the proper trust relationship, Cognito cannot obtain temporary credentials for the user, even if the role's permissions policy is correct. The identity ID is handled automatically by the SDK (B), and a configured identity pool uses its authenticated role by default (A). A KMS key on the DynamoDB table (D) could also surface as an access error, but the scenario gives no indication of encryption, so the trust policy is the most likely cause.
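A sketch of the trust policy the authenticated role needs, in the shape the Cognito console generates; the identity pool ID is a placeholder. Identity pools federate through web identity, so the action is sts:AssumeRoleWithWebIdentity:

```python
# Trust policy for the identity pool's authenticated role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Federated": "cognito-identity.amazonaws.com"},
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                # Only tokens issued for this identity pool...
                "StringEquals": {
                    "cognito-identity.amazonaws.com:aud": "us-east-1:example-identity-pool-id"
                },
                # ...and only for authenticated (signed-in) identities.
                "ForAnyValue:StringLike": {
                    "cognito-identity.amazonaws.com:amr": "authenticated"
                },
            },
        }
    ],
}
```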

A company uses a customer managed AWS KMS key to encrypt sensitive data stored in DynamoDB. A Lambda function reads from the DynamoDB table and needs to decrypt the data. The Lambda function's execution role has an IAM policy that allows kms:Decrypt on the key. However, access is denied. What must the developer add to the KMS key policy to resolve the issue?

A

Add a statement granting kms:Decrypt to the Lambda function's execution role.

Correct. The key policy must explicitly allow the IAM role to perform kms:Decrypt.

B

Add a statement granting kms:Decrypt to the Lambda function's resource-based policy.

C

Add a statement granting kms:Decrypt to the Lambda service principal.

D

Add a statement granting kms:Decrypt to the account root user with a condition for the Lambda function.

Why: AWS KMS key policies are resource-based policies that control who can use the key. Even if an IAM policy grants a principal permission, the key policy must explicitly allow that principal to use the key (unless the key policy grants access to the account root, which then delegates to IAM). By default, KMS key policies do not grant access to any IAM roles except the root user. Therefore, to allow the Lambda execution role to decrypt, the key policy must include a statement granting the role the necessary permissions.
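A sketch of the key policy statement to add; the account ID and role name are placeholders. This statement sits alongside the key's existing statements (such as the default root-account statement):

```python
# Key policy statement granting the Lambda execution role decrypt access.
key_policy_statement = {
    "Sid": "AllowLambdaRoleDecrypt",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/order-processor-lambda-role"
    },
    "Action": "kms:Decrypt",
    # In a key policy, the resource is the key itself, written as "*".
    "Resource": "*",
}
```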


Domain 3: Deployment


A developer is using AWS CodeDeploy with a blue/green deployment strategy to update an application running on Amazon ECS with the Fargate launch type. After the new (green) task set is created and traffic is shifted to it, users immediately report errors when trying to write data. The developer discovers that the green task set is connecting to a different database than the blue task set. The database endpoints are configured in the ECS task definition. What is the simplest way to prevent this issue in future deployments?

A

Modify the blue/green deployment configuration to use the same database endpoint for both task sets by updating the environment variables in the task definition before deployment.

Environment variables in the task definition can be changed without modifying the container image. Set the database endpoint to the same value for both blue and green task sets. This is the simplest solution.

B

Create two separate Amazon RDS databases and use an Amazon Route 53 weighted routing policy to distribute traffic.

C

Use an Application Load Balancer (ALB) with stickiness to route each user to the correct task set.

D

Use AWS CloudFormation to create a new database stack for each deployment and update the task definition dynamically.

Why: The simplest way to ensure both blue and green task sets connect to the same database is to define the database endpoint as an environment variable in the task definition and set it to the same value for both deployments. Because environment variables can be changed without rebuilding the container image, both task sets stay pointed at one database across deployments. Creating separate databases with weighted routing (B), ALB stickiness (C), or a per-deployment database stack (D) all add complexity without guaranteeing that both task sets write to the same data store.

A developer is using AWS CodeDeploy with a blue/green deployment on an Amazon ECS service running on Fargate. The developer wants to ensure that the new (green) task set is fully healthy and serving traffic before the old (blue) task set is terminated. The deployment should automatically roll back to the blue task set if the green task set fails health checks. Which configuration should the developer set in the CodeDeploy deployment group?

A

Deployment type: blue/green, with rollback configuration enabled to trigger automatic rollback and reroute traffic to the original task set

Correct. This configuration ensures that if the new task set fails, CodeDeploy rolls back to the previous version.

B

Deployment type: blue/green, Deployment configuration: CodeDeployDefault.ECSAllAtOnce

C

Deployment type: blue/green, Deployment configuration: CodeDeployDefault.ECSLinear10PercentEvery1Minutes

D

Deployment type: blue/green, with an Application Load Balancer

Why: To achieve automatic rollback on health check failures, the deployment group must be configured with a rollback configuration that triggers a deployment rollback and reroutes traffic to the original (blue) task set. The deployment configuration (e.g., AllAtOnce or Linear) controls the traffic shifting pattern, but without rollback, a failed green will not automatically revert. Using an ALB is standard but does not by itself provide rollback behavior.
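A sketch of the deployment group settings described above, as the parameter shapes boto3's CodeDeploy client uses; values are illustrative (with boto3 these keys would be passed to `codedeploy.update_deployment_group`):

```python
# Rollback plus blue/green settings for an ECS deployment group.
deployment_group_config = {
    "autoRollbackConfiguration": {
        "enabled": True,
        # Roll back (reroute to blue) when the deployment fails, e.g. the
        # green task set does not pass health checks.
        "events": ["DEPLOYMENT_FAILURE"],
    },
    "blueGreenDeploymentConfiguration": {
        "terminateBlueInstancesOnDeploymentSuccess": {
            "action": "TERMINATE",
            "terminationWaitTimeInMinutes": 5,  # keep blue around briefly
        },
        "deploymentReadyOption": {"actionOnTimeout": "CONTINUE_DEPLOYMENT"},
    },
}
```

The termination wait time gives a window in which traffic can still be rerouted to the old task set even after a successful shift.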

A developer is deploying a serverless application using the AWS Serverless Application Model (SAM). The application includes an Amazon API Gateway HTTP API and several AWS Lambda functions. The developer wants to implement a canary deployment for the API Gateway stage so that 10% of traffic is shifted to the new version for 30 minutes before the remaining 90% is shifted. Which SAM resource attribute should the developer configure on the API Gateway resource?

A

AutoPublishAlias

B

DeploymentPreference

DeploymentPreference with a Canary setting enables gradual traffic shifting for the API Gateway stage.

C

ProvisionedConcurrencyConfig

D

EventInvokeConfig

Why: In AWS SAM, the DeploymentPreference attribute on the AWS::Serverless::Api resource allows configuring canary or linear traffic shifting for API Gateway, with the CanarySetting property specifying the percentage and interval. AutoPublishAlias is for Lambda function versioning, not API stages. ProvisionedConcurrencyConfig is for Lambda concurrency, and EventInvokeConfig is for asynchronous invocation behavior.

A developer is deploying a multi-container Docker application on Amazon ECS using the Fargate launch type. The application consists of a web server and a background worker. The web server must be scaled independently and must be accessible from the internet via an Application Load Balancer. The worker should not be accessible from the internet. Which ECS configuration should the developer use?

A

Create one ECS service with both containers in the same task definition, but only expose the web server port.

B

Create two separate ECS services, each with its own task definition, and place the web server in a public subnet with the worker in a private subnet.

Correct. Separate services allow independent scaling. Placing the web server in a public subnet with an ALB provides internet access, while the worker in a private subnet remains isolated.

C

Create one ECS service with two tasks, each containing one container.

D

Create one ECS service with two containers in the same task, and use a service discovery to expose the worker.

Why: To scale independently and control internet access, the best practice is to create separate ECS services with their own task definitions. The web server service should be placed in a public subnet with an ALB, while the worker service should be placed in a private subnet without public access. This provides independent scaling and security.

A developer is using AWS CodeDeploy to deploy an application to an EC2 Auto Scaling group. The application must remain fully available; only one instance should be taken offline at a time. The developer wants to configure the deployment to update instances one by one, ensuring that the deployment fails fast if any instance fails to deploy. Which deployment configuration should the developer choose?

A

CodeDeployDefault.AllAtOnce

B

CodeDeployDefault.HalfAtATime

C

CodeDeployDefault.OneAtATime

This deploys to one instance at a time, minimizing impact and providing fast failure detection.

D

CodeDeployDefault.BlueGreen

Why: CodeDeployDefault.OneAtATime instructs CodeDeploy to deploy to one instance at a time. This minimizes the number of unavailable instances and allows early detection of failures. If an instance fails, the deployment stops immediately (fail fast). CodeDeployDefault.HalfAtATime would take half the instances offline simultaneously, which could affect availability. CodeDeployDefault.AllAtOnce would take all instances offline at once, causing downtime. Blue/green is a different deployment type that creates a new environment, not a config for in-place updates.

A developer is deploying an application to Amazon ECS using AWS CodeDeploy with a blue/green deployment strategy. After the new task set is created, it fails health checks. The developer wants to immediately route traffic back to the original task set without waiting for CodeDeploy to complete the rollback process. Which action should the developer take?

A

Update the ECS service to set the desired count of the new task set to zero.

B

Use the CodeDeploy console to stop the deployment and then choose to reroute traffic.

Correct. CodeDeploy allows you to stop the deployment and reroute traffic to the original task set.

C

Delete the new task set.

D

Update the Application Load Balancer listener rule to forward traffic to the original target group.

Why: CodeDeploy provides a 'Reroute traffic to original' option when a deployment is in progress or has failed. This immediately redirects traffic back to the original task set, effectively performing a manual rollback. This is the recommended way to revert traffic while maintaining CodeDeploy's deployment state and lifecycle hooks.


Domain 4: Troubleshooting and Optimization


A developer deployed a new version of an AWS Lambda function that is part of a serverless application. The function uses an Amazon DynamoDB table as a data store. After deployment, the developer notices that the function's latency has increased significantly for some requests. CloudWatch traces show that the increase is due to DynamoDB throttle events. The function is configured with a reserved concurrency of 100 and the DynamoDB table has 5 read capacity units (RCUs) and 5 write capacity units (WCUs). What is the most effective way to reduce the throttling while maintaining application performance?

A

Decrease the reserved concurrency of the Lambda function to 10

B

Increase the read and write capacity units on the DynamoDB table

Increasing RCU and WCU directly increases the number of operations the table can handle, reducing throttling.

C

Enable DynamoDB Accelerator (DAX) for caching reads

D

Enable auto scaling on the DynamoDB table

Why: The function's reserved concurrency of 100 allows many concurrent invocations, each reading and writing the table, which far exceeds the low provisioned capacity of 5 RCUs and 5 WCUs. Increasing the table's read and write capacity (B) directly addresses the throttling by allowing more operations per second. Decreasing reserved concurrency (A) would reduce the load on the table but also reduce throughput and queue requests. DAX (C) speeds up repeated reads but does not help write throttling. Auto scaling (D) helps over time, but it takes minutes to react, and starting from only 5 units it may not scale fast enough for the current load.

A developer is running an AWS Lambda function that is triggered by Amazon S3 events. The function writes processed data to an Amazon DynamoDB table. Over time, the function's execution time has increased significantly. CloudWatch Logs show many DynamoDBProvisionedThroughputExceededException errors. The table is configured with 5 read capacity units (RCUs) and 5 write capacity units (WCUs). The function performs both reads and writes. Which optimization will MOST effectively reduce throttling errors while maintaining performance?

A

Increase the RCUs and WCUs of the table to 50 each

B

Switch the DynamoDB table to on-demand capacity mode

On-demand mode automatically scales read and write capacity based on traffic. This eliminates throttling caused by insufficient provisioned capacity and requires no capacity planning.

C

Implement a DynamoDB Accelerator (DAX) cluster for caching reads

D

Increase Lambda function memory to 1024 MB

Why: DynamoDB throttling occurs when request capacity exceeds the provisioned throughput. Increasing capacity units or switching to on-demand mode will reduce throttling. Additionally, adding a retry mechanism with exponential backoff in the Lambda function can handle occasional throttling gracefully. Among the options, switching to on-demand capacity is the most effective as it automatically scales to meet demand.
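The switch to on-demand is a one-parameter table update. A sketch with a placeholder table name (with boto3: `dynamodb.update_table(**update_table_kwargs)`):

```python
# Switch the table from provisioned capacity to on-demand billing.
update_table_kwargs = {
    "TableName": "orders",
    "BillingMode": "PAY_PER_REQUEST",  # on-demand capacity mode
}
```

Once in on-demand mode the table no longer has RCU/WCU settings to manage, which is why this option removes the capacity-planning burden entirely.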

A web application runs on Amazon EC2 instances behind an Application Load Balancer (ALB). During peak hours, users report receiving HTTP 503 (Service Unavailable) errors. The developer checks Amazon CloudWatch metrics and finds that the ALB's request count is high but below the limit, and the target group's healthy host count drops to zero intermittently. The Auto Scaling group for the instances is configured with a minimum of 2, maximum of 10, and a simple scaling policy to add 2 instances when CPU utilization exceeds 70% for 5 consecutive minutes. What is the most likely cause of the 503 errors?

A

The Auto Scaling group's cooldown period prevents new instances from being added quickly enough during rapid traffic spikes

After a scaling activity, the cooldown period (300s by default) pauses further scaling, causing delays that can result in all instances becoming unhealthy and returning 503 errors.

B

The ALB's idle timeout is set too low, causing dropped connections

C

The Auto Scaling group's maximum capacity of 10 is insufficient

D

The health check grace period is preventing instances from being marked healthy

Why: A simple scaling policy enforces a cooldown period (300 seconds by default) after each scaling activity. When traffic spikes rapidly, CPU must stay above 70% for 5 consecutive minutes before 2 instances are added, and the cooldown then blocks further scaling. If the spike continues, the new instances are quickly overwhelmed, instances fail ALB health checks, and the healthy host count intermittently drops to zero, producing 503 errors. A target tracking scaling policy reacts faster and is the recommended alternative. The maximum of 10 instances (C) is unlikely to be the constraint, a low idle timeout (B) would produce dropped connections rather than 503s, and the health check grace period (D) affects only newly launched instances, not the intermittent drops tied to slow scaling.
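The target tracking policy recommended in the explanation can be sketched as the parameters for boto3's `autoscaling.put_scaling_policy`; the group name, policy name, and target value are placeholders:

```python
# Target tracking policy: keep average CPU near 60%, no cooldown tuning needed.
policy_kwargs = {
    "AutoScalingGroupName": "web-asg",
    "PolicyName": "cpu-target-tracking",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,  # scale out/in to hold average CPU near this value
    },
}
```

Unlike a simple policy, target tracking scales proportionally to how far the metric is from the target, so a sharp spike triggers a correspondingly large scale-out.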

A developer is troubleshooting an AWS Lambda function that processes large CSV files (up to 1 GB) uploaded to an Amazon S3 bucket. The function uses Python and the pandas library to perform data transformations. Recently, the function started timing out on large files. CloudWatch Logs show that the function's execution time is close to the 15-minute Lambda timeout, and memory utilization peaks at around 80% of the configured 3,008 MB. The function has not been modified in months. Which action will most likely resolve the timeout issue without requiring code changes?

A

Increase the memory allocation of the Lambda function to the maximum available (10,240 MB)

More memory provides more CPU, speeding up the CPU-intensive pandas processing and reducing execution time below the timeout.

B

Increase the function timeout to the maximum allowed (900 seconds is already the max)

C

Use S3 Select to filter columns and rows before invoking the Lambda function

D

Increase the batch size of the S3 event notification to invoke the function with multiple files

Why: Lambda allocates CPU in proportion to configured memory, so for a CPU-bound task like pandas processing, increasing memory reduces execution time significantly even though the function is not memory-constrained (only about 80% used). The timeout is already at the maximum of 15 minutes (900 seconds), so option B changes nothing. S3 Select (C) would reduce the data read, but calling the S3 Select API requires code changes. Option D is not applicable: S3 event notifications do not batch files, and the function processes one file per invocation.
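The fix itself is a single configuration change. A sketch with a placeholder function name (with boto3: `lambda_client.update_function_configuration(**update_kwargs)`):

```python
# Raise memory to the maximum; CPU allocation scales with memory, so a
# CPU-bound pandas workload finishes faster at the same code.
update_kwargs = {
    "FunctionName": "csv-transformer",
    "MemorySize": 10240,  # MB; the Lambda maximum
}
```

Since billing is memory × duration, a faster run at higher memory can cost about the same as (or less than) a slow run at lower memory, so this change is often cost-neutral for CPU-bound work.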

A developer is troubleshooting an AWS Lambda function that processes records from an Amazon Kinesis Data Stream. The function is configured with a batch size of 100 and a parallelization factor of 1. The developer notices that the iterator age is increasing, indicating that the function is not keeping up with the stream. CloudWatch Logs show that the function is not experiencing errors or throttling, but the execution time per invocation is close to the 5-minute timeout. The stream has 10 shards. Which action will most likely increase processing throughput?

A

Increase the batch size to 500.

B

Increase the parallelization factor to 10.

C

Increase the Lambda function memory and CPU allocation.

Increasing memory increases CPU allocation proportionally, which can make each invocation faster. This reduces the per-batch processing time, allowing the function to keep up with the stream and decrease the iterator age.

D

Split the stream into more shards.

Why: The bottleneck is per-invocation execution time: invocations run close to the 5-minute timeout, so each shard's batches drain slowly and the iterator age grows. Because Lambda allocates CPU in proportion to memory, increasing the memory setting speeds up each invocation, letting the function process more records per second. Increasing the batch size to 500 would make already-slow invocations even longer and risk timeouts. Increasing the parallelization factor to 10 would process up to 10 batches per shard concurrently, which can mask the problem but does not address the slow per-batch processing. Splitting the stream into more shards raises ingest capacity and concurrency but likewise leaves each invocation just as slow.
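The effect of a faster invocation on stream throughput can be sketched with a simple model. The 60-second post-tuning figure below is a hypothetical value for illustration, not a number from the question:

```python
# Rough Kinesis -> Lambda throughput model (an idealization):
# each shard is drained by `parallelization_factor` concurrent
# invocations, and each invocation handles `batch_size` records
# in `seconds_per_batch` seconds.

def records_per_second(shards: int,
                       parallelization_factor: int,
                       batch_size: int,
                       seconds_per_batch: float) -> float:
    return shards * parallelization_factor * batch_size / seconds_per_batch

# Current setup: 10 shards, factor 1, batch 100, ~300 s per invocation.
baseline = records_per_second(10, 1, 100, 300)   # ~3.3 records/s

# If more memory (and therefore CPU) cut per-batch time to ~60 s:
faster = records_per_second(10, 1, 100, 60)      # ~16.7 records/s

print(f"baseline: {baseline:.1f} rec/s, after tuning: {faster:.1f} rec/s")
```

The model makes the trade-off visible: raising the parallelization factor multiplies the first term, but shrinking `seconds_per_batch` attacks the denominator, which is where this question's bottleneck sits.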

A developer is troubleshooting an AWS Lambda function that is invoked from an Amazon S3 bucket via event notifications. The function processes images and stores metadata in Amazon DynamoDB. The developer notices that some images are being processed multiple times, resulting in duplicate entries in DynamoDB. The S3 event notification is configured to send events to the Lambda function with the 's3:ObjectCreated:*' event type. The function uses the 'uuid' library to generate a unique ID for each image upon processing. What is the most likely cause of the duplicate processing?

A

S3 event notifications are delivered at least once, and the Lambda function is not idempotent.

S3 can send the same event multiple times. Without idempotency checks (e.g., using the S3 object key as the DynamoDB primary key), each event creates a new item, causing duplicates.

B

The Lambda function's concurrency is set too high, causing race conditions.

C

The DynamoDB table does not have a primary key that prevents duplicates.

D

The S3 bucket is configured with versioning, causing multiple object creation events.

Why: S3 event notifications are delivered on an at-least-once basis, so the same event can occasionally be delivered more than once. If the Lambda function is not idempotent (it does not check whether the object has already been processed before writing), a redelivered event produces a duplicate item. Option B is incorrect because high concurrency does not cause duplicate events. Option C points at a symptom rather than the cause: the function generates a fresh UUID on every invocation, so even a primary key on that UUID would not prevent duplicates; the fix is to key the item on something stable, such as the S3 object key. Option D is incorrect: with versioning enabled, each uploaded version is a distinct object-creation event, but versioning does not deliver the same event twice for a single upload.

Want more Troubleshooting and Optimization practice?

Practice this domain

Frequently asked questions

How many questions are on the DVA-C02 exam?

The DVA-C02 exam has up to 65 questions and must be completed in 130 minutes. The passing score is 720/1000.

What types of questions appear on the DVA-C02 exam?

The DVA-C02 exam uses multiple-choice, multiple-select, drag-and-drop, and exhibit-based questions. Exhibit questions show CLI output, network diagrams, or routing tables and ask you to interpret them — exactly the format Courseiva uses.

How are DVA-C02 questions organised by domain?

The exam covers 4 domains: Development with AWS Services, Security, Deployment, Troubleshooting and Optimization. Questions are weighted by domain — higher-weight domains appear more on your actual exam.

Are these the actual DVA-C02 exam questions?

No. These are original exam-style practice questions written against the official Amazon Web Services DVA-C02 exam objectives. They are not copied from the real exam. Courseiva focuses on genuine understanding, not memorisation of braindumps.

Ready to practice all 65 DVA-C02 questions?

Courseiva tracks your accuracy per domain and routes you toward weak areas automatically. Free, no account required.