Microsoft · Free Practice Questions · Last reviewed May 2026

AZ-204 Exam Questions and Answers

30 exam-style practice questions organised by domain, each with the correct answer highlighted and a plain-English explanation of why it's right — and why the others are wrong.

60 exam questions
100 min time limit
Pass at 700 / 1000
5 exam domains

Domain 1: Develop Azure compute solutions

All Develop Azure compute solutions questions

You are implementing an Azure Durable Functions application that processes orders. The function must call three external APIs (payment gateway, inventory system, and shipping calculator) in parallel, then aggregate the results once all three have completed. Which Durable Functions pattern should you use?

A

Function chaining

B

Fan-out/Fan-in

Fan-out calls multiple activity functions in parallel, and fan-in waits for all to complete before aggregating results.

C

Monitor

D

Human interaction

Why: The Fan-out/Fan-in pattern is designed to execute multiple functions in parallel and then aggregate the results. Function chaining runs activities sequentially. The monitor pattern polls an external resource on a recurring schedule. Human interaction waits for a manual action.
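As a rough sketch of what this looks like in code, here is a fan-out/fan-in orchestrator using the Python Durable Functions library. The activity names and payload shape are invented for the example; `context.task_all` plays the role that `Task.WhenAll` plays in the .NET SDK.

```python
import azure.durable_functions as df

def orchestrator_function(context: df.DurableOrchestrationContext):
    order = context.get_input()

    # Fan out: start all three external calls as activities without awaiting them one by one.
    tasks = [
        context.call_activity("ChargePayment", order),       # hypothetical activity names
        context.call_activity("ReserveInventory", order),
        context.call_activity("CalculateShipping", order),
    ]

    # Fan in: resume only after every activity has completed, then aggregate.
    results = yield context.task_all(tasks)
    return {"payment": results[0], "inventory": results[1], "shipping": results[2]}

main = df.Orchestrator.create(orchestrator_function)
```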

A company uses Azure Functions with a consumption plan. The function processes messages from a queue. During peak hours, the function takes longer to execute, and some messages are processed twice. What is the most likely cause?

A

The function timeout is set too low.

B

The queue message visibility timeout is shorter than the function processing time.

Correct. If the visibility timeout expires, the message becomes visible again and can be processed by another instance, resulting in duplicates.

C

The function uses blob output binding incorrectly.

D

The function app is using a premium plan instead of consumption.

Why: When using queue-triggered functions on a consumption plan, the message visibility timeout must be longer than the expected processing time to avoid duplicate processing. The queue trigger makes a message invisible while an instance processes it; if that visibility timeout expires before processing completes, the message becomes visible again and another instance can dequeue it.
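As a minimal sketch of the underlying mechanism, using the azure-storage-queue SDK directly (the connection string, queue name, and process() helper are placeholders; a queue-triggered function manages this lease for you, but the same principle applies):

```python
from azure.storage.queue import QueueClient

queue = QueueClient.from_connection_string(
    conn_str="<storage-connection-string>", queue_name="orders"
)

# Request a visibility timeout longer than the worst-case processing time (5 minutes here).
# While the message is invisible, no other consumer can dequeue it; if processing outlasts
# the timeout, the message reappears and is processed again.
for message in queue.receive_messages(visibility_timeout=300):
    process(message.content)        # placeholder for the actual work
    queue.delete_message(message)   # delete only after processing succeeds
```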

You are deploying a Node.js application to Azure Web Apps for Containers. The application needs to read configuration settings from Azure App Configuration. What is the recommended method to securely connect the app to the configuration store?

A

Store connection string in environment variables.

B

Use Key Vault references in App Settings.

C

Use managed identity.

Correct. Managed identity provides secure authentication without secrets.

D

Hardcode the connection string.

Why: Using a managed identity allows the web app to authenticate to Azure App Configuration without storing any secrets. It is the most secure approach and eliminates credential management.
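A minimal sketch of that connection, assuming a store endpoint of https://my-app-config.azconfig.io and a key named App:Theme. The question targets Node.js, but the pattern is identical; it is shown here with the Python azure-appconfiguration SDK:

```python
from azure.identity import DefaultAzureCredential
from azure.appconfiguration import AzureAppConfigurationClient

# DefaultAzureCredential resolves to the web app's managed identity at runtime
# and to developer credentials locally; no connection string is stored anywhere.
client = AzureAppConfigurationClient(
    base_url="https://my-app-config.azconfig.io",
    credential=DefaultAzureCredential(),
)

setting = client.get_configuration_setting(key="App:Theme")
print(setting.value)
```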

You are implementing an order processing system using Azure Durable Functions. The function must send notifications to multiple channels (email, SMS, push) in parallel and wait for all to complete before sending a confirmation. Which Durable Functions feature should you utilize?

A

Orchestration trigger with fan-out/fan-in pattern

Correct. The orchestrator can call multiple activity functions in parallel using Task.WhenAll, then aggregate results before proceeding.

B

Entity trigger

C

Activity trigger with retry policy

D

Timer trigger

Why: The fan-out/fan-in pattern in Durable Functions allows an orchestrator function to call multiple activity functions in parallel and then aggregate the results. This is ideal for scenarios where independent tasks must all complete before proceeding. Other options do not provide parallel execution with aggregation: Entity triggers are for stateful objects, Activity triggers with retry handle failures but not parallelism, and Timer triggers are for delays.

You are deploying a sensitive configuration to Azure Container Instances. The configuration must be encrypted at rest and not visible in the container logs. What should you use?

A

Environment variables in the container group

B

Azure Key Vault with managed identity and secret volumes

Correct. This approach ensures secrets are encrypted in Key Vault, mounted as volumes, and not exposed in logs.

C

Azure Files volume mounted into the container

D

ConfigMap in a Kubernetes cluster

Why: Azure Key Vault keeps the secrets encrypted at rest, and the container group can use its managed identity to retrieve them; mounting the values as a secret volume keeps them out of container logs and environment variables. Environment variables, while convenient, are visible in the container group definition and can leak into diagnostic output. An Azure Files share is a general-purpose file share secured with a storage account key, not a secrets store. ConfigMap is a Kubernetes concept that does not apply to ACI.
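Inside the container, a mounted secret simply appears as a file, so reading it is trivial. The mount path and secret name below are illustrative and depend on how the container group's volume is defined:

```python
from pathlib import Path

# Each secret in the mounted secret volume is exposed as a file named after the secret.
db_password = Path("/mnt/secrets/db-password").read_text().strip()
```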

A company deploys a web application to Azure App Service. They want to deploy a new version of the app with zero downtime and the ability to quickly roll back if needed. Which deployment feature should they use?

A

Auto-scaling

B

Deployment slots

Deployment slots enable staging, warm-up, and swapping with immediate rollback, providing zero-downtime deployments.

C

Traffic Manager

D

Application Insights

Why: Deployment slots in Azure App Service allow you to deploy a new version to a staging slot, swap it with the production slot for zero downtime, and swap back quickly to roll back. Auto-scaling handles capacity, Traffic Manager routes traffic across endpoints, and Application Insights provides monitoring; none of these addresses zero-downtime deployment and rollback.

Want more Develop Azure compute solutions practice?

Practice this domain

Domain 2: Develop for Azure storage

All Develop for Azure storage questions

A company stores archival data in Azure Blob Storage. The data is accessed only a few times per year, and retrieval can take up to 15 hours. Which blob access tier minimizes storage costs while meeting these requirements?

A

Hot tier

B

Cool tier

C

Archive tier

Archive tier offers the lowest storage cost, and rehydration can take up to 15 hours at standard priority, which fits the scenario.

D

Premium tier

Why: The Archive tier is the most cost-effective for long-term storage with infrequent access, and allows retrieval within up to 15 hours. Hot is for frequent access, Cool for moderate access with lower cost, and Premium for low-latency scenarios.
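For reference, moving an existing blob to the Archive tier is a one-line call with the azure-storage-blob SDK (the connection string and blob names are placeholders):

```python
from azure.storage.blob import BlobClient

blob = BlobClient.from_connection_string(
    conn_str="<storage-connection-string>",
    container_name="archives",
    blob_name="2025-yearend-backup.tar.gz",
)

# The blob stays in Archive at the lowest storage cost until it is rehydrated
# (rehydration to Hot or Cool can take up to 15 hours at standard priority).
blob.set_standard_blob_tier("Archive")
```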

You are building a serverless application that needs to react to insertions and updates in an Azure Cosmos DB container. You want to process these changes using an Azure Function. Which trigger should you configure for the function?

A

Cosmos DB trigger

The Cosmos DB trigger uses the change feed to respond to inserts and updates in the container.

B

Blob trigger

C

Event Grid trigger

D

Service Bus trigger

Why: The Cosmos DB trigger binds to the change feed of a Cosmos DB container, allowing a function to react automatically to document changes. Blob trigger reacts to blob events, Event Grid for event notifications, and Service Bus for message processing.

You are developing an application that writes logs to Azure Blob Storage. Each log entry is small (less than 1 KB) and you need to store millions of entries per day. You want to minimize storage costs and maximize write throughput. Which blob type should you use?

A

Block blobs with a high block size.

B

Append blobs.

Correct. Append blobs are designed for frequent append operations and are ideal for logging.

C

Page blobs.

D

Block blobs with a low block size.

Why: Append blobs are optimized for append operations and are cost-effective for logging scenarios because they allow efficient appending of small blocks. Using block blobs with small writes would incur overhead and higher costs.
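A minimal sketch of the append-blob write path with the azure-storage-blob SDK (the names are placeholders):

```python
from azure.storage.blob import BlobClient

blob = BlobClient.from_connection_string(
    conn_str="<storage-connection-string>",
    container_name="logs",
    blob_name="app-2025-06-01.log",
)

# Create the append blob once, then keep appending small entries to the end of it.
if not blob.exists():
    blob.create_append_blob()

blob.append_block(b'{"level":"info","msg":"order received"}\n')
```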

You need to upload large files (up to 100 GB) to Azure Blob Storage from a web application. The upload must be resilient to network failures and support pausing/resuming. Which approach should you use?

A

Upload the blob as a single PUT operation.

B

Use block blob with multiple blocks and parallel upload.

Correct. Block blobs support chunked upload with retry and resume capability.

C

Use append blob.

D

Use AzCopy from the server.

Why: Using block blobs with multiple blocks and parallel upload (via the Azure Storage SDK) allows you to upload large files in chunks, enabling retry of individual blocks and resumable uploads.
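A sketch of the block-by-block upload with the azure-storage-blob SDK; a real implementation would persist the staged block IDs so an interrupted upload can resume where it left off (file and blob names are placeholders):

```python
import uuid
from azure.storage.blob import BlobClient, BlobBlock

blob = BlobClient.from_connection_string(
    conn_str="<storage-connection-string>",
    container_name="uploads",
    blob_name="dataset.bin",
)

block_ids = []
with open("dataset.bin", "rb") as source:
    while chunk := source.read(4 * 1024 * 1024):            # 4 MiB per block
        block_id = uuid.uuid4().hex
        blob.stage_block(block_id=block_id, data=chunk)     # each block can be retried on its own
        block_ids.append(BlobBlock(block_id=block_id))

# Committing the block list assembles the staged blocks into the final blob;
# nothing is visible (and the upload can be resumed) until this call succeeds.
blob.commit_block_list(block_ids)
```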

You need to store millions of small log entries (each <1 KB) per day from an IoT device. The logs are rarely read. Which storage solution is most cost-effective?

A

Azure Blob Storage Block Blob

Correct. Block blobs support high-volume storage with low-cost tiers (Cool/Archive) and can handle billions of objects.

B

Azure SQL Database

C

Azure Table Storage

D

Azure Files

Why: Azure Blob Storage block blobs are ideal for large volumes of unstructured data. By using the Cool or Archive access tier, costs are minimal for infrequently accessed data. Azure SQL Database has high per-GB storage costs and overhead. Table Storage is also cost-effective but has a 1 MB per-entity limit and is better suited to NoSQL key-value data. Azure Files is for SMB/NFS file shares and is not optimized for massive volumes of tiny log entries. Block blobs are the best choice.

You are developing an application that writes log entries to Azure Blob Storage. Each log entry is approximately 500 bytes, and you expect to generate millions of entries per day. The logs are rarely read, and when they are read, you need to retrieve ranges of logs sequentially. Which blob type should you use to minimize storage costs and maximize write throughput?

A

Block blobs

B

Append blobs

Append blobs are specifically designed for append operations, providing high write throughput and low cost per write. They are ideal for streaming log data where new entries are continuously added.

C

Page blobs

D

Azure Files shares

Why: Append blobs are optimized for append operations, making them ideal for logging scenarios where data is continuously added. They are more cost-effective and provide higher write throughput for small appends compared to block blobs, which are better for large, random-access data. Page blobs are designed for random read/write operations (e.g., VHDs).

Want more Develop for Azure storage practice?

Practice this domain

Domain 3: Implement Azure security

All Implement Azure security questions

You have multiple Azure virtual machines that need to access the same Azure Key Vault to retrieve certificates. You want to minimize administrative overhead while ensuring each VM can authenticate without managing credentials. Which identity type should you use?

A

System-assigned managed identity on each VM

B

User-assigned managed identity assigned to each VM

A single user-assigned identity can be assigned to all VMs. You grant Key Vault access once, reducing overhead.

C

Service principal with client secret stored in each VM

D

Storage account key

Why: User-assigned managed identities can be shared across multiple Azure resources. You create one identity, assign it to all VMs, and grant that identity access to Key Vault once. System-assigned identities are per-resource and would require granting access for each VM individually, increasing overhead.
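From application code on any of the VMs, the only thing that identifies the shared identity is its client ID, so the same snippet works unchanged everywhere (the vault URL, certificate name, and client ID are placeholders):

```python
from azure.identity import ManagedIdentityCredential
from azure.keyvault.certificates import CertificateClient

# The client ID of the single user-assigned identity attached to every VM.
credential = ManagedIdentityCredential(client_id="<user-assigned-identity-client-id>")

client = CertificateClient(
    vault_url="https://my-vault.vault.azure.net", credential=credential
)
cert = client.get_certificate("tls-cert")
```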

A developer accidentally deleted a secret from Azure Key Vault. Soft-delete is enabled with a retention period of 90 days. After 60 days, you attempt to recover the secret. What should you do?

A

Run the Azure CLI command: az keyvault secret recover

This command restores the secret while within the soft-delete retention window (60 days out of 90).

B

Enable purge protection on the Key Vault first, then recover the secret.

C

Recover is not possible because the retention period of 90 days has not elapsed.

D

Run the Azure CLI command: az keyvault secret undelete

Why: Soft-delete retains the secret for the configured retention period. Within that window you can recover it using 'az keyvault secret recover' or the equivalent action in the Azure portal or PowerShell. Purge protection is not required for recovery. Only 60 of the 90 retention days have elapsed, so the secret is still recoverable.
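The same recovery is available through the SDK if you prefer code over the CLI; a sketch with azure-keyvault-secrets (the vault URL and secret name are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

client = SecretClient(
    vault_url="https://my-vault.vault.azure.net",
    credential=DefaultAzureCredential(),
)

# Works only while the secret is still within the soft-delete retention window.
poller = client.begin_recover_deleted_secret("database-password")
recovered = poller.result()
print(recovered.name)
```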

A company stores sensitive data in an Azure Storage account. They need to restrict access based on the client's IP address and require that clients use a valid SAS token. Which mechanism should they use?

A

Microsoft Entra ID authentication.

B

Shared Key.

C

SAS token with IP ACL.

Correct. A SAS token can specify an allowed IP address range.

D

Firewall and virtual networks.

Why: Shared Access Signatures (SAS) can include an IP address range restriction, allowing the token to limit access to specific IPs. This combines authentication and IP filtering in a single token.
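A sketch of generating such a token with the azure-storage-blob SDK; the account, container, blob, and IP range are placeholders:

```python
from datetime import datetime, timedelta, timezone
from azure.storage.blob import generate_blob_sas, BlobSasPermissions

sas = generate_blob_sas(
    account_name="mystorageacct",
    container_name="reports",
    blob_name="q1.pdf",
    account_key="<account-key>",
    permission=BlobSasPermissions(read=True),
    expiry=datetime.now(timezone.utc) + timedelta(hours=1),
    ip="203.0.113.0-203.0.113.255",   # requests from outside this range are rejected
)

url = f"https://mystorageacct.blob.core.windows.net/reports/q1.pdf?{sas}"
```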

You are developing an application that stores user secrets. You need to ensure that the secrets are encrypted at rest and rotated automatically. Which Azure service should you integrate?

A

Azure Storage.

B

Azure Key Vault.

Correct. Key Vault is designed for secret management with encryption and rotation capabilities.

C

Azure Security Center.

D

Microsoft Entra ID.

Why: Azure Key Vault provides encryption at rest for secrets, keys, and certificates, supports automated certificate renewal and key rotation policies, and emits near-expiry events that can drive automated secret rotation.

You have an Azure Function app that needs to retrieve a secret from Azure Key Vault at runtime. You want to avoid storing any credentials in code or configuration. Which mechanism should you use?

A

Service principal with client secret

B

Managed identity

Correct. Managed identity allows the Function app to authenticate to Azure Key Vault without any stored credentials.

C

Access key

D

Shared access signature (SAS)

Why: Managed Identity provides an automatically managed identity in Microsoft Entra ID for the Azure resource. You can assign a system-assigned managed identity to the Function app, grant it access to Key Vault via access policies, and then retrieve secrets without storing any credentials. Other options require storing secrets or keys: a service principal with client secret needs the secret stored; access keys and SAS tokens must be stored in configuration.
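A minimal sketch of that retrieval from inside the Function app (the vault URL and secret name are placeholders; DefaultAzureCredential resolves to the managed identity at runtime):

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# No connection string, client secret, or key appears anywhere in code or settings.
client = SecretClient(
    vault_url="https://my-vault.vault.azure.net",
    credential=DefaultAzureCredential(),
)
api_key = client.get_secret("third-party-api-key").value
```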

A developer deleted a secret from Azure Key Vault with soft-delete and purge protection enabled (retention 90 days). After 50 days, the secret is needed again. What is the correct recovery method?

A

Purge the secret and then restore from a backup

B

Recover the secret using Azure CLI 'az keyvault secret recover'

Correct. Soft-delete allows recovery within the retention period using the recover command.

C

Recreate the secret with the same name

D

Use an Azure Resource Manager template to undelete the secret

Why: Soft-delete allows recovery of deleted secrets within the retention period (90 days). You can recover the secret using Azure CLI `az keyvault secret recover` or the equivalent PowerShell/REST API. Purging is permanent and should be avoided. Recreating the secret with the same name is not possible until the soft-deleted secret is recovered or the retention period expires. ARM templates cannot undelete soft-deleted secrets.

Want more Implement Azure security practice?

Practice this domain

Domain 4: Monitor, troubleshoot, and optimize Azure solutions

All Monitor, troubleshoot, and optimize Azure solutions questions

An e-commerce application emits a high volume of telemetry data to Azure Application Insights. You need to reduce the cost of data ingestion while preserving statistical accuracy for performance metrics. Which sampling technique should you use?

A

Adaptive sampling

Adaptive sampling dynamically tunes the sampling rate to keep data volume manageable while preserving statistical validity.

B

Fixed-rate sampling with a 1% rate

C

Ingestion sampling

D

Head-based sampling

Why: Adaptive sampling automatically adjusts the sampling rate based on telemetry volume, reducing ingestion cost while preserving statistical accuracy and keeping related telemetry items together. Fixed-rate sampling at 1% keeps the same small fraction regardless of load, which hurts accuracy during quieter periods. Ingestion sampling discards telemetry only after it reaches the service endpoint and cannot coordinate with the SDK to preserve related items. Head-based sampling is a general distributed-tracing concept, not an Application Insights sampling mode you configure.

You need to monitor the real-time CPU utilization of an Azure virtual machine. Which Azure Monitor feature is designed for this purpose?

A

Metrics

Metrics provide real-time numerical values such as CPU usage, ideal for monitoring performance.

B

Logs

C

Alerts

D

Workbooks

Why: Azure Monitor Metrics collects numerical time-series data (like CPU percentage) at near real-time intervals. Logs are for query-based analysis of events. Alerts notify on conditions but do not display the data in real-time. Workbooks are for visual reports combining multiple data sources.
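A sketch of pulling that metric programmatically with the azure-monitor-query SDK (the subscription, resource group, and VM name in the resource ID are placeholders):

```python
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

client = MetricsQueryClient(DefaultAzureCredential())

resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/rg-demo"
    "/providers/Microsoft.Compute/virtualMachines/vm-web-01"
)

# "Percentage CPU" is the platform metric behind the VM CPU chart in Metrics Explorer.
response = client.query_resource(
    resource_id,
    metric_names=["Percentage CPU"],
    timespan=timedelta(minutes=30),
    granularity=timedelta(minutes=1),
    aggregations=[MetricAggregationType.AVERAGE],
)

for metric in response.metrics:
    for point in metric.timeseries[0].data:
        print(point.timestamp, point.average)
```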

You have an Azure App Service web app that experiences intermittent slowness. You enable Application Insights and notice that the "Failed Requests" metric is low, but "Server Response Time" is high for a subset of requests. You want to identify the specific code path causing the delay. Which feature should you use?

A

Live Metrics.

B

Snapshot Debugger.

C

Profiler.

Correct. Profiler traces requests and identifies slow code paths.

D

Availability tests.

Why: Application Insights Profiler captures execution traces for slow requests, showing where time is spent along the code path. It is designed to diagnose performance bottlenecks without requiring code changes.

An Azure Function processes events from Event Hubs. You need to monitor the number of events that were successfully processed and those that were dropped due to processing errors. Which approach should you use?

A

Custom metrics in Application Insights.

Correct. You can use the Application Insights SDK to log custom events or metrics for processed and dropped events.

B

Event Hubs metrics.

C

Stream Analytics job.

D

Log Analytics query on function logs.

Why: You can log custom metrics or events from the function code to track successes and failures, then visualize them in Application Insights or Azure Monitor. This gives granular visibility.
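One way to emit those counters from Python function code is the azure-monitor-opentelemetry distro; the metric names, connection string, and process() helper here are placeholders, and the classic Application Insights SDKs offer equivalent track-event/track-metric calls:

```python
from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry import metrics

# Configure once at startup; telemetry flows to the linked Application Insights resource.
configure_azure_monitor(connection_string="InstrumentationKey=<key>")

meter = metrics.get_meter("order-processor")
processed_counter = meter.create_counter("events_processed")
dropped_counter = meter.create_counter("events_dropped")

def handle_event(event):
    try:
        process(event)              # placeholder for the real processing logic
        processed_counter.add(1)
    except Exception:
        dropped_counter.add(1)
        raise
```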

Your e-commerce application sends telemetry to Application Insights. You need to reduce ingestion costs while preserving the ability to detect trends in performance metrics. Which sampling type should you configure?

A

Fixed-rate sampling

B

Adaptive sampling

Correct. Adaptive sampling dynamically adjusts to keep the telemetry volume within a budget while preserving statistical accuracy for trends.

C

Ingestion sampling

D

Head-based sampling

Why: Adaptive sampling automatically adjusts the sampling rate based on the total volume of telemetry, keeping it near a default target (about five telemetry items per second per host). It preserves the relative frequency of different event types, so performance trends remain visible. Fixed-rate sampling uses a constant rate and cannot react to peaks. Ingestion sampling happens at the service endpoint and can lose related context; head-based sampling is a distributed-tracing term, not an Application Insights sampling mode.

You need to monitor the CPU utilization of an Azure VM in real-time and set up an alert when it exceeds 90%. Which Azure Monitor feature should you use?

A

Log Analytics Workspace

B

Metrics Explorer

Correct. Metrics Explorer provides near real-time platform metrics and supports creating metric alerts.

C

Application Insights

D

Azure Monitor for VMs

Why: Metrics Explorer in Azure Monitor allows you to view and analyze platform metrics like CPU percentage from Azure VMs. You can create a metric alert rule to notify when the CPU exceeds a threshold. Log Analytics Workspace is for querying log data, not real-time metrics. Application Insights is for application-level telemetry. Azure Monitor for VMs provides health and performance views but is not the primary tool for metric alerts; Metrics Explorer is the direct feature.

Want more Monitor, troubleshoot, and optimize Azure solutions practice?

Practice this domain

Domain 5: Connect to and consume Azure services and third-party services

All Connect to and consume Azure services and third-party services questions

A retail system uses Azure Service Bus to process orders. Each order has multiple messages (e.g., payment, shipping, confirmation) that must be processed in sequence. You need to guarantee that all messages belonging to the same order are handled by the same consumer in order. Which Service Bus feature should you use?

A

Sessions

Sessions ensure FIFO ordering and guarantee that messages with the same session ID are processed by a single consumer.

B

Scheduled messages

C

Dead-letter queue

D

Auto-forwarding

Why: Service Bus sessions enable grouping of messages into a logical sequence. Messages within a session are delivered in order and processed by a single consumer. Scheduled messages are for delayed delivery. Dead-letter queue stores undeliverable messages. Auto-forwarding chains queues/topics together.
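A sketch of both sides with the azure-servicebus SDK; the connection string, queue name, and order ID are placeholders, and the queue must be created with sessions enabled:

```python
from azure.servicebus import ServiceBusClient, ServiceBusMessage

client = ServiceBusClient.from_connection_string("<service-bus-connection-string>")

# Sender: every message for the same order carries the same session_id.
with client.get_queue_sender("orders") as sender:
    for step in ("payment", "shipping", "confirmation"):
        sender.send_messages(ServiceBusMessage(step, session_id="order-1042"))

# Receiver: locking the session means one consumer handles the order's messages, in FIFO order.
with client.get_queue_receiver("orders", session_id="order-1042") as receiver:
    for message in receiver.receive_messages(max_message_count=10, max_wait_time=5):
        print(str(message))
        receiver.complete_message(message)
```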

You manage an API in Azure API Management. You need to cache API responses such that different responses are returned based on the product subscription key used by the caller. Which set of policies should you implement?

A

Set a 'cache-lookup' policy in the inbound section and a 'cache-store' policy in the outbound section, using the subscription key as a cache vary-by parameter.

This is the correct pattern: lookup cache on request, store on response, varying by subscription key.

B

Set a 'cache-store' policy in the inbound section and a 'cache-lookup' policy in the outbound section.

C

Set both 'cache-lookup' and 'cache-store' policies in the inbound section.

D

Set only a 'cache-store' policy in the backend section.

Why: To cache responses per subscription, you use an inbound 'cache-lookup' policy (varied by the subscription) to check the cache, and an outbound 'cache-store' policy to store the response after backend processing. The inbound section is where the cache is consulted; the outbound section is where the response is written to it. Reversing the two, placing both in the inbound section, or storing only in the backend section would not cache and serve responses correctly.

A company uses Azure Logic Apps to integrate with a third-party REST API. The API has a rate limit of 100 requests per minute. You need to ensure that the Logic App respects this limit. Which connector feature should you configure?

A

Retry policy.

B

Concurrency control.

Correct. Concurrency control limits the number of in-flight requests, helping to stay within rate limits.

C

Swagger connector.

D

API Management.

Why: Concurrency control in Logic Apps allows you to limit the number of concurrent calls, which can be used to throttle requests to stay within the API's rate limit.

You are building an API that needs to send notifications to multiple subscribers. Each subscriber has a different callback URL, and you need to ensure each notification is delivered reliably to every subscriber and retried on failure. Which Azure service should you use?

A

Azure Event Grid.

Correct. Event Grid fans events out to multiple subscribers with built-in retries and dead-lettering for deliveries that keep failing.

B

Azure Service Bus.

C

Azure Notification Hubs.

D

Azure Queue Storage.

Why: Azure Event Grid is a fully managed event routing service that provides reliable, at-least-once delivery with automatic retries and dead-lettering for endpoints that remain unreachable. It can fan out events to multiple webhook subscribers without any custom delivery code.
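For context, publishing to a custom Event Grid topic looks like this with the azure-eventgrid SDK (the topic endpoint, key, and event fields are placeholders); delivery to each subscriber, retries, and dead-lettering are then handled by the service:

```python
from azure.core.credentials import AzureKeyCredential
from azure.eventgrid import EventGridPublisherClient, EventGridEvent

client = EventGridPublisherClient(
    "https://my-topic.westeurope-1.eventgrid.azure.net/api/events",
    AzureKeyCredential("<topic-access-key>"),
)

# Event Grid fans this single event out to every registered subscriber endpoint.
client.send(
    EventGridEvent(
        event_type="Orders.OrderShipped",
        subject="orders/1042",
        data={"orderId": 1042, "carrier": "contoso-express"},
        data_version="1.0",
    )
)
```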

You manage an API in Azure API Management. The API response varies depending on the caller's subscription key. You need to cache responses per subscription key to reduce backend load. Which policy configuration should you use?

A

Set cache key to include the subscription key

Correct. Configuring the cache-lookup policy to vary the cache entry by subscription (for example, vary-by-developer="true", or a vary-by-header on the Ocp-Apim-Subscription-Key header) caches a separate response per subscription key.

B

Use a global cache with no variation

C

Disable caching and rely on the backend

D

Use rate limiting policy

Why: By default, API Management treats all requests to the same URL as identical for caching purposes. To differentiate responses by subscription key, configure the <cache-lookup> policy to vary the cached entry by subscription (for example, vary-by-developer="true"). This ensures each subscription key gets its own cached response; a global cache without variation would serve incorrect cached data to different callers.

You have an order processing system using Azure Service Bus. Each order generates multiple messages that must be processed in order and by the same consumer. Which Service Bus feature ensures this?

A

Message sessions

Correct. Sessions guarantee ordered, first-in-first-out (FIFO) delivery and that messages in a session are handled by a single consumer.

B

Topics and subscriptions

C

Dead-letter queues

D

Auto-forwarding

Why: Service Bus sessions enable strict message ordering and guarantee that all messages in a session are handled by the single receiver that holds the session lock. By setting the SessionId property on messages belonging to the same order, you ensure FIFO processing. Topics and subscriptions provide pub/sub but no ordering guarantee; dead-letter queues hold undeliverable messages; auto-forwarding chains queues but does not enforce ordering.

Want more Connect to and consume Azure services and third-party services practice?

Practice this domain

Frequently asked questions

How many questions are on the AZ-204 exam?

The AZ-204 exam has up to 60 questions and must be completed in 100 minutes. The passing score is 700/1000.

What types of questions appear on the AZ-204 exam?

The AZ-204 exam uses multiple-choice, multiple-select, drag-and-drop, and exhibit-based questions. Exhibit questions show code snippets, CLI output, or architecture diagrams and ask you to interpret them — exactly the format Courseiva uses.

How are AZ-204 questions organised by domain?

The exam covers 5 domains: Develop Azure compute solutions; Develop for Azure storage; Implement Azure security; Monitor, troubleshoot, and optimize Azure solutions; and Connect to and consume Azure services and third-party services. Questions are weighted by domain — higher-weight domains appear more often on your actual exam.

Are these the actual AZ-204 exam questions?

No. These are original exam-style practice questions written against the official Microsoft AZ-204 exam objectives. They are not copied from the real exam. Courseiva focuses on genuine understanding, not memorisation of braindumps.

Ready to practice all 60 AZ-204 questions?

Courseiva tracks your accuracy per domain and routes you toward weak areas automatically. Free, no account required.