Microsoft · Free Practice Questions · Last reviewed May 2026
24 real exam-style questions organised by domain, each with the correct answer highlighted and a plain-English explanation of why it's right — and why the others are wrong.
A company stores customer names, addresses, and order history. They need to perform complex queries that join customer and order data. Which type of data store is most appropriate for this scenario?
Key-value store
Relational database
Relational databases organize data into tables with defined schemas and support SQL queries including joins, making them ideal for this requirement.
Document database
Graph database
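The join described in the explanation can be sketched in a few lines of SQL, here run through Python's built-in sqlite3 module (table and column names are illustrative, not from the scenario):

```python
import sqlite3

# In-memory database with illustrative customer and order tables
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, name TEXT, city TEXT);
    CREATE TABLE orders (order_id INTEGER PRIMARY KEY, customer_id INTEGER,
                         total REAL, FOREIGN KEY (customer_id) REFERENCES customers);
    INSERT INTO customers VALUES (1, 'Ada', 'Leeds'), (2, 'Grace', 'York');
    INSERT INTO orders VALUES (10, 1, 25.0), (11, 1, 40.0), (12, 2, 15.0);
""")

# The defined schema lets one SQL query join customer and order data
rows = conn.execute("""
    SELECT c.name, COUNT(o.order_id) AS order_count, SUM(o.total) AS spend
    FROM customers c JOIN orders o ON o.customer_id = c.customer_id
    GROUP BY c.name ORDER BY c.name
""").fetchall()
print(rows)  # [('Ada', 2, 65.0), ('Grace', 1, 15.0)]
```

A key-value or document store could hold the same records, but the join and aggregation across two entities is exactly what the relational model makes cheap.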
A retail company captures real-time sensor data from IoT devices to detect anomalies and predict equipment failures. The data must be processed immediately as it arrives. Which type of data processing workload best describes this scenario?
Batch processing
Stream processing
Stream processing ingests and analyzes data in real time, enabling prompt anomaly detection and failure prediction from IoT sensor feeds.
Online transaction processing (OLTP)
Data warehousing
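The distinction matters in code: a streaming workload evaluates each event the moment it arrives instead of collecting a batch for later analysis. A minimal sketch (device names and the threshold are invented):

```python
THRESHOLD = 90.0  # illustrative anomaly threshold

def process_stream(readings):
    """Yield an alert for each reading as soon as it crosses the threshold."""
    for device_id, value in readings:
        if value > THRESHOLD:
            yield f"ALERT {device_id}: {value}"

# A batch job would wait for all of these; the generator reacts per event
incoming = [("pump-1", 71.2), ("pump-2", 95.5), ("pump-1", 88.0), ("pump-2", 97.1)]
alerts = list(process_stream(incoming))
print(alerts)  # ['ALERT pump-2: 95.5', 'ALERT pump-2: 97.1']
```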
Which classification of data describes information that has a fixed schema and is organized into rows and columns, such as data found in a relational database table?
Unstructured data
Semi-structured data
Structured data
Structured data conforms to a fixed schema, typically in tables with rows and columns. This is the standard format for relational database systems.
Transformed data
A logistics company stores shipping waybill data as JSON documents. Each document contains fields like 'shipmentId', 'destination', and 'items', but the number of items and the fields within each item can vary between shipments. Which category best describes this type of data?
Operational data
Semi-structured data
JSON documents with optional fields and variable structures are a classic example of semi-structured data, which has some organizational properties but no rigid schema.
Unstructured data
Structured data
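A small illustration of the waybill scenario (field values are invented): both documents share 'shipmentId' and 'destination', but their 'items' arrays differ in length and in which fields each item carries, so consuming code must tolerate absent fields.

```python
import json

# Two illustrative waybill documents with varying item structures
waybills = [
    json.loads('{"shipmentId": "S-1", "destination": "Oslo",'
               ' "items": [{"sku": "A1", "weightKg": 2.5}]}'),
    json.loads('{"shipmentId": "S-2", "destination": "Kyoto",'
               ' "items": [{"sku": "B7"}, {"sku": "C3", "fragile": true}]}'),
]

# Semi-structured data: fields may or may not be present, so use .get()
for wb in waybills:
    for item in wb["items"]:
        print(wb["shipmentId"], item["sku"], item.get("fragile", False))
```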
A consulting firm collects client information in two forms: a spreadsheet with columns for Name, Address, and Phone Number, and audio recordings of client meetings. Which of the following statements correctly categorizes these data types?
Both the spreadsheet data and the audio recordings are examples of structured data.
The spreadsheet data is structured, and the audio recordings are semi-structured.
The spreadsheet data is structured, and the audio recordings are unstructured.
Correct. The spreadsheet has a fixed schema (columns) making it structured; audio recordings have no defined schema, making them unstructured.
The spreadsheet data is semi-structured, and the audio recordings are unstructured.
A company operates an online store that processes customer orders. When a customer places an order, the system must immediately reduce the inventory count for the purchased items and record the order details. At the end of each month, the company runs reports that aggregate sales data over the past month to analyze trends. Which type of data processing workload best describes the order placement activity?
Transactional processing
Order placement involves immediate, real-time updates to inventory and order records, requiring transactional consistency and ACID properties. This is a classic example of an Online Transaction Processing (OLTP) workload.
Analytical processing
Batch processing
Stream processing
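The ACID behaviour the explanation mentions can be sketched with SQLite (table and item names are illustrative): the stock decrement and the order insert either both commit or both roll back.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory (item TEXT PRIMARY KEY, count INTEGER)")
conn.execute("CREATE TABLE orders (item TEXT, qty INTEGER)")
conn.execute("INSERT INTO inventory VALUES ('widget', 5)")
conn.commit()

def place_order(item, qty):
    """Decrement stock and record the order atomically: both succeed or neither."""
    try:
        with conn:  # opens a transaction; commits on success, rolls back on error
            cur = conn.execute(
                "UPDATE inventory SET count = count - ? WHERE item = ? AND count >= ?",
                (qty, item, qty))
            if cur.rowcount == 0:
                raise ValueError("insufficient stock")
            conn.execute("INSERT INTO orders VALUES (?, ?)", (item, qty))
        return True
    except ValueError:
        return False

print(place_order("widget", 3))   # True: stock drops to 2, order recorded
print(place_order("widget", 10))  # False: rolled back, nothing recorded
```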
Want more Describe core data concepts practice?
A company is migrating an on-premises SQL Server database to Azure. They want to ensure that database administrators (DBAs) can perform administrative tasks but cannot view sensitive customer data in query results. Which Azure SQL feature should they implement?
Dynamic Data Masking
Always Encrypted
Always Encrypted encrypts data on the client side, so the database never sees plaintext. DBAs cannot access the encryption keys and therefore cannot view the sensitive data.
Transparent Data Encryption
Row-Level Security
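To make the contrast concrete, here is a toy sketch (not Azure's actual algorithm) of the kind of presentation-layer email mask Dynamic Data Masking applies. Because masking is applied to query results for unprivileged users only, a DBA with elevated rights can still reach the underlying plaintext, which is why Always Encrypted is the stronger answer for this scenario.

```python
def mask_email(value: str) -> str:
    """Toy email mask: keep the first character, hide the rest and the domain."""
    local, _, _ = value.partition("@")
    return (local[:1] + "XXX@XXXX.com") if local else value

print(mask_email("maria.santos@contoso.com"))  # mXXX@XXXX.com
```

With Always Encrypted, by contrast, the sensitive column never exists in plaintext on the server at all, so there is nothing for an administrator to unmask.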
A software-as-a-service (SaaS) provider hosts a multi-tenant application with a separate database for each tenant. They anticipate scaling to thousands of tenants and want to minimize cost while allowing tenants to share resources flexibly. Which Azure SQL offering is most suitable?
Azure SQL Database elastic pool
Elastic pools provide a cost-effective way to manage and scale multiple databases with fluctuating resource needs, ideal for multi-tenant SaaS scenarios.
Azure SQL Database (single database)
Azure SQL Managed Instance
SQL Server on Azure Virtual Machine
A company runs an e-commerce application backed by an on-premises SQL Server database. They plan to migrate to Azure SQL Database and require automatic failover across two Azure regions for disaster recovery. The application must continue to connect using the same connection string after a failover, with no code changes. Which feature should they implement?
Active Geo-Replication
Elastic pools
Failover groups
Failover groups enable automatic asynchronous replication and automatic failover across regions. The application connects to a listener endpoint that remains unchanged after failover, requiring no code changes.
SQL Server on Azure Virtual Machine with Always On Availability Groups
A company is migrating a legacy on-premises database to Azure. They require the ability to run cross-database queries within the same logical server, full control over database collation settings, and want to minimize management overhead for infrastructure patching. The database size is under 1 TB and they do not need instance-level features like SQL Agent jobs or linked servers. Which Azure SQL offering should they choose?
Azure SQL Database
Azure SQL Database is a PaaS service that handles patching, supports elastic query for cross-database queries, and allows collation settings on a per-database level. It does not include SQL Agent or linked servers, which are not required here.
Azure SQL Managed Instance
SQL Server on Azure Virtual Machine
Azure Synapse SQL pool
A company is migrating an on-premises SQL Server database to Azure. The database uses SQL Server Integration Services (SSIS) packages for daily ETL processes. The company wants to minimize administrative overhead for patching and backup management, but needs to retain full control over instance-level configurations and support for SSIS. Which Azure SQL service should they choose?
Azure SQL Database
Azure SQL Managed Instance
Azure SQL Managed Instance supports SSIS and provides instance-level control with automated patching and backups, minimizing overhead.
Azure Synapse Analytics
SQL Server on Azure Virtual Machines
A startup is developing a web application that requires a relational database with PostgreSQL compatibility. They want a fully managed service that automatically handles backups, patching, and provides high availability with a 99.99% SLA. Which Azure service should they choose?
Azure Database for PostgreSQL
Azure Database for PostgreSQL (Flexible Server) is a fully managed PostgreSQL service with automatic backups, patching, and zone-redundant high availability offering a 99.99% SLA. It is the ideal choice for a PostgreSQL-compatible relational database.
Azure SQL Database
Azure Database for MySQL
Azure Cosmos DB for PostgreSQL
Want more Identify considerations for relational data on Azure practice?
A social media application stores user profile data as JSON documents. Each user's document has a different structure, with fields that vary based on user activity. The application needs to query these documents efficiently using SQL-like syntax and support high write throughput. Which Azure data store is most appropriate for this workload?
Azure SQL Database
Azure Blob Storage
Azure Cosmos DB
Azure Cosmos DB is a globally distributed, multi-model NoSQL database that supports JSON documents natively. It allows flexible schemas, SQL-like querying, and high throughput, making it ideal for this scenario.
Azure Table Storage
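A minimal sketch of the schema flexibility described above (document contents and field names are invented): each JSON document can carry different fields, and a filter is applied over whatever attributes are present, much as a Cosmos DB query like `SELECT * FROM c WHERE c.followers > 100` would be.

```python
# Illustrative user-profile documents: no fixed schema, fields vary per user
profiles = [
    {"id": "u1", "name": "Ana", "followers": 250, "badges": ["early-adopter"]},
    {"id": "u2", "name": "Ben", "followers": 40},              # no badges field
    {"id": "u3", "name": "Chen", "followers": 900, "verified": True},
]

# Query over a flexible schema: missing fields default rather than error
popular = [p["name"] for p in profiles if p.get("followers", 0) > 100]
print(popular)  # ['Ana', 'Chen']
```

In a document database this filtering runs server-side against indexed JSON, rather than in application code as sketched here.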
A ride-sharing application needs to store real-time GPS location updates from drivers and passengers. The data is ingested as key-value pairs where the key is the user ID and the value is a timestamped location. The application requires low-latency reads and writes for millions of concurrent users, and the data model is simple with no need for complex queries or joins. Which Azure NoSQL database API should be used for this workload?
Azure Cosmos DB Table API
The Table API is designed for key-value storage with simple queries by partition key and row key, providing low-latency access at global scale. It is ideal for this type of high-throughput, simple data access pattern.
Azure Cosmos DB SQL (Core) API
Azure Cosmos DB for MongoDB API
Azure Cosmos DB for Apache Gremlin API
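The key-value access pattern in the answer above can be sketched with an in-memory dictionary (key and entity fields are illustrative): the Table model addresses every entity by a partition key plus a row key, lookups go straight to that key with no joins, and the latest write simply replaces the previous value.

```python
store = {}  # (partition_key, row_key) -> entity

def upsert(partition_key, row_key, entity):
    """Insert or replace the entity stored under this compound key."""
    store[(partition_key, row_key)] = entity

# Two location updates for the same driver: last write wins
upsert("drivers", "user-42", {"lat": 59.91, "lon": 10.75, "ts": 1718000000})
upsert("drivers", "user-42", {"lat": 59.92, "lon": 10.74, "ts": 1718000005})

print(store[("drivers", "user-42")]["lat"])  # 59.92
```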
A global social media platform stores user profile images (JPEG) and activity logs in JSON format. The logs have varying structures based on the type of activity. The application requires low-latency reads of images from any region and the ability to query logs using SQL-like syntax. Which Azure data storage solution should they use for each data type?
Azure Table Storage for images and Azure Cosmos DB (Table API) for logs
Azure Blob Storage with a CDN for images and Azure Cosmos DB (SQL API) for logs
Blob Storage efficiently stores unstructured images, and CDN ensures low-latency global access. Cosmos DB SQL API provides SQL-like queries for the varying JSON logs.
Azure Files for images and Azure SQL Database for logs
Azure Disk Storage for images and Azure Cosmos DB (MongoDB API) for logs
A retail company stores product catalog data as JSON documents. Each product has a different set of attributes depending on its category (e.g., electronics have 'voltage', clothing has 'size'). The application needs to query products by category and price range efficiently. Which Azure data store is most appropriate for this workload?
Azure Cosmos DB
Correct. Cosmos DB is a NoSQL database that supports schema-flexible JSON documents and provides fast queries on any attribute, ideal for product catalogs with varying attributes.
Azure SQL Database
Azure Blob Storage
Azure Table Storage
A media company stores large video files and associated metadata (title, duration, tags) as JSON documents. The application requires low-latency streaming of videos to users worldwide and the ability to quickly query metadata by tag. Which combination of Azure services should the company use?
Azure Blob Storage for videos and Azure Cosmos DB for metadata
Correct. Blob Storage handles large video files efficiently, while Cosmos DB provides fast, indexed querying on flexible JSON metadata.
Azure Blob Storage for both videos and metadata
Azure Cosmos DB for videos and Azure Table Storage for metadata
Azure Files for videos and Azure SQL Database for metadata
A global gaming company develops a multiplayer game. Player profile data (username, email, preferences) is stored as simple key-value pairs and must be accessible with single-digit millisecond latency from any region. Game session logs are stored as JSON documents with varying fields (session ID, player actions, timestamps) and must be queryable by player ID and timestamp range using SQL-like syntax. The company wants to use a single Azure database service for both workloads. Which combination of Azure Cosmos DB APIs should they choose?
Table API for profiles and SQL API for logs
The Table API provides key-value storage with single-digit millisecond latencies, ideal for player profiles. The SQL API supports JSON documents and full SQL query syntax, perfect for querying session logs by player ID and timestamp.
SQL API for both profiles and logs
MongoDB API for profiles and Cassandra API for logs
Table API for both profiles and logs
Want more Describe considerations for working with non-relational data on Azure practice?
A manufacturer collects sensor data from thousands of IoT devices every second. The data is ingested into Azure Event Hubs and then needs to be stored for historical analysis. The analytics team will run complex aggregations and time-series queries over petabytes of data, expecting fast results even with large scans. Which Azure service should be used as the analytical data store?
Azure Data Lake Storage Gen2
Azure SQL Database
Azure Synapse Analytics dedicated SQL pool
Azure Synapse Analytics dedicated SQL pool uses MPP and columnar storage to execute complex queries over huge datasets efficiently. It is purpose-built for large-scale data warehousing and analytical workloads.
Azure Cosmos DB
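The columnar-storage advantage mentioned in the explanation can be illustrated with a toy comparison (data is invented): summing one field over a row layout touches every row in full, while a column layout scans only the single column the aggregate needs, which is what makes large analytical scans fast.

```python
# Row layout: every whole row must be touched to sum one field
rows = [{"ts": i, "device": "d1", "temp": 20.0 + i % 3} for i in range(6)]
row_total = sum(r["temp"] for r in rows)

# Column layout: the same data stored per column; the aggregate reads
# just the 'temp' column and never touches 'ts' or 'device'
columns = {"ts": [r["ts"] for r in rows],
           "device": [r["device"] for r in rows],
           "temp": [r["temp"] for r in rows]}
col_total = sum(columns["temp"])

print(row_total == col_total)  # same answer, far less data scanned at scale
```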
A manufacturing company has a streaming data pipeline that ingests sensor data from factory equipment into Azure Event Hubs. The data must be prepared for reporting by cleaning invalid records, removing duplicates, and aggregating readings into 5-minute windows. The transformed data needs to be stored in a columnar format in a data lake to support efficient querying by data analysts using SQL. Which Azure service should perform the data transformation and loading?
Azure Data Factory
Azure Databricks
Azure Stream Analytics
Azure Stream Analytics is a serverless real-time analytics service that can ingest data from Event Hubs, perform time-windowed aggregations, clean data, and output to Azure Data Lake Storage in the desired columnar format. It is the most straightforward and cost-effective choice for this streaming ETL scenario.
Azure Synapse Pipelines
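The 5-minute windowed aggregation described above corresponds to what Stream Analytics calls a tumbling window: fixed, non-overlapping intervals keyed by event timestamp. A minimal Python sketch of the grouping logic (timestamps in seconds and values are invented):

```python
from collections import defaultdict

WINDOW = 300  # 5-minute tumbling window, in seconds

# (timestamp, reading) pairs as they might arrive from the event stream
readings = [(10, 1.0), (120, 3.0), (290, 2.0), (310, 4.0), (599, 6.0), (600, 5.0)]

windows = defaultdict(list)
for ts, value in readings:
    windows[ts // WINDOW * WINDOW].append(value)  # window start time as the key

averages = {start: sum(vals) / len(vals) for start, vals in sorted(windows.items())}
print(averages)  # {0: 2.0, 300: 5.0, 600: 5.0}
```

In Stream Analytics the same logic is a SQL query with a `TumblingWindow` grouping rather than hand-written code.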
A data analytics team stores sales transaction data in Parquet files in Azure Data Lake Storage Gen2. They want to run complex analytical queries that join this data with dimension tables stored in Azure Synapse Analytics dedicated SQL pool. The team prefers not to move or copy the data from the data lake. Which feature should they use to query the data lake data directly?
Azure Data Factory pipelines
PolyBase external tables
PolyBase enables Synapse to create external tables that query data in the data lake without moving it.
Azure Stream Analytics
Azure Databricks notebooks
A healthcare analytics company receives continuous streams of patient monitoring data from IoT devices. The data must be processed in near real-time to detect critical events (e.g., abnormal heart rate). Processed data is then stored in a columnar format for historical analysis and reporting by data analysts using SQL. Which combination of Azure services should they use for ingestion, processing, and storage?
Azure Event Hubs, Azure Stream Analytics, Azure Synapse Analytics
Event Hubs ingests data in real-time. Stream Analytics processes the stream to detect events and transform data. Synapse Analytics provides a columnar data warehouse for historical analysis. This combination fits the requirements exactly.
Azure IoT Hub, Azure Data Factory, Azure SQL Data Warehouse
Azure Event Hubs, Azure Stream Analytics, Azure Cosmos DB
Azure Blob Storage, Azure Databricks, Azure Table Storage
A retail chain collects daily sales data from hundreds of stores. The data is stored as CSV files in Azure Data Lake Storage Gen2. The analytics team needs to run complex SQL queries that join sales data with product dimensions and aggregate results across petabytes of data. Queries must return results within seconds. Which Azure service is best suited for this analytical workload?
Azure Synapse Analytics
Correct. Synapse Analytics provides a SQL-based engine optimized for large-scale analytical queries and can directly query data in Data Lake Storage through PolyBase external tables or OPENROWSET.
Azure SQL Database
Azure Analysis Services
Azure HDInsight
A financial analytics company has petabytes of transaction data stored as Parquet files in Azure Data Lake Storage Gen2. Data analysts need to run complex SQL queries that join multiple tables and return results within seconds. The company wants to query the data directly without moving it to another store. Which Azure service should they use?
Azure SQL Database
Azure Synapse Serverless SQL pool
Serverless SQL pool can directly query Parquet files in the data lake using standard T-SQL and scales automatically for large datasets.
Azure HDInsight
Azure Databricks
Want more Describe an analytics workload on Azure practice?
The DP-900 exam has up to 60 questions and must be completed in 60 minutes. The passing score is 700/1000.
The DP-900 exam uses multiple-choice, multiple-select, drag-and-drop, and exhibit-based questions. Exhibit questions present a diagram, sample data, or query output and ask you to interpret it — the same format Courseiva uses.
The exam covers 4 domains: Describe core data concepts, Identify considerations for relational data on Azure, Describe considerations for working with non-relational data on Azure, Describe an analytics workload on Azure. Questions are weighted by domain — higher-weight domains appear more on your actual exam.
No. These are original exam-style practice questions written against the official Microsoft DP-900 exam objectives. They are not copied from the real exam. Courseiva focuses on genuine understanding, not memorisation of braindumps.
Courseiva tracks your accuracy per domain and routes you toward weak areas automatically. Free, no account required.