A manufacturing company deploys IoT sensors on equipment in a factory. They need to monitor sensor data in real time to detect anomalies and trigger immediate alerts. They also need to store years of historical sensor data for monthly capacity planning reports that involve complex aggregations. The company wants a cost-effective solution that minimizes data movement between storage and compute. Which combination of Azure services should they use for real-time processing and historical batch analytics?
Answer choices
Why each option matters
Good practice means more than finding the correct option: the wrong answers often reveal the exact trap the exam wants you to fall into.
Best answer
A. Azure Stream Analytics for real-time processing, Azure Data Lake Storage Gen2 for historical storage, and Azure Synapse Analytics for batch queries.
This combination correctly pairs a real-time stream processing engine (Stream Analytics) with a scalable data lake (Data Lake Storage) and an analytics service (Synapse Analytics) that can query the lake directly, minimizing data movement.
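To make the real-time half concrete, here is a minimal sketch of a Stream Analytics query for this kind of anomaly detection. The input and output aliases (`sensor-input`, `alerts-output`), the field names, and the threshold are illustrative assumptions, not details from the question:

```sql
-- Hypothetical Stream Analytics job: flag equipment whose average
-- temperature over a 1-minute tumbling window exceeds a threshold.
-- [sensor-input] and [alerts-output] are placeholder job aliases.
SELECT
    deviceId,
    AVG(temperature) AS avgTemp,
    System.Timestamp() AS windowEnd
INTO
    [alerts-output]
FROM
    [sensor-input] TIMESTAMP BY eventTime
GROUP BY
    deviceId,
    TumblingWindow(minute, 1)
HAVING
    AVG(temperature) > 90
```

The `TumblingWindow` aggregation is exactly the built-in windowing capability that distractors like Azure Functions lack out of the box.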
Distractor review
B. Azure Data Factory for real-time processing, Azure Cosmos DB for historical storage, and Power BI for batch queries.
Azure Data Factory is an orchestration tool, not a real-time stream processor. Cosmos DB is a transactional database and not cost-effective for large-scale historical storage. Power BI is a visualization tool, not a batch query engine.
Distractor review
C. Azure Functions for real-time processing, Azure Table Storage for historical storage, and Azure Analysis Services for batch queries.
Azure Functions can handle stream processing for low-throughput scenarios but lacks built-in support for windowed aggregations and exactly-once semantics common in industrial streaming. Azure Table Storage is a key-value store not suited for complex analytical queries. Azure Analysis Services requires data to be loaded into a model, incurring additional data movement.
Distractor review
D. Azure Event Hubs for real-time processing, Azure SQL Database for historical storage, and Azure Machine Learning for batch queries.
Event Hubs is a data ingestion service, not a processing engine. Azure SQL Database is not cost-effective for storing terabytes to petabytes of historical data and has limited capacity for complex aggregations. Azure Machine Learning is for predictive modeling, not general-purpose batch analytics.
Common exam trap
Common exam trap: service roles are not interchangeable
Each Azure service in these options has a specific role: Event Hubs ingests streams but does not process them, Data Factory orchestrates pipelines but is not a real-time engine, and Power BI visualizes results but is not a batch query engine. The distractors work by blurring these roles.
Technical deep dive
How to think about this question
Architecture questions like this usually test whether you can map workload characteristics — latency, data volume, query complexity and cost — to the right Azure service. Separate the real-time path from the batch path, then check which combination avoids unnecessary data movement between storage and compute.
Key Concepts to Remember
- Azure Stream Analytics processes streaming data in real time using a SQL-like language with windowed aggregations.
- Azure Data Lake Storage Gen2 provides cost-effective, scalable storage for years of raw and historical data.
- Azure Synapse Analytics can query files in the data lake directly, minimizing data movement.
- Splitting a workload into a real-time (hot) path and a batch (cold) path is a common pattern for IoT analytics.
Exam Day Tips
- Identify the real-time and batch requirements separately before matching services to them.
- Eliminate options that misuse a service's role (ingestion vs orchestration vs processing vs visualization).
- Prefer architectures that query data where it lives instead of copying it between stores.
Related practice questions
Related DP-900 practice-question pages
Use these pages to review the topic behind this question. This is how one missed question becomes focused revision.
More questions from this exam
Keep practising from the same exam bank, or move into a focused topic page if this question exposed a weak area.
Question 1
A data engineer needs to process streaming data from IoT devices and store the results in Azure Data Lake Storage for long-term analytics. The data must be processed in near real-time to detect anomalies and trigger alerts. Which Azure service should the engineer use for stream processing?
Question 2
A data engineer needs to query data stored in CSV files in Azure Data Lake Storage Gen2 using T-SQL in Azure Synapse Analytics, without loading the data into the database. Which feature should they use?
Question 3
A data engineer needs to process raw clickstream data from multiple websites that is stored in Azure Blob Storage as JSON files. The processing must run automatically every hour, transform the data into a structured format for reporting, and handle schema changes in the source data without manual intervention. Which Azure service should be used?
Question 4
A data engineer is designing a data lake architecture in Azure. They plan to first ingest raw data from various sources into a landing zone in Azure Data Lake Storage Gen2. Then they will clean, validate, and deduplicate that data in a second zone. Finally, they will create aggregated, business-ready datasets in a third zone for analysts. This layered approach is known as which architecture?
Question 5
A data engineer needs to transform large datasets stored in Azure Data Lake Storage Gen2 using Python and Apache Spark. They want a serverless compute option that automatically scales and requires no cluster management. Which Azure service should they use?
Question 6
A company collects customer feedback forms. Each form contains always-present fields like CustomerID and SubmissionDate, but also a free-text Comments field and optional fields like Rating or ProductCategory that vary between forms. How should this data be classified?
FAQ
Questions learners often ask
What does this DP-900 question test?
It tests whether you can combine Azure Stream Analytics, Azure Data Lake Storage Gen2 and Azure Synapse Analytics into a single architecture that delivers real-time anomaly detection and cost-effective historical batch analytics with minimal data movement.
What is the correct answer to this question?
The correct answer is: A. Azure Stream Analytics for real-time processing, Azure Data Lake Storage Gen2 for historical storage, and Azure Synapse Analytics for batch queries. — Azure Stream Analytics is ideal for real-time processing of streaming data with low latency. Azure Data Lake Storage Gen2 provides a cost-effective, scalable storage for large volumes of historical data. Azure Synapse Analytics (with serverless or dedicated SQL pools) enables complex queries over the data in the lake without requiring data movement. Option B (Azure Data Factory for real-time) is incorrect because Data Factory is for orchestration, not real-time stream processing. Option C (Azure Functions) can handle event-driven processing but is not designed for high-throughput streaming and lacks built-in windowing for aggregations. Option D (Azure SQL Database) is not cost-effective for storing petabytes of historical data and is not optimized for large-scale batch analytics.
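To see why "without requiring data movement" holds, a Synapse serverless SQL pool can query Parquet files in the lake in place using `OPENROWSET`. The sketch below is illustrative only — the storage account name, container and path are placeholders:

```sql
-- Hypothetical serverless SQL pool query over historical sensor data
-- stored as Parquet in Data Lake Storage Gen2 (all paths are placeholders).
SELECT
    deviceId,
    DATEPART(month, eventTime) AS reportMonth,
    AVG(temperature) AS avgTemp,
    MAX(temperature) AS maxTemp
FROM OPENROWSET(
    BULK 'https://<storage-account>.dfs.core.windows.net/sensors/history/*.parquet',
    FORMAT = 'PARQUET'
) AS sensorRows
GROUP BY deviceId, DATEPART(month, eventTime);
```

Because the query runs over the files where they sit, the monthly capacity-planning aggregations never require copying the historical data into a separate database first.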
What should I do if I get this DP-900 question wrong?
Review why the correct combination works, then try more questions from the same exam bank, focusing on why each wrong option is tempting.