Hard · multiple choice · Objective-mapped

A retail company ingests daily sales data from multiple stores as CSV files stored in Azure Blob Storage. The data must be cleaned and transformed using Spark, then loaded into Azure Synapse Analytics for large-scale reporting. The pipeline must run on a schedule, handle failures with retries, and minimize manual intervention. Which combination of Azure services should they use to orchestrate and execute this pipeline?

Question 1 · hard · multiple choice

Answer choices

Why each option matters

Good practice is not just finding the correct option: the wrong answers often reveal the exact trap the exam wants you to fall into.

A

Best answer

Azure Data Factory, Azure Databricks, and Azure Synapse Analytics.

Correct. ADF schedules and orchestrates the pipeline (including retry policies on failed activities), Databricks performs the Spark-based transformations, and Synapse Analytics serves as the data warehouse for large-scale reporting.
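To make the "clean and transform" step concrete: in this architecture that logic runs as a Spark job in Azure Databricks, but the cleaning itself can be sketched in plain Python. The column names and cleaning rules below are hypothetical, chosen only to illustrate the kind of work the Spark job would do.

```python
import csv
import io

# Hypothetical daily sales rows as they might arrive from a store's CSV export.
RAW_CSV = """store_id,sale_date,amount
S001,2024-03-01,19.99
S001,2024-03-01,
S002,2024-03-01,42.50
"""

def clean_rows(text):
    """Drop rows with a missing amount and parse amounts to float.

    In the real pipeline this logic would run as a Spark transformation
    in Azure Databricks; here it is plain Python for illustration.
    """
    reader = csv.DictReader(io.StringIO(text))
    cleaned = []
    for row in reader:
        if not row["amount"]:
            continue  # skip incomplete records instead of failing the job
        row["amount"] = float(row["amount"])
        cleaned.append(row)
    return cleaned

rows = clean_rows(RAW_CSV)
print(len(rows))  # 2 valid rows survive the cleaning step
```

In Spark the same step would be a DataFrame filter plus a cast, applied in parallel across all stores' files; the point is that the transformation engine, not the orchestrator, owns this logic.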

B

Distractor review

Azure Stream Analytics, Azure Data Lake Storage, and Power BI.

Stream Analytics is designed for real-time stream processing, not the scheduled batch processing of CSV files this scenario requires. Data Lake Storage is a valid storage layer, but nothing in this combination performs Spark transformations, and Power BI is a visualization tool, not a transformation or storage engine.

C

Distractor review

Azure Functions, Azure SQL Database, and Azure Analysis Services.

Azure Functions is suitable for small-scale event-driven processing, not complex Spark transformations. Azure SQL Database is not designed for large-scale data warehousing workloads, and Analysis Services is a semantic model layer, not a data warehouse.

D

Distractor review

Azure Logic Apps, Azure HDInsight, and Azure Cosmos DB.

Logic Apps can orchestrate workflows but lacks deep integration with Spark. Azure HDInsight is a managed Hadoop/Spark service, but it is less tightly integrated with ADF than Databricks. Cosmos DB is a NoSQL database, not suited to the large-scale analytical queries run in Synapse.

Common exam trap

Common exam trap: matching the processing model to the workload

The exam likes to offer a streaming service (Stream Analytics) or event-driven compute (Functions) for what is clearly a scheduled batch workload. Words like "daily", "scheduled" and "CSV files in Blob Storage" all point to batch ETL, not real-time processing.

Technical deep dive

How to think about this question

Service-combination questions usually test whether you can map each requirement to exactly one service: orchestration and scheduling, a transformation engine, and an analytical store for reporting. Read the scenario terms carefully: batch versus streaming, and warehouse-scale reporting versus dashboards.

Key Concepts to Remember

  • Azure Data Factory orchestrates and schedules pipelines, with retry policies on activities.
  • Azure Databricks provides a managed Apache Spark environment for large-scale transformations.
  • Azure Synapse Analytics is the data-warehouse destination for large-scale analytical reporting.
  • Azure Stream Analytics processes real-time streams and is not a batch ETL tool.

Exam Day Tips

  • Identify first whether the workload is batch or streaming.
  • Map each requirement (orchestration, transformation, storage and reporting) to one service.
  • Do not pick a visualization tool or a NoSQL store where a data warehouse is required.

Related practice questions

Related DP-900 practice-question pages

Use these pages to review the topic behind this question. This is how one missed question becomes focused revision.

More questions from this exam

Keep practising from the same exam bank, or move into a focused topic page if this question exposed a weak area.

Question 1

A data engineer needs to process streaming data from IoT devices and store the results in Azure Data Lake Storage for long-term analytics. The data must be processed in near real-time to detect anomalies and trigger alerts. Which Azure service should the engineer use for stream processing?

Question 2

A data engineer needs to query data stored in CSV files in Azure Data Lake Storage Gen2 using T-SQL in Azure Synapse Analytics, without loading the data into the database. Which feature should they use?

Question 3

A data engineer needs to process raw clickstream data from multiple websites that is stored in Azure Blob Storage as JSON files. The processing must run automatically every hour, transform the data into a structured format for reporting, and handle schema changes in the source data without manual intervention. Which Azure service should be used?

Question 4

A data engineer is designing a data lake architecture in Azure. They plan to first ingest raw data from various sources into a landing zone in Azure Data Lake Storage Gen2. Then they will clean, validate, and deduplicate that data in a second zone. Finally, they will create aggregated, business-ready datasets in a third zone for analysts. This layered approach is known as which architecture?

Question 5

A data engineer needs to transform large datasets stored in Azure Data Lake Storage Gen2 using Python and Apache Spark. They want a serverless compute option that automatically scales and requires no cluster management. Which Azure service should they use?

Question 6

A company collects customer feedback forms. Each form contains always-present fields like CustomerID and SubmissionDate, but also a free-text Comments field and optional fields like Rating or ProductCategory that vary between forms. How should this data be classified?

FAQ

Questions learners often ask

What does this DP-900 question test?

It tests whether you can choose the right combination of Azure services for a scheduled batch ETL pipeline: an orchestrator (Azure Data Factory), a Spark transformation engine (Azure Databricks), and an analytical store (Azure Synapse Analytics).

What is the correct answer to this question?

The correct answer is: Azure Data Factory, Azure Databricks, and Azure Synapse Analytics. — Azure Data Factory (ADF) is a cloud-based ETL and data integration service that can orchestrate pipelines on a schedule, including executing Spark jobs in Azure Databricks. Azure Databricks provides a managed Spark environment for data transformation. Azure Synapse Analytics (formerly SQL Data Warehouse) is the target for large-scale analytics. The other combinations are less suitable: Stream Analytics is for real-time streaming, Functions is for event-driven code, Logic Apps handles workflows but lacks native Spark support, and while HDInsight is a managed Hadoop/Spark service, Databricks is the more modern choice for Spark. ADF + Databricks + Synapse is the standard modern architecture for batch analytics on Azure.
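The "handle failures with retries" requirement is met in ADF by setting a retry policy on each activity; conceptually the behaviour is retry-with-delay. The sketch below is illustrative only, not Azure Data Factory code, and the function names are hypothetical.

```python
import time

def run_with_retries(activity, retries=3, delay_seconds=0):
    """Run an activity, retrying up to `retries` times on failure.

    Mirrors what an ADF activity retry policy does conceptually:
    re-run the failed activity after a fixed wait, then give up
    and surface the last error once the retry budget is exhausted.
    """
    last_error = None
    for attempt in range(1 + retries):
        try:
            return activity()
        except Exception as err:
            last_error = err
            time.sleep(delay_seconds)  # ADF waits a configured interval between retries
    raise last_error

# Demo: an activity that fails twice, then succeeds on the third attempt.
calls = {"n": 0}
def flaky_copy_activity():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "copied"

print(run_with_retries(flaky_copy_activity, retries=3))  # copied
```

In a real pipeline the retry count and interval are properties of the activity definition, so the orchestrator absorbs transient failures without any manual intervention, which is exactly what the scenario asks for.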

What should I do if I get this DP-900 question wrong?

Then try more questions from the same exam bank and focus on understanding why the wrong options are tempting.
