DP-900 practice question · medium · multiple choice · objective-mapped

A manufacturing company collects sensor readings from thousands of IoT devices. Each reading consists of a device ID, a timestamp, and a numeric value. The data is stored as key-value pairs and must support low-latency reads and writes at a global scale. The company also needs to query the data by device ID and time range. Which Azure Cosmos DB API should they choose?

Answer choices

Why each option matters

Good practice means more than finding the correct option: the wrong answers often reveal the exact trap the exam wants you to fall into.

A. Core (SQL) API (distractor)

The Core (SQL) API supports SQL-like queries and is ideal for JSON documents with varying structure, but it is not optimized for simple key-value access patterns.

B. MongoDB API (distractor)

The MongoDB API targets document-oriented data and supports a rich query model, but the data here consists of simple key-value pairs, not documents.

C. Table API (best answer)

The Table API is built for key-value workloads and stores data as items with a partition key and row key. It allows efficient point reads and range queries, making it a natural fit for IoT sensor data; see the sketch after this option review.

D. Gremlin API (distractor)

The Gremlin API is used for graph data and traversing relationships between nodes, which does not apply to key-value sensor readings.
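
To make the key design concrete, here is a minimal sketch using the azure-data-tables Python SDK, which also works against an Azure Cosmos DB Table API account. This is an illustration under assumptions: the connection string, table name, device ID, and the "value" field are placeholders, not details taken from the question.

    from azure.data.tables import TableServiceClient

    # Placeholder connection string for a Cosmos DB Table API account
    # (found under Settings > Connection String in the Azure portal).
    service = TableServiceClient.from_connection_string(
        "<cosmos-table-api-connection-string>"
    )
    table = service.create_table_if_not_exists(table_name="SensorReadings")

    # PartitionKey = device ID keeps all readings for one device together;
    # RowKey = ISO-8601 timestamp keeps rows sorted by time within it.
    table.create_entity({
        "PartitionKey": "device-0042",
        "RowKey": "2024-05-01T08:15:00Z",
        "value": 21.7,
    })

    # Point read: partition key + row key resolve to a single item.
    reading = table.get_entity(
        partition_key="device-0042",
        row_key="2024-05-01T08:15:00Z",
    )
    print(reading["value"])

With this layout a point read is a single keyed lookup, and a time-range query stays inside one device's partition, which is what keeps access low-latency at scale.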

Common exam trap

Common exam trap: answer the scenario, not the keyword

Many certification questions include familiar terms but test a specific constraint. Read the exact wording before choosing an answer that is generally true but wrong for this case.

Technical deep dive

How to think about this question

This question should be treated as a scenario, not a definition check. Identify the problem, the constraint, and the best action, then compare each option against those facts. Here, the deciding constraints are the key-value shape of the data, the low-latency global read/write requirement, and the need to query by device ID and time range.

Key Concepts to Remember

  • Read the scenario before looking for a memorised answer.
  • Find the constraint that changes the correct option.
  • Eliminate answers that are true in general but not in this case.
  • Use explanations to understand the rule behind the answer.

Exam Day Tips

  • Underline the problem statement mentally.
  • Watch for qualifiers such as "best", "first", "most likely", and "least administrative effort".
  • Review why wrong options are wrong, not only why the correct option is correct.

Related practice questions

Related DP-900 practice-question pages

Use these pages to review the topic behind this question. This is how one missed question becomes focused revision.

More questions from this exam

Keep practising from the same exam bank, or move into a focused topic page if this question exposed a weak area.

Question 1

A data engineer needs to process streaming data from IoT devices and store the results in Azure Data Lake Storage for long-term analytics. The data must be processed in near real-time to detect anomalies and trigger alerts. Which Azure service should the engineer use for stream processing?

Question 2

A data engineer needs to query data stored in CSV files in Azure Data Lake Storage Gen2 using T-SQL in Azure Synapse Analytics, without loading the data into the database. Which feature should they use?

Question 3

A data engineer needs to process raw clickstream data from multiple websites that is stored in Azure Blob Storage as JSON files. The processing must run automatically every hour, transform the data into a structured format for reporting, and handle schema changes in the source data without manual intervention. Which Azure service should be used?

Question 4

A data engineer is designing a data lake architecture in Azure. They plan to first ingest raw data from various sources into a landing zone in Azure Data Lake Storage Gen2. Then they will clean, validate, and deduplicate that data in a second zone. Finally, they will create aggregated, business-ready datasets in a third zone for analysts. This layered approach is known as which architecture?

Question 5

A data engineer needs to transform large datasets stored in Azure Data Lake Storage Gen2 using Python and Apache Spark. They want a serverless compute option that automatically scales and requires no cluster management. Which Azure service should they use?

Question 6

A company collects customer feedback forms. Each form contains always-present fields like CustomerID and SubmissionDate, but also a free-text Comments field and optional fields like Rating or ProductCategory that vary between forms. How should this data be classified?

FAQ

Questions learners often ask

What does this DP-900 question test?

It tests whether you can match an Azure Cosmos DB API to a workload: here, key-value sensor data that needs low-latency global access and queries by device ID and time range. Read the scenario before reaching for a memorised answer.

What is the correct answer to this question?

The correct answer is: Table API — Azure Cosmos DB offers multiple APIs. The Table API is designed for key-value workloads with a schema of partition key and row key, providing low-latency access and efficient queries by key. The Core (SQL) API is better for document data, MongoDB API for JSON documents, and Gremlin API for graph data.
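
As a hedged illustration of the "query by device ID and time range" requirement, the filter below assumes the key design sketched earlier (device ID as PartitionKey, ISO-8601 timestamp as RowKey); the connection string and values are placeholders.

    from azure.data.tables import TableClient

    client = TableClient.from_connection_string(
        conn_str="<cosmos-table-api-connection-string>",
        table_name="SensorReadings",
    )

    # ISO-8601 row keys sort lexicographically, so a time window becomes
    # a RowKey range scan inside a single device's partition.
    flt = (
        "PartitionKey eq 'device-0042' "
        "and RowKey ge '2024-05-01T00:00:00Z' "
        "and RowKey lt '2024-05-02T00:00:00Z'"
    )
    for entity in client.query_entities(query_filter=flt):
        print(entity["RowKey"], entity["value"])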

What should I do if I get this DP-900 question wrong?

Revisit the explanation for each option, then try more questions from the same exam bank and focus on understanding why the wrong options are tempting.
