A social media company stores user posts in Azure Cosmos DB. Each post document contains fields like postId, userId, content, timestamp, and an array of comments. The comments array can grow large (hundreds per post), and the application frequently retrieves a post without its comments to display in a feed. To optimize read performance and minimize request units (RU) consumption, which data modeling approach should the company adopt?
Answer choices
Why each option matters
Good practice means more than finding the correct option: the wrong answers often reveal the exact trap the exam wants you to fall into.
Distractor review
A. Store comments in a separate container to isolate the data.
Storing comments in a separate container can work, but it forces cross-container lookups and means managing throughput for an extra container. Within Cosmos DB, the simpler and more efficient pattern is to keep comments as separate documents in the same container, linked to the post by a reference field.
Best answer
B. Store comments as separate documents and reference them from the post document via a comments array of IDs.
This approach decouples comments from the post document. When retrieving a post for the feed, the application reads only the post document, avoiding the large comments array. This reduces RU consumption and improves latency. Comments can be loaded on demand when needed.
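As a rough sketch of the referenced model, the post document carries only comment IDs, so a feed read returns a small item. Field names beyond those given in the scenario (such as commentIds) are illustrative, not prescribed:

```python
import json

# Referenced model (option B): the post document stores only comment IDs,
# so reading it for the feed does not pull in the comment bodies.
post = {
    "id": "post-123",          # Cosmos DB items use "id" as the document key
    "postId": "post-123",
    "userId": "user-42",
    "content": "Hello, world!",
    "timestamp": "2024-01-15T10:30:00Z",
    "commentIds": ["c-1", "c-2", "c-3"],   # references, not embedded comments
}

# Each comment lives in its own document and is loaded only on demand.
comment = {
    "id": "c-1",
    "postId": "post-123",      # back-reference to the parent post
    "userId": "user-7",
    "text": "Nice post!",
}

# RU cost scales roughly with the size of the item read, so keeping the
# post document small keeps the frequent feed reads cheap.
post_bytes = len(json.dumps(post).encode())
print(post_bytes)
```

The embedded alternative would put hundreds of comment objects inside the post itself, so every feed read would pay for data the feed never displays.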
Distractor review
C. Use a vertical partition within the same document to separate the comments array.
Vertical partitioning within a document is not a concept in document databases such as Cosmos DB. A document is stored as a single JSON item; a query can project individual fields, but the engine still loads the whole item, so a large embedded comments array continues to drive up RU charges.
Distractor review
D. Migrate the data to Azure SQL Database to use normalized tables and indexes.
Migrating to a different database platform is not a data modeling optimization for Cosmos DB. The scenario explicitly involves Cosmos DB, and the best practice is to model the data appropriately within the same service.
Common exam trap
Common exam trap: answer the scenario, not the keyword
Many certification questions include familiar terms but test a specific constraint. Read the exact wording before choosing an answer that is generally true but wrong for this case.
Technical deep dive
How to think about this question
This question should be treated as a scenario, not a definition check. Identify the problem, the constraint, and the best action, then compare each option against those facts.
Key Concepts to Remember
- Read the scenario before looking for a memorised answer.
- Find the constraint that changes the correct option.
- Eliminate answers that are true in general but not in this case.
- Use explanations to understand the rule behind the answer.
Exam Day Tips
- Underline the problem statement mentally.
- Watch for words such as best, first, most likely and least administrative effort.
- Review why wrong options are wrong, not only why the correct option is correct.
Related practice questions
Related DP-900 practice-question pages
Use these pages to review the topic behind this question. This is how one missed question becomes focused revision.
More questions from this exam
Keep practising from the same exam bank, or move into a focused topic page if this question exposed a weak area.
Question 1
A data engineer needs to process streaming data from IoT devices and store the results in Azure Data Lake Storage for long-term analytics. The data must be processed in near real-time to detect anomalies and trigger alerts. Which Azure service should the engineer use for stream processing?
Question 2
A data engineer needs to query data stored in CSV files in Azure Data Lake Storage Gen2 using T-SQL in Azure Synapse Analytics, without loading the data into the database. Which feature should they use?
Question 3
A data engineer needs to process raw clickstream data from multiple websites that is stored in Azure Blob Storage as JSON files. The processing must run automatically every hour, transform the data into a structured format for reporting, and handle schema changes in the source data without manual intervention. Which Azure service should be used?
Question 4
A data engineer is designing a data lake architecture in Azure. They plan to first ingest raw data from various sources into a landing zone in Azure Data Lake Storage Gen2. Then they will clean, validate, and deduplicate that data in a second zone. Finally, they will create aggregated, business-ready datasets in a third zone for analysts. This layered approach is known as which architecture?
Question 5
A data engineer needs to transform large datasets stored in Azure Data Lake Storage Gen2 using Python and Apache Spark. They want a serverless compute option that automatically scales and requires no cluster management. Which Azure service should they use?
Question 6
A company collects customer feedback forms. Each form contains always-present fields like CustomerID and SubmissionDate, but also a free-text Comments field and optional fields like Rating or ProductCategory that vary between forms. How should this data be classified?
FAQ
Questions learners often ask
What does this DP-900 question test?
It tests Azure Cosmos DB data modeling: recognising when to reference related data in separate documents instead of embedding a large array, so that frequent reads of the parent document stay cheap in RU terms.
What is the correct answer to this question?
The correct answer is: B. Store comments as separate documents and reference them from the post document via a comments array of IDs. — In Azure Cosmos DB, embedding a large array in a document drives up RU consumption whenever the parent document is retrieved, even if the array is not needed. The recommended approach is to model the related items as separate documents linked by a reference, so the application reads only the small post document for the feed and loads comments on demand. Option A introduces unnecessary complexity with a separate container, option C describes vertical partitioning, which does not exist for JSON documents, and option D is a platform migration rather than a data modeling optimization. Option B is therefore the best choice.
What should I do if I get this DP-900 question wrong?
Then try more questions from the same exam bank and focus on understanding why the wrong options are tempting.