A company is building a petabyte-scale data lake for analytics. They need a storage solution that supports a hierarchical namespace, POSIX-like permissions (ACLs), and is optimized for big data analytics workloads using Apache Spark and Hive. The data must be accessible over the Azure Blob Storage API. Which Azure data service should they use?
Answer choices
Why each option matters
Good practice is not just finding the correct option. The wrong answers often show the exact trap the exam wants you to fall into.
Distractor review
Azure Blob Storage (with flat namespace)
Standard Blob Storage does not support a hierarchical namespace or native POSIX ACLs, making it less suitable for Hadoop/Spark analytics that rely on directory structures.
Best answer
Azure Data Lake Storage Gen2
ADLS Gen2 combines Blob Storage with a hierarchical namespace and ACLs, enabling Hadoop-compatible access and high-performance analytics with Spark, Hive, and other tools.
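ADLS Gen2 exposes each container as a Hadoop-compatible filesystem addressed by an `abfss://` URI of the form `abfss://<filesystem>@<account>.dfs.core.windows.net/<path>`. As a minimal illustration of how the hierarchical namespace appears to Spark or Hive, the sketch below splits such a URI into its components; the account, container, and path names are hypothetical:

```python
from urllib.parse import urlparse

def parse_abfss_uri(uri: str) -> dict:
    """Split an ABFS URI of the form
    abfss://<filesystem>@<account>.dfs.core.windows.net/<path>
    into filesystem (container), storage account, and directory path."""
    parsed = urlparse(uri)
    if parsed.scheme not in ("abfs", "abfss"):
        raise ValueError(f"not an ABFS URI: {uri}")
    filesystem, _, host = parsed.netloc.partition("@")
    account = host.split(".")[0]  # host is '<account>.dfs.core.windows.net'
    return {
        "filesystem": filesystem,         # the blob container
        "account": account,               # the storage account
        "path": parsed.path.lstrip("/"),  # hierarchical directory path
    }

# Hypothetical data-lake path a Spark job might read:
uri = "abfss://datalake@contosoadls.dfs.core.windows.net/raw/sales/2024/part-0000.parquet"
parts = parse_abfss_uri(uri)
print(parts["filesystem"], parts["account"], parts["path"])
```

Because the namespace is genuinely hierarchical, the directory segments in `path` are real directories that can carry their own POSIX-style ACLs, rather than a flat key with `/` characters in it as in standard Blob Storage.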
Distractor review
Azure NetApp Files
NetApp Files provides NFS and SMB file shares, but it is not designed for petabyte-scale data lakes and does not offer Blob API compatibility for analytics frameworks.
Distractor review
Azure HPC Cache
HPC Cache is a caching service for high-performance computing, not a primary storage solution. It reduces latency but does not provide the namespace or ACL features required.
Common exam trap
ACLs stop at the first match
ACLs are processed top to bottom. The first matching entry wins, and an implicit deny usually exists at the end.
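The first-match rule can be made concrete with a short Python sketch (an illustration of the semantics, not any vendor's implementation): entries are checked top to bottom, the first match decides the outcome, and anything that falls through is denied implicitly.

```python
from ipaddress import ip_address, ip_network

# Standard-ACL style: each entry matches on source address only.
acl = [
    ("deny",   "10.0.5.0/24"),   # specific subnet blocked first
    ("permit", "10.0.0.0/16"),   # broader permit below it
    # implicit deny follows the last configured entry
]

def evaluate(acl, source: str) -> str:
    """Return the action of the FIRST matching entry; deny if none match."""
    src = ip_address(source)
    for action, network in acl:
        if src in ip_network(network):
            return action   # first match wins; later entries are never reached
    return "deny"           # the implicit deny at the end

print(evaluate(acl, "10.0.5.7"))     # caught by the specific deny
print(evaluate(acl, "10.0.9.9"))     # falls through to the broader permit
print(evaluate(acl, "192.168.1.1"))  # matches nothing: implicit deny
```

Reversing the two entries would silently permit 10.0.5.7, which is exactly the "broader line above the intended line" trap the tips below warn about.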
Technical deep dive
How to think about this question
ACL questions test precision: source, destination, protocol, port and direction. A generally correct ACL can still fail if it is applied on the wrong interface or in the wrong direction.
Key Concepts to Remember
- Standard ACLs match source addresses.
- Extended ACLs can match source, destination, protocol and ports.
- The first matching ACL entry is used.
- There is usually an implicit deny at the end.
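The standard-versus-extended distinction in the list above can be sketched in a few lines of Python. The entry format and field names here are illustrative, not any vendor's syntax; `None` stands for "any":

```python
from ipaddress import ip_address, ip_network

def entry_matches(entry: dict, pkt: dict) -> bool:
    """Extended-ACL style match on source, destination, protocol, and port."""
    if ip_address(pkt["src"]) not in ip_network(entry["src"]):
        return False
    if ip_address(pkt["dst"]) not in ip_network(entry["dst"]):
        return False
    if entry["proto"] is not None and entry["proto"] != pkt["proto"]:
        return False
    if entry["port"] is not None and entry["port"] != pkt["dport"]:
        return False
    return True

# Hypothetical entry: permit TCP/443 from one subnet to one host.
entry = {"src": "10.1.0.0/16", "dst": "10.2.0.10/32", "proto": "tcp", "port": 443}

pkt = {"src": "10.1.4.4", "dst": "10.2.0.10", "proto": "tcp", "dport": 443}
print(entry_matches(entry, pkt))   # source, destination, protocol, port all line up

# The same flow on port 80 fails the port check, so a later entry
# (or the implicit deny) would apply instead.
pkt_http = dict(pkt, dport=80)
print(entry_matches(entry, pkt_http))
```

A standard ACL would use only the first of these four checks. Direction is deliberately absent from the entry itself: inbound versus outbound is a property of where the ACL is applied, which is why an entry that is correct in isolation can still fail on the wrong interface.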
Exam Day Tips
- Check inbound versus outbound direction.
- Read the ACL from top to bottom.
- Look for a broader permit or deny above the intended line.
Related practice questions
Related practice-question pages
Use these pages to review the topic behind this question. This is how one missed question becomes focused revision.
- SAA-C03 VPC practice questions
- SAA-C03 S3 lifecycle policy questions
- SAA-C03 RDS Multi-AZ questions
- SAA-C03 IAM policy practice questions
- SAA-C03 Route 53 failover questions
- SAA-C03 CloudFront practice questions
- SAA-C03 NAT gateway questions
- SAA-C03 VPC endpoint questions
- SAA-C03 Auto Scaling practice questions
- SAA-C03 disaster recovery questions
- SAA-C03 high availability questions
- SAA-C03 cost optimization questions
More questions from this exam
Keep practising from the same exam bank, or move into a focused topic page if this question exposed a weak area.
Question 1
A company is designing hub-and-spoke networking. Spoke VNets must use a central Azure Firewall for outbound internet traffic. Which two configurations are required?
Question 2
A company is designing private access to a PaaS database from workloads in a VNet. The database should not be reachable over its public endpoint. What should be recommended?
Question 3
A data platform must support analytical queries over petabytes of files in a data lake, while preserving hierarchical namespaces and fine-grained ACLs. Which storage service should you design around?
Question 4
A database workload has an RPO of 15 minutes and an RTO of 4 hours. Cost is more important than near-zero data loss. Which design is usually more appropriate than synchronous multi-region replication?
Question 5
A hub-and-spoke Azure network must centralize outbound inspection and still allow spokes to resolve private endpoint DNS names. Which two components are commonly required? (Choose 2.)
Question 6
A multinational company uses Microsoft Entra ID and several Azure subscriptions. Security administrators need to review privileged role assignments every month and require justification for continued access. Which design should be recommended?
FAQ
Questions learners often ask
What does this AZ-305 question test?
It tests whether you can identify the Azure storage service that provides a hierarchical namespace, POSIX-like ACLs, and Blob API compatibility for petabyte-scale analytics with Spark and Hive.
What is the correct answer to this question?
The correct answer is: Azure Data Lake Storage Gen2 — Azure Data Lake Storage Gen2 (ADLS Gen2) is built on Azure Blob Storage and provides a hierarchical namespace, ACLs, and full integration with Hadoop and Spark. It is designed for large-scale analytics. Standard Blob Storage does not offer a hierarchical namespace or POSIX permissions. Azure NetApp Files is a file share service, not optimized for big data analytics using Blob APIs. Azure HPC Cache accelerates access but is not a primary storage solution.
What should I do if I get this AZ-305 question wrong?
Review the explanation, then try more questions from the same exam bank and focus on understanding why the wrong options are tempting.