
You need to design and model a database by using a graphical tool that supports project-oriented offline database development. What should you use?
SSDT is designed for project-oriented database development. It lets you model and develop a database schema offline in a SQL Server Database Project, validate/build the schema, and produce deployment artifacts like DACPACs. It integrates well with source control and CI/CD pipelines, making it the best match for “graphical tool” plus “offline, project-based” database development for SQL Server and Azure SQL.
SSMS is primarily an administrative and query tool for SQL Server and Azure SQL. It excels at connected tasks such as running T-SQL queries, configuring security, monitoring, backups, and troubleshooting. While you can script objects and do some design work, it is not centered on offline, project-based development with build validation and DACPAC-based deployments.
Azure Databricks is a managed Apache Spark platform used for analytics workloads, data engineering, and machine learning with notebooks and distributed processing. It is not intended for relational database schema modeling or offline database project development. Databricks may interact with databases, but it doesn’t provide a database-project modeling experience like SSDT.
Azure Data Studio is a cross-platform SQL editor and management tool with extensions (including some schema and database tooling). However, it is mainly used for querying and managing databases in a connected manner and does not provide the classic SSDT-style database project workflow (offline schema build/validation and DACPAC-centric deployment) as its primary capability.
Core concept: The question is about designing and modeling a relational database using a graphical tool that supports project-oriented, offline database development. This points to a “database project” workflow where schema objects (tables, views, stored procedures) are treated as source-controlled artifacts and can be developed/validated without being continuously connected to a live database.
Why the answer is correct: Microsoft SQL Server Data Tools (SSDT) is the Microsoft tooling specifically built for project-based database development. In SSDT, you create a SQL Server Database Project where the database schema is represented as files in a project. You can design schema using designers and T-SQL scripts, validate builds (schema compilation), and generate deployment artifacts (e.g., DACPAC) for consistent deployments to SQL Server or Azure SQL Database. This aligns exactly with “project-oriented offline database development.”
Key features and best practices: SSDT supports schema compare, data-tier application (DACPAC) creation, and deployment/publish profiles. This enables DevOps practices: version control, repeatable builds, and automated deployments (CI/CD). From an Azure Well-Architected Framework perspective, this improves operational excellence and reliability by reducing configuration drift and enabling controlled releases. It also supports pre-deployment validation to catch errors early (shift-left).
Common misconceptions: SSMS and Azure Data Studio are excellent interactive management/query tools, but they are primarily connected, live-database tools and are not centered on offline, project-based modeling with build validation and DACPAC outputs. Azure Databricks is an analytics and data engineering platform (Spark-based) and not a relational database modeling tool.
Exam tips: When you see keywords like “database project,” “offline development,” “schema as code,” “DACPAC,” or “project-oriented,” think SSDT. When you see “manage server,” “run queries,” “backup/restore,” think SSMS. When you see “cross-platform editor,” “extensions,” “lightweight management,” think Azure Data Studio. When you see “Spark notebooks,” “big data processing,” think Databricks.
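The “schema as code” idea behind SSDT can be made concrete: in a SQL Server Database Project, each object lives as a T-SQL file under source control, and SSDT validates the whole set at build time before packaging it into a DACPAC. A minimal sketch of one such file (the table, columns, and file path are illustrative, not from the question):

```sql
-- dbo/Tables/Customer.sql — one object per file in a database project.
-- SSDT compiles this offline (no live database needed), flags errors
-- at build time, and packages the project into a DACPAC for deployment.
CREATE TABLE [dbo].[Customer]
(
    [CustomerId]  INT           NOT NULL PRIMARY KEY,
    [DisplayName] NVARCHAR(100) NOT NULL,
    [CreatedAt]   DATETIME2     NOT NULL
        CONSTRAINT [DF_Customer_CreatedAt] DEFAULT (SYSUTCDATETIME())
);
```

Publishing the built DACPAC (for example, from a CI/CD pipeline) then brings a target SQL Server or Azure SQL database in line with this source-controlled definition.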
Want to practice all questions on the go?
Download Cloud Pass for free — includes practice tests, progress tracking & more.
What is a benefit of hosting a database on Azure SQL managed instance as compared to an Azure SQL database?
Built-in high availability is not unique to Azure SQL Managed Instance. Both Azure SQL Database and Managed Instance include built-in HA as part of the PaaS service (replication within a region, automatic failover handling, and service-managed patching). While the implementation details can differ, HA is a shared benefit and therefore not the best comparative advantage for MI in this question.
Azure SQL Managed Instance supports multiple databases within the same instance and provides more native SQL Server-like behavior for cross-database queries (for example, using three-part names) and related transactional patterns. Azure SQL Database is primarily scoped to a single database, and cross-database querying is not natively available in the same way. This instance-level, multi-database capability is a classic differentiator for MI.
System-initiated automatic backups are provided by both Azure SQL Database and Azure SQL Managed Instance. In both services, backups are managed by the platform with configurable retention (including long-term retention options). Because this is a standard PaaS feature across the Azure SQL family, it is not a specific benefit of Managed Instance compared to Azure SQL Database.
Encryption at rest is supported by both Azure SQL Database and Azure SQL Managed Instance, typically through Transparent Data Encryption (TDE) enabled by default. Both services also integrate with Azure Key Vault for customer-managed keys in many scenarios. Since encryption at rest is a baseline security capability for both offerings, it is not a differentiator favoring Managed Instance.
Core concept: This question tests the difference between Azure SQL Database (single database / elastic pool) and Azure SQL Managed Instance (MI). Both are PaaS offerings based on the SQL Server engine, but MI is designed for near-100% SQL Server instance compatibility to simplify “lift-and-shift” migrations from on-premises SQL Server.
Why the answer is correct: A key benefit of Azure SQL Managed Instance over Azure SQL Database is native support for cross-database queries and transactions within the same managed instance. In SQL Server and Managed Instance, you can have multiple user databases in one instance and use three-part naming (database.schema.object) and certain cross-database transactional patterns more naturally. Azure SQL Database is scoped to a single database (or an elastic pool of isolated databases) and does not provide the same instance-level, multi-database behavior; cross-database querying is not natively supported in the same way (you typically use external data sources, elastic query patterns, or application-level joins).
Key features and best practices: Managed Instance provides instance-scoped features such as SQL Agent, Database Mail, linked servers (with limitations), and easier migration of apps that assume an “instance” with multiple databases. This aligns with Azure Well-Architected Framework principles: it can improve operational efficiency and reduce migration risk (Reliability and Operational Excellence) when an application depends on instance-level capabilities.
Common misconceptions: Built-in high availability, automatic backups, and encryption at rest are benefits of both Azure SQL Database and Managed Instance. These are core PaaS capabilities across the Azure SQL family, so they are not differentiators.
Exam tips: For DP-900, remember: Azure SQL Database is best when you want a fully managed single database with simple scaling; Azure SQL Managed Instance is best when you need high SQL Server compatibility and instance-level features (especially multiple databases with cross-database capabilities). When you see “cross-database queries/transactions” or “SQL Agent,” think Managed Instance.
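The three-part-name behavior that differentiates Managed Instance can be sketched in T-SQL (the database and table names here are hypothetical). On a managed instance, both user databases live in the same instance, so this query runs natively; in Azure SQL Database you would instead need elastic query/external tables or an application-level join:

```sql
-- Cross-database join on SQL Server / Azure SQL Managed Instance,
-- using three-part names (database.schema.object) — both databases
-- are hosted in the same instance.
SELECT o.OrderId, o.OrderDate, c.CustomerName
FROM SalesDb.dbo.Orders AS o
JOIN CrmDb.dbo.Customers AS c
    ON c.CustomerId = o.CustomerId;
```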
You need to store data in Azure Blob storage for seven years to meet your company's compliance requirements. The retrieval time of the data is unimportant. The solution must minimize storage costs. Which storage tier should you use?
Archive is the best choice for long-term retention with minimal storage cost when data is rarely accessed and retrieval latency can be hours. It has the lowest per-GB storage price, but reading data requires rehydration to Hot/Cool and incurs higher access and rehydration costs. This matches compliance archiving scenarios where data is kept for years and retrieved only during audits or investigations.
Hot tier is optimized for frequently accessed data and provides the lowest latency and lowest read/access costs, but it has the highest ongoing storage cost. For seven-year retention where retrieval time is unimportant, paying Hot-tier storage rates would not minimize cost. Hot is appropriate for active datasets, content served to users, or data used regularly by applications.
Cool tier is designed for infrequently accessed data that still needs relatively quick access (typically milliseconds) and is cheaper to store than Hot. However, it is generally more expensive than Archive for multi-year retention and still assumes occasional reads. For a seven-year compliance archive where retrieval time is unimportant, Cool is not the lowest-cost option overall.
Core concept: This question tests Azure Blob Storage access tiers (Hot, Cool, Archive) and how to choose a tier based on access frequency, retrieval latency, and cost. Blob access tiers are designed to optimize the trade-off between storage cost and access (read) cost/latency.
Why the answer is correct: You must retain data for seven years for compliance, retrieval time is unimportant, and you must minimize storage costs. The Archive tier is specifically intended for long-term retention of rarely accessed data where you can tolerate hours-long retrieval times. Archive provides the lowest storage cost per GB among the tiers, which is the dominant factor for multi-year retention. Because access is rare and latency is not important, the higher rehydration (restore) time and higher access costs of Archive are acceptable.
Key features and best practices:
- Archive tier stores blobs offline; to read them you must rehydrate to Hot or Cool, which can take hours. This aligns with “retrieval time unimportant.”
- Cost model: Archive minimizes ongoing storage cost but increases retrieval and early deletion costs. For compliance archives, retrieval is typically infrequent, so total cost is usually lowest.
- Plan for lifecycle management: use Azure Blob Lifecycle Management rules to automatically move blobs from Hot/Cool to Archive after a period, and to enforce retention patterns.
- Consider governance: use immutability policies (WORM) and legal holds if compliance requires tamper resistance; these work with Blob Storage and are commonly paired with long retention.
- From an Azure Well-Architected Framework cost optimization perspective, selecting Archive for infrequently accessed long-lived data is a primary lever to reduce ongoing spend.
Common misconceptions: Many assume Cool is best for “long-term” storage. Cool is cheaper than Hot but is still intended for data accessed occasionally (e.g., monthly) and has lower latency than Archive. For seven-year retention with unimportant retrieval time, Cool typically costs more over time than Archive.
Exam tips:
- Hot: frequent access, lowest access cost, highest storage cost.
- Cool: infrequent access (days/weeks), lower storage cost than Hot, higher access cost, minimum retention period.
- Archive: rare access (months/years), lowest storage cost, highest access cost, and hours-long rehydration.
If the question says “retrieval time unimportant” and “minimize storage cost,” think Archive.
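The lifecycle management approach mentioned above is defined as a JSON policy on the storage account. A hedged sketch of such a policy (the rule name is illustrative, and seven years is approximated here as 2,555 days — verify the exact retention math against your compliance requirement):

```json
{
  "rules": [
    {
      "name": "archive-then-delete-after-7-years",
      "enabled": true,
      "type": "Lifecycle",
      "definition": {
        "filters": { "blobTypes": [ "blockBlob" ] },
        "actions": {
          "baseBlob": {
            "tierToArchive": { "daysAfterModificationGreaterThan": 0 },
            "delete": { "daysAfterModificationGreaterThan": 2555 }
          }
        }
      }
    }
  ]
}
```

The policy moves new block blobs to Archive and deletes them once the retention window has passed, so the low Archive storage rate applies for essentially the whole seven years.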
HOTSPOT - To complete the sentence, select the appropriate option in the answer area. Hot Area:
Relational data uses ______ to enforce relationships between different tables.
Correct answer: C (keys). Relational data uses keys to enforce relationships between different tables. Specifically, a primary key uniquely identifies each row in a table, and a foreign key in another table references that primary key. The database engine enforces these constraints to maintain referential integrity (for example, preventing an Orders row from referencing a non-existent Customer). This is a defining characteristic of relational databases and is central to how joins work across tables.
Why the other options are wrong:
- A (collections) is a non-relational concept commonly associated with document databases (for example, Azure Cosmos DB API for MongoDB), not relational tables.
- B (columns) define attributes/fields within a table, but columns alone do not enforce relationships; the relationship enforcement comes from key constraints defined on columns.
- D (partitions) relate to performance and scalability (splitting data across storage/compute boundaries). Partitioning can improve query performance and manageability, but it does not inherently enforce relationships between tables.
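The primary key / foreign key relationship described above can be sketched in T-SQL (the Customers/Orders tables are illustrative, not part of the question):

```sql
-- Referential integrity via keys: every Orders.CustomerId must match
-- an existing Customers.CustomerId, or the engine rejects the row.
CREATE TABLE dbo.Customers
(
    CustomerId INT           NOT NULL PRIMARY KEY,
    Name       NVARCHAR(100) NOT NULL
);

CREATE TABLE dbo.Orders
(
    OrderId    INT NOT NULL PRIMARY KEY,
    CustomerId INT NOT NULL
        CONSTRAINT FK_Orders_Customers
        REFERENCES dbo.Customers (CustomerId)
);

-- This INSERT fails with a foreign key violation if no customer 42 exists:
-- INSERT INTO dbo.Orders (OrderId, CustomerId) VALUES (1, 42);
```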
You have an inventory management database that contains the following table. ProductName Quantity Product1 100 Product2 129 Product3 176 Which statement should you use in a SQL query to change the inventory quantity of Product1 to 270?
INSERT adds new rows to a table. Using INSERT for Product1 would create an additional row (duplicate product) rather than changing the existing Quantity value. INSERT is appropriate when the product does not exist yet and you want to add it as a new record. In exam questions, “change” or “modify existing row” points away from INSERT and toward UPDATE.
MERGE combines INSERT/UPDATE (and sometimes DELETE) logic in a single statement by matching source and target rows. It’s useful for upsert scenarios (e.g., load data from staging: update if matched, insert if not). However, it’s overkill here because the row for Product1 already exists and the requirement is a straightforward modification, best handled with UPDATE.
UPDATE is the correct DML statement to modify existing data in a table. You use SET to specify the new value and a WHERE clause to target the correct row, such as WHERE ProductName = 'Product1'. This directly changes Product1’s Quantity from 100 to 270 without creating new rows. It aligns with standard SQL and is commonly tested in DP-900.
CREATE is a DDL statement used to create database objects such as databases, tables, views, and indexes. It does not modify existing data within a table. If the question were about creating the inventory table structure, CREATE TABLE would be relevant. For changing a value in an existing row, CREATE is not applicable.
Core concept: This question tests fundamental relational database DML (Data Manipulation Language) operations in SQL—specifically how to modify existing rows in a table. In relational databases (including Azure SQL Database, SQL Server on Azure VM, and Azure SQL Managed Instance), changing a value in an existing record is done with the UPDATE statement.
Why the answer is correct: You already have a row for Product1 with Quantity = 100. The requirement is to change that existing value to 270. UPDATE is designed to modify one or more columns in rows that already exist, typically constrained by a WHERE clause to target the correct row(s). A correct pattern would be: UPDATE Inventory SET Quantity = 270 WHERE ProductName = 'Product1'; The WHERE clause is critical; without it, you risk updating every product’s quantity, which is a common exam and real-world pitfall.
Key features / best practices: UPDATE is transactional in relational systems: it can be committed or rolled back, supporting data integrity. In production, you typically ensure ProductName is unique (via a primary key or unique constraint) to avoid accidentally updating multiple rows. From an Azure Well-Architected Framework perspective (Reliability and Security), use constraints, transactions, and least-privilege access (only grant UPDATE permissions to appropriate roles) to reduce risk.
Common misconceptions: INSERT is for adding new rows, not modifying existing ones. MERGE can perform “upsert” logic (insert/update/delete based on matching), but it’s unnecessary when you know the row exists and you only need a simple change. CREATE is DDL (Data Definition Language) for defining objects like tables, not changing data. MERGE is typically used when you need conditional insert/update in one statement (upsert).
Exam tips: For DP-900, remember the CRUD mapping: Create=INSERT, Read=SELECT, Update=UPDATE, Delete=DELETE. If the question says “change an existing value,” think UPDATE. If it says “add a new record,” think INSERT. If it says “create a table/database,” think CREATE.
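The UPDATE pattern from the explanation, with the guard rails it mentions (a WHERE clause plus a transaction). The table name Inventory is assumed, as in the explanation; the @@ROWCOUNT check is an extra safety sketch, not part of the question:

```sql
BEGIN TRANSACTION;

UPDATE dbo.Inventory
SET Quantity = 270
WHERE ProductName = 'Product1';  -- WHERE is critical: omit it and every row changes

-- Sanity check before committing: exactly one row should have been affected.
IF @@ROWCOUNT = 1
    COMMIT TRANSACTION;
ELSE
    ROLLBACK TRANSACTION;
```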
You need to ensure that users use multi-factor authentication (MFA) when connecting to an Azure SQL database. Which type of authentication should you use?
Service principal authentication uses an Entra ID application identity (app registration) and is typically non-interactive. MFA is designed for interactive user sign-ins, not for service principals. While service principals can authenticate to Entra ID using secrets/certificates, you don’t “require MFA” for them in the same way; you secure them via credential hygiene, managed identities, and Conditional Access controls for workload identities where applicable.
Azure Active Directory (Microsoft Entra ID) authentication is the only option listed that can directly enforce MFA for Azure SQL Database connections. It integrates SQL with Entra ID identities and supports Conditional Access policies, including requiring MFA. Users connect using Entra ID authentication methods (for example, Universal with MFA), and the MFA challenge is handled by Entra ID during token issuance.
SQL authentication relies on SQL logins (username/password) managed by the SQL engine. Because authentication does not go through Entra ID, you cannot apply Entra ID Conditional Access policies such as MFA requirements. You can improve security with strong passwords, rotation, and Azure SQL protections, but MFA enforcement is not available for SQL authentication.
Certificate authentication is not a standard end-user authentication method for Azure SQL Database connections. Certificates are commonly used to authenticate applications to Entra ID (as a credential for a service principal) or for TLS encryption, but they do not provide an MFA mechanism for user sign-in to Azure SQL. MFA enforcement is tied to Entra ID user authentication flows.
Core Concept: This question tests how to enforce multi-factor authentication (MFA) for connections to Azure SQL Database. MFA is an identity control provided by Microsoft Entra ID (formerly Azure Active Directory). Azure SQL Database supports multiple authentication methods, but only Entra ID-based sign-in can directly leverage Conditional Access policies, including MFA requirements.
Why the Answer is Correct: Azure Active Directory (Entra ID) authentication is the correct choice because it integrates Azure SQL Database with Entra ID identities (users, groups, and managed identities) and supports Conditional Access. By using Conditional Access, you can require MFA when users authenticate to the database (for example, when connecting via SSMS, Azure Data Studio, or application sign-in flows that use interactive Entra ID authentication). SQL authentication cannot enforce MFA because it is username/password stored and validated by SQL, not by Entra ID.
Key Features / How It’s Enforced:
1) Configure an Entra ID admin for the Azure SQL logical server.
2) Create contained database users mapped to Entra ID identities (CREATE USER FROM EXTERNAL PROVIDER).
3) Use Entra ID authentication methods (e.g., “Azure Active Directory - Universal with MFA” in SSMS/Azure Data Studio).
4) Apply Entra ID Conditional Access policies to require MFA for the relevant users/apps and cloud apps (Azure SQL Database).
This aligns with Azure Well-Architected Framework security principles: centralized identity, strong authentication, and conditional access controls.
Common Misconceptions:
- “Service principal authentication” sounds like it might support MFA, but service principals are non-interactive identities; MFA is for interactive user sign-ins. For workloads, you typically use managed identities/certificates/secrets and restrict access via least privilege and network controls, not MFA.
- “Certificate authentication” is not a primary Azure SQL Database authentication mode for end users; certificates are used more commonly for app/service authentication to Entra ID, not direct SQL user MFA.
- “SQL authentication” is familiar and simple, but it cannot be governed by Entra ID Conditional Access and therefore cannot require MFA.
Exam Tips: If the requirement is “enforce MFA,” think “Entra ID authentication + Conditional Access.” For Azure SQL, MFA is achieved through Entra ID sign-in (interactive) rather than SQL logins. Also remember: DP-900 focuses on conceptual understanding—link MFA to identity provider capabilities (Entra ID), not database-native passwords.
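Creating a contained database user mapped to an Entra ID identity (the CREATE USER FROM EXTERNAL PROVIDER step mentioned above) looks like this in T-SQL; the user principal name and role assignment are illustrative:

```sql
-- Run in the target database while connected as the Entra ID admin.
-- This maps an Entra ID user to a contained database user. The MFA
-- challenge itself is enforced by Entra ID / Conditional Access at
-- sign-in (token issuance), not by the database engine.
CREATE USER [alice@contoso.com] FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER [alice@contoso.com];
```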
HOTSPOT - For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area:
Platform as a service (PaaS) database offerings in Azure provide built-in high availability.
Yes. A defining feature of Azure PaaS database offerings is built-in high availability (HA) provided by the service. For example, Azure SQL Database includes service-managed HA with replicas and automatic failover within a region, and can add zone redundancy in supported regions/tier configurations. Azure Cosmos DB replicates data within a region and can be configured for multi-region replication with automatic failover. Azure Database for PostgreSQL/MySQL (Flexible Server) supports zone-redundant HA options in many regions. The important DP-900 point is that HA is not something you build by clustering VMs yourself (as in IaaS); it is integrated into the managed service and exposed through configuration choices (tiers, zone redundancy, geo-replication). Therefore, the statement is true.
Platform as a service (PaaS) database offerings in Azure provide configurable scaling options.
Yes. PaaS database offerings in Azure provide configurable scaling options, typically through service tiers and compute models. Examples include scaling Azure SQL Database by changing service objectives (DTU/vCore), using serverless auto-scale for compute, or scaling out read workloads with readable secondaries in certain offerings. Azure Cosmos DB allows scaling throughput (RU/s) manually or via autoscale, and scaling globally by adding regions. Azure Database for PostgreSQL/MySQL supports scaling compute and storage (with some constraints depending on tier/region). While the exact mechanics differ per service, the exam concept is that PaaS is designed for elasticity: you can adjust capacity without managing VM sizes, disks, or cluster nodes directly. Hence the statement is true.
Platform as a service (PaaS) database offerings in Azure reduce the administrative overhead for managing hardware.
Yes. PaaS databases reduce administrative overhead for managing hardware because Microsoft operates the underlying infrastructure. You do not procure, patch, or replace physical servers; you also typically avoid managing guest OS updates, many platform patches, and much of the backup/restore plumbing. Instead, you select a service tier/size, configure logical settings (network access, authentication, encryption, backups/retention), and focus on data and workload optimization. This aligns with the shared responsibility model: the provider manages the hardware and much of the platform, which improves Operational Excellence by reducing routine maintenance tasks and enabling automation. In contrast, with IaaS (SQL Server on Azure VMs), you are responsible for VM sizing, OS patching, storage layout, clustering/HA setup, and more. Therefore, the statement is true.
HOTSPOT - To complete the sentence, select the appropriate option in the answer area. Hot Area:
A visualization that shows a university's current student enrollment versus the maximum capacity is an example of ______ analytics.
The visualization is an example of descriptive analytics because it summarizes and presents the current state of the university’s enrollment relative to a known threshold (maximum capacity). Descriptive analytics focuses on “what happened” and “what is happening now” by aggregating and visualizing existing data (often as KPIs, dashboards, and reports).
Why the other options are incorrect:
- Cognitive analytics (A) involves AI-driven understanding or reasoning, such as natural language processing, computer vision, or conversational bots. A simple enrollment vs. capacity chart does not require cognitive capabilities.
- Predictive analytics (C) would forecast future enrollment (e.g., predicting next semester’s enrollment or likelihood of exceeding capacity). The prompt describes current enrollment, not a forecast.
- Prescriptive analytics (D) would recommend actions (e.g., “open a new section,” “increase capacity,” or “cap admissions”) based on constraints and optimization. The visualization alone is not prescribing decisions; it is reporting status.
What are two characteristics of real-time data processing? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.
Incorrect. “Data is processed periodically” describes batch processing, where data is collected over a time interval and processed on a schedule (hourly, nightly, etc.). Even if the schedule is frequent, it is still not continuous event-by-event processing. In DP-900, periodic processing is a key indicator of batch analytics rather than real-time streaming.
Correct. Real-time processing is designed to minimize the time between event creation and insight/action. Low latency (often seconds or less) is expected so that dashboards, alerts, and automated responses can react quickly. This is a core differentiator from batch workloads, where longer delays are acceptable.
Incorrect. High latency being acceptable is characteristic of batch or offline processing, such as nightly ETL/ELT jobs, historical reporting, or large-scale transformations where immediacy is not required. Real-time systems specifically aim to reduce latency to support timely decisions and actions.
Correct. Real-time (stream) processing handles data continuously as it is created or received, rather than waiting to accumulate a batch. This event-driven approach enables near-immediate aggregation, filtering, enrichment, and detection of patterns (for example, fraud detection or IoT telemetry monitoring) as events flow through the system.
Core concept: Real-time data processing (often called stream processing) is an analytics workload pattern where data is processed continuously as events arrive, rather than being stored first and processed later in batches. In Azure, this commonly maps to services like Azure Event Hubs or IoT Hub for ingestion, and Azure Stream Analytics, Azure Functions, or Spark Structured Streaming in Azure Databricks/Synapse for processing.
Why the answer is correct: Two defining characteristics of real-time processing are (1) data is processed as it is created/arrives (continuous processing), and (2) low latency is expected (results are produced quickly, often in seconds or sub-seconds depending on the design). Therefore, options D and B match the core definition of real-time processing.
Key features and best practices: Real-time systems are designed around event streams, partitions, and scalable consumers. They often use windowing (tumbling/hopping/sliding windows) to aggregate events over time, handle late/out-of-order events, and provide near-instant insights for dashboards, alerting, anomaly detection, and operational automation. From an Azure Well-Architected Framework perspective, you design for Reliability (checkpointing, retries, exactly-once/at-least-once semantics), Performance Efficiency (partitioning, autoscale, backpressure handling), and Cost Optimization (right-sizing streaming units/throughput units and retention).
Common misconceptions: Many learners confuse “real-time” with “fast batch.” Batch can be frequent (every minute) but is still periodic and not truly event-driven. Also, “high latency is acceptable” describes batch/offline analytics, not real-time.
Exam tips: For DP-900, remember the simple contrast:
- Batch analytics: periodic processing, minutes-to-hours latency acceptable.
- Real-time/stream analytics: continuous processing, low latency expected.
If you see wording like “as events arrive,” “immediate insights,” “alerting,” or “low latency,” it’s real-time.
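The windowing idea mentioned above can be sketched in Azure Stream Analytics’ SQL-like query language (the input/output names and the event-time column are illustrative):

```sql
-- Tumbling window: count events per device over fixed 60-second windows,
-- continuously as events arrive — not on a periodic batch schedule.
SELECT
    DeviceId,
    COUNT(*) AS EventCount,
    System.Timestamp() AS WindowEnd
INTO output
FROM input TIMESTAMP BY EventEnqueuedUtcTime
GROUP BY DeviceId, TumblingWindow(second, 60)
```

Each 60-second window closes and emits its aggregate within moments of the window ending, which is the low-latency behavior the correct options describe.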
HOTSPOT - For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area:
Azure Synapse Analytics scales storage and compute independently.
Yes. In Azure Synapse Analytics dedicated SQL pool, compute and storage are decoupled. Compute is provisioned and scaled using DWUs (Data Warehouse Units), while data is stored persistently in Azure storage. This means you can scale compute up or down to meet performance needs without having to resize or migrate the underlying stored data. This design supports performance tuning (scale up during peak ETL/ELT or reporting windows) and cost optimization (scale down when demand is low) while keeping the data intact. The alternative (No) would imply that increasing performance requires increasing storage or that storage changes force compute changes, which is not how dedicated SQL pool is designed. Note: this statement is most aligned with the dedicated SQL pool architecture; serverless SQL pool is also inherently decoupled because you pay per query over data in storage.
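The independent compute scaling described above can be done with a single T-SQL statement against the logical server’s master database (the pool name and target DWU level here are illustrative):

```sql
-- Run while connected to the master database of the logical server.
-- Changes only the compute tier (DWUs) of the dedicated SQL pool;
-- the data persisted in storage is untouched.
ALTER DATABASE MyDedicatedPool
MODIFY (SERVICE_OBJECTIVE = 'DW200c');
```

The same statement with a lower service objective scales compute back down after a peak ETL or reporting window, which is the cost-optimization lever the explanation mentions.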
Azure Synapse Analytics can be paused to reduce compute costs
Yes. A dedicated SQL pool in Azure Synapse Analytics can be paused. When paused, compute resources are released, which stops compute (DWU) charges, helping reduce costs during idle periods (for example, nights/weekends in dev/test or batch-oriented workloads). Importantly, pausing does not delete data: storage remains and continues to incur storage charges. Resuming brings compute back online so queries can run again. Answering No would be incorrect because pausing is a well-known cost-control feature of dedicated SQL pool and is frequently tested. A common nuance: pausing applies to the dedicated SQL pool (provisioned data warehouse). Serverless SQL pool doesn’t “pause” because there is no provisioned compute to pause; it is pay-per-query.
An Azure Synapse Analytics data warehouse has a fixed storage capacity
No. An Azure Synapse Analytics data warehouse (dedicated SQL pool) does not have a fixed storage capacity in the sense of a small, preallocated, unchangeable size. Storage is managed separately from compute and can grow as you load more data, subject to service limits and any organizational constraints (such as quotas, governance policies, or cost controls). This is different from some services/tiers where you choose a fixed maximum database size up front. In Synapse dedicated SQL pool, you primarily choose compute capacity (DWUs) and can scale it; storage persists and expands with data. Selecting Yes would incorrectly suggest you must predefine a fixed storage size that cannot expand, which does not reflect the typical Synapse dedicated SQL pool model.