PL-300: Microsoft Power BI Data Analyst

363+ past exam questions (with AI-verified answers)

Free questions & answers: real past exam questions
AI-powered explanations: detailed answer walkthroughs
Real exam-style questions: closely modeled on the actual exam
Browse the 363+ questions

AI-Powered

Triple AI-Verified Answers and Explanations

Every PL-300: Microsoft Power BI Data Analyst answer is cross-verified by three leading AI models to ensure maximum accuracy. Detailed explanations for each answer option and in-depth question analysis are provided.

GPT Pro
Claude Opus
Gemini Pro
Detailed explanations for each answer option
In-depth question analysis
Three-model consensus accuracy

Exam Domains

Prepare the Data: 27% of the exam
Model the Data: 27% of the exam
Visualize and Analyze the Data: 28% of the exam
Manage and Secure Power BI: 18% of the exam

Practice Questions

Question 1

You have a Microsoft SharePoint Online site that contains several document libraries. One of the document libraries contains manufacturing reports saved as Microsoft Excel files. All the manufacturing reports have the same data structure. You need to use Power BI Desktop to load only the manufacturing reports to a table for analysis. What should you do?

Question 2

DRAG DROP - You have a Microsoft Excel workbook that contains two sheets named Sheet1 and Sheet2. Sheet1 contains the following table named Table1:

Products: abc, def, ghi, jkl, mno

Sheet2 contains the following table named Table2:

Products: abc, xyz, tuv, mno, pqr, stu

You need to use Power Query Editor to combine the products from Table1 and Table2 into the following table, which has one column containing no duplicate values:

Products: abc, xyz, tuv, mno, pqr, stu, def, ghi, jkl

Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Select and Place:

Part 1:

Select the correct answer(s) in the question image (image not reproduced here).

To create one Products column from both tables, you must first load both Excel tables into Power Query. Next, use Append because append stacks rows from tables with the same schema; Merge is incorrect because it performs a join and is used to add columns based on matching keys. After appending, remove duplicates on the combined query so repeated values such as abc and mno appear only once. Removing errors is irrelevant because the scenario does not mention any errors, and removing duplicates before appending would not eliminate duplicates that exist across both tables.
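The three steps above can be sketched in Power Query M; the query names Table1 and Table2 are assumptions standing in for the loaded Excel tables:

```m
// Minimal sketch, assuming Table1 and Table2 are already loaded queries
// with a single "Products" column each:
let
    // Stack the rows of Table2 beneath Table1 (same schema, so Append applies)
    Appended = Table.Combine({Table1, Table2}),
    // Keep only the first occurrence of each value in the Products column
    Deduplicated = Table.Distinct(Appended, {"Products"})
in
    Deduplicated
```

Running remove-duplicates after the append is what guarantees that values present in both tables, such as abc and mno, survive only once.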

Question 3

You are creating a report in Power BI Desktop. You load a data extract that includes a free text field named col1. You need to analyze the frequency distribution of the string lengths in col1. The solution must not affect the size of the model. What should you do?

Question 4

HOTSPOT - You have a Power BI model that contains a table named Sales and a related date table. Sales contains a measure named Total Sales. You need to create a measure that calculates the total sales from the equivalent month of the previous year. How should you complete the calculation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:

Part 1:

Sales Previous Year = ______

The correct function to start this measure is CALCULATE because you need to re-calculate an existing measure ([Total Sales]) under a modified filter context (the prior-year dates). CALCULATE is the core DAX function for context transition and filter manipulation, which is exactly what time intelligence requires. Why the others are wrong: - EVALUATE is used in DAX queries (for example, in DAX Studio or SSMS) to return a table result; it is not used to define measures in Power BI. - SUM aggregates a column directly (for example, SUM(Sales[Amount])). Since you already have a measure [Total Sales], you should reuse it rather than re-summing a column. - SUMX is an iterator that evaluates an expression row-by-row over a table. It’s useful for calculated row logic, but it’s unnecessary and less efficient for a simple prior-year version of an existing measure.

Part 2:

[Total Sales], ______(

SAMEPERIODLASTYEAR is the best match for “equivalent month of the previous year.” When the current filter context is a month (for example, March 2025), SAMEPERIODLASTYEAR returns the set of dates for March 2024, preserving the shape of the current period and shifting it back one year. Why the others are wrong: - DATESMTD returns dates from the start of the month to the current date in context (month-to-date), not the same month last year. - PARALLELPERIOD can shift periods (for example, -1 year), but the exam typically expects SAMEPERIODLASTYEAR for standard prior-year comparisons, and it’s more explicit for this scenario. - TOTALMTD is a wrapper that calculates a month-to-date total, which is not requested here (you want total sales for the equivalent month, not MTD).

Part 3:

You should pass the date column from the Date table into SAMEPERIODLASTYEAR, which is 'Date'[Date]. Time intelligence functions require a column of type Date (or DateTime) from a proper date table to generate the correct set of shifted dates. Why the others are wrong: - [Date] is ambiguous because it doesn't specify the table. In DAX, especially in models with multiple tables, you should fully qualify columns to avoid ambiguity and ensure the function uses the date dimension. - 'Date'[Month] is not appropriate because it is typically a text or numeric month attribute and does not uniquely identify days. SAMEPERIODLASTYEAR operates on a contiguous set of daily dates; using a month column would break the required granularity and can produce incorrect or invalid results. Putting it together, the intended measure pattern is: Sales Previous Year = CALCULATE([Total Sales], SAMEPERIODLASTYEAR('Date'[Date]))
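Written out in full, the assembled measure might look like this (the table name is wrapped in single quotes, the standard DAX escaping for table names that are reserved words or contain spaces):

```dax
Sales Previous Year =
CALCULATE (
    [Total Sales],                          -- reuse the existing measure as-is
    SAMEPERIODLASTYEAR ( 'Date'[Date] )     -- shift the current filter period back one year
)
```

In a matrix sliced by month, each cell then reports [Total Sales] evaluated over the equivalent month of the prior year.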

Question 5

You have a Microsoft Power BI data model that contains three tables named Orders, Date, and City. There is a one-to-many relationship between Date and Orders and between City and Orders. The model contains two row-level security (RLS) roles named Role1 and Role2. Role1 contains the following filter:

City[State Province] = "Kentucky"

Role2 contains the following filter:

Date[Calendar Year] = 2020

If a user is a member of both Role1 and Role2, what data will they see in a report that uses the model?

Want to solve every question on the go?

Download Cloud Pass for practice exams, study progress tracking, and more.

Question 6

You have a project management app that is fully hosted in Microsoft Teams. The app was developed by using Microsoft Power Apps. You need to create a Power BI report that connects to the project management app. Which connector should you select?

Question 7

You import a Power BI dataset that contains the following tables:
✑ Date
✑ Product
✑ Product Inventory
The Product Inventory table contains 25 million rows. A sample of the data is shown in the following table.

ProductKey | DateKey  | MovementDate | UnitCost | UnitsIn | UnitsOut | UnitsBalance
167        | 20101228 | 28-Dec-10    | 0.19     | 0       | 0        | 875
167        | 20101229 | 29-Dec-10    | 0.19     | 0       | 0        | 875
167        | 20110119 | 19-Jan-11    | 0.19     | 0       | 0        | 875
167        | 20110121 | 21-Jan-11    | 0.19     | 0       | 0        | 875
167        | 20110122 | 22-Jan-11    | 0.19     | 0       | 0        | 875

The Product Inventory table relates to the Date table by using the DateKey column. The Product Inventory table relates to the Product table by using the ProductKey column. You need to reduce the size of the data model without losing information. What should you do?

Question 8

HOTSPOT - You plan to create the Power BI model shown in the exhibit. (Click the Exhibit tab.)

(exhibit: model diagram not reproduced here)

The data has the following refresh requirements:
✑ Customer must be refreshed daily.
✑ Date must be refreshed once every three years.
✑ Sales must be refreshed in near real time.
✑ SalesAggregate must be refreshed once per week.

You need to select the storage modes for the tables. The solution must meet the following requirements:
✑ Minimize the load times of visuals.
✑ Ensure that the data is loaded to the model based on the refresh requirements.

Which storage mode should you select for each table? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:

Part 1:

Customer: ______

Customer should be set to Dual. Customer is a dimension table that refreshes daily (so Import is feasible), but the model also contains a near real-time fact table (Sales) that must be DirectQuery. If Customer were Import only, queries that slice the DirectQuery Sales table by Customer could require cross-storage joins and may reduce performance or cause more DirectQuery activity. Dual allows Customer to act as Import for queries that can be answered from cached data (fast visuals) and as DirectQuery when it must interact with the DirectQuery Sales table, improving filter propagation and reducing query complexity. DirectQuery for Customer would meet refresh needs but would unnecessarily slow many visuals; Import alone risks poorer performance/behavior in composite scenarios.

Part 2:

Date: ______

Date should be set to Dual. Date changes extremely infrequently (once every three years), so Import would easily meet the refresh requirement and would be fast. However, because Sales is DirectQuery, Date is used to filter that DirectQuery fact table (e.g., by DueDateKey). Setting Date to Dual is a best practice in composite models: it behaves as Import for most queries (fast slicing, cached dimension) but can switch to DirectQuery behavior when needed to efficiently join/filter the DirectQuery Sales table. DirectQuery for Date would add unnecessary runtime queries for a small, stable dimension. Import-only can work, but Dual is preferred to minimize visual load times and avoid cross-storage penalties when interacting with DirectQuery facts.

Part 3:

Sales: ______

Sales should be DirectQuery because it must be refreshed in near real time. Import mode cannot satisfy near real-time requirements unless you use very frequent refresh (which is limited by capacity/licensing and still not truly real time). DirectQuery queries the source at report time, reflecting the latest data with minimal delay, which aligns with the requirement. Dual is not appropriate for a large fact table that needs real-time behavior; Dual is primarily intended for dimension tables in composite models, not for the main detailed transactional fact. While DirectQuery can slow visuals, the model mitigates this by using an imported aggregation table (SalesAggregate) and Dual dimensions to accelerate common queries.

Part 4:

SalesAggregate: ______

SalesAggregate should be Import. It only needs to refresh weekly, which aligns perfectly with scheduled refresh into the in-memory model. Import provides the best performance for visuals because VertiPaq can answer many queries quickly, and with the aggregations feature Power BI can hit the imported aggregate table for summarized queries instead of sending DirectQuery requests to the detailed Sales table. DirectQuery would not help because there is no near real-time requirement for the aggregate and it would reduce performance. Dual is unnecessary for a fact/aggregate table; Dual is mainly beneficial for shared dimensions. Therefore, Import best meets both requirements: weekly refresh and minimized visual load times.

Question 9

DRAG DROP - You plan to create a report that will display sales data from the last year for multiple regions. You need to restrict access to individual rows of the data on a per-region basis by using roles. Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Select and Place:

Part 1:

Select the correct answer(s) in the question image (image not reproduced here).

A is correct because the required sequence for RLS is well-defined: 1) Import the data to Power BI Desktop (build the dataset/model). 2) Create a role definition (Modeling > Manage roles) using a DAX filter such as [Region] = "West" or a dynamic rule using USERPRINCIPALNAME() with a mapping table. 3) Publish the report (and dataset) to the Power BI service. 4) Assign users to the role in the Power BI service (Dataset > Security), ideally using Azure AD groups. The option “Add a filter to the report” is incorrect for this requirement because report filters are not security; they don’t prevent a user from accessing underlying rows if they have permissions, and they don’t enforce dataset-level restrictions. RLS roles are the correct mechanism when the question explicitly says “restrict access to individual rows… by using roles.”
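The role filters described above are plain DAX boolean expressions entered in Manage roles; a static rule and a dynamic variant might look like this (the Region table's column names and the user-mapping table are illustrative assumptions, not from the exam):

```dax
-- Static rule on a hypothetical Region table: members of the role see West only
[Region] = "West"

-- Dynamic rule on a hypothetical UserRegion mapping table:
-- each signed-in user sees only the rows mapped to their own UPN
[Email] = USERPRINCIPALNAME()
```

With the dynamic pattern, a single role serves all regions, and the mapping table (related to the fact data) controls which rows each user sees.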

Question 10

You have a Microsoft Power BI report. The size of the PBIX file is 550 MB. The report is accessed by using an app workspace in shared capacity on powerbi.com. The report uses an imported dataset that contains one fact table. The fact table contains 12 million rows. The dataset is scheduled to refresh twice a day, at 08:00 and 17:00. The report is a single page that contains 15 AppSource visuals and 10 default visuals. Users say that the report is slow to load the visuals when they access and interact with the report. You need to recommend a solution to improve the performance of the report. What should you recommend?


Question 11

HOTSPOT - You are creating a Microsoft Power BI imported data model to perform basket analysis. The goal of the analysis is to identify which products are usually bought together in the same transaction across and within sales territories. You import a fact table named Sales as shown in the exhibit. (Click the Exhibit tab.)

SalesRowID | ProductKey | OrderDateKey | OrderDate | CustomerKey | SalesTerritoryKey | SalesOrderNumber | SalesOrderLineNumber | OrderQuantity | LineTotal | TaxAmt | Freight | LastModified | AuditID
1 | 310 | 20101229 | 2010-12-29 00:00:00.000 | 21768 | 6 | SO43697 | 1 | 1 | 3578.27 | 286.2616 | 89.4568 | 2011-01-10 00:00:00.000 | 127
2 | 346 | 20101229 | 2010-12-29 00:00:00.000 | 28389 | 7 | SO43698 | 1 | 1 | 3399.99 | 271.9992 | 84.9998 | 2011-01-10 00:00:00.000 | 127
3 | 346 | 20101229 | 2010-12-29 00:00:00.000 | 25863 | 1 | SO43699 | 1 | 1 | 3399.99 | 271.9992 | 84.9992 | 2011-01-10 00:00:00.000 | 127
4 | 336 | 20101229 | 2010-12-29 00:00:00.000 | 14501 | 4 | SO43700 | 1 | 1 | 699.0982 | 55.9279 | 17.4775 | 2011-01-10 00:00:00.000 | 127
5 | 346 | 20101229 | 2010-12-29 00:00:00.000 | 11003 | 9 | SO43701 | 1 | 1 | 3399.99 | 271.9992 | 84.9998 | 2011-01-10 00:00:00.000 | 127
6 | 311 | 20101230 | 2010-12-30 00:00:00.000 | 27645 | 4 | SO43702 | 1 | 1 | 3578.27 | 286.2616 | 89.4568 | 2011-01-11 00:00:00.000 | 127
7 | 310 | 20101230 | 2010-12-30 00:00:00.000 | 16624 | 9 | SO43703 | 1 | 1 | 3578.27 | 286.2616 | 89.4568 | 2011-01-11 00:00:00.000 | 127

The related dimension tables are imported into the model. Sales contains the columns described in the following table.

SalesRowID (Integer): ID of the row from the source system, which represents a unique combination of SalesOrderNumber and SalesOrderLineNumber
ProductKey (Integer): Surrogate key that relates to the product dimension
OrderDateKey (Integer): Surrogate key that relates to the date dimension, in the YYYYMMDD format
OrderDate (Datetime): Date and time an order was processed
CustomerKey (Integer): Surrogate key that relates to the customer dimension
SalesTerritoryKey (Integer): Surrogate key that relates to the sales territory dimension
SalesOrderNumber (Text): Unique identifier of an order
SalesOrderLineNumber (Integer): Unique identifier of a line within an order
OrderQuantity (Integer): Quantity of the product ordered
LineTotal (Decimal): Total sales amount of a line before tax
TaxAmt (Decimal): Amount of tax charged for the items on a specified line within an order
Freight (Decimal): Amount of freight charged for the items on a specified line within an order
LastModified (Datetime): The date and time that a row was last modified in the source system
AuditID (Integer): The ID of the data load process that last updated a row

You are evaluating how to optimize the model. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area:

Part 1:

The SalesRowID and AuditID columns can be removed from the model without impeding the analysis goals.

Yes. SalesRowID and AuditID are technical/operational columns that do not help identify products bought together. Basket analysis requires grouping lines into a transaction (SalesOrderNumber) and identifying the products in that transaction (ProductKey), then optionally slicing by territory/date/customer. SalesRowID is a source-system row identifier (unique combination of SalesOrderNumber and SalesOrderLineNumber). Since SalesOrderNumber and SalesOrderLineNumber already exist, SalesRowID is redundant for analysis and not needed for relationships. AuditID tracks the ETL/load process that last updated the row; it is useful for data engineering troubleshooting but not for analytical grouping, filtering, or measures related to co-purchase behavior. Removing both columns reduces model size and can improve refresh/query performance without impeding the stated analysis goals.

Part 2:

Both the OrderDateKey and OrderDate columns are necessary to perform the basket analysis.

No. Both OrderDateKey and OrderDate are not necessary for basket analysis. In a star schema, OrderDateKey is typically used to relate the Sales fact table to a Date dimension (which then provides year/quarter/month/day attributes for slicing). For basket analysis across time, that relationship is sufficient. The OrderDate (datetime) column is often redundant if you already have a proper Date dimension and do not need time-of-day granularity. Basket analysis is usually performed at the transaction level (SalesOrderNumber) and may be filtered by date, but that can be done via the Date dimension using OrderDateKey. You would keep OrderDate only if you specifically need the timestamp (hours/minutes) or if you lack a Date dimension/relationship. Given the prompt states related dimension tables are imported, OrderDateKey alone is enough.

Part 3:

The TaxAmt column must retain the current number of decimal places to perform the basket analysis.

No. TaxAmt does not need to retain the current number of decimal places to perform basket analysis. Basket analysis primarily evaluates which products co-occur in the same order; it relies on transaction identifiers and product identifiers, and sometimes quantities. Tax amount precision is not part of determining whether two products were bought together. Even if you were to include value-based metrics (e.g., total basket value), you could typically round currency values to a sensible precision (often 2 decimal places) without changing the co-occurrence results. In many basket-analysis models, TaxAmt is not required at all and could be removed to optimize the model. Therefore, retaining the exact current decimal precision is not a requirement for achieving the stated analysis goal.

Question 12

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You are modeling data by using Microsoft Power BI. Part of the data model is a large Microsoft SQL Server table named Order that has more than 100 million records. During the development process, you need to import a sample of the data from the Order table. Solution: From Power Query Editor, you import the table and then add a filter step to the query. Does this meet the goal?

Question 13

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You are modeling data by using Microsoft Power BI. Part of the data model is a large Microsoft SQL Server table named Order that has more than 100 million records. During the development process, you need to import a sample of the data from the Order table. Solution: You write a DAX expression that uses the FILTER function. Does this meet the goal?

Question 14

HOTSPOT - You have a Power BI report. You have the following tables.

Balances: contains daily records of closing balances for every active bank account. The closing balances appear for every day the account is live, including the last day.
Date: contains a record per day for the calendar years of 2000 to 2025. There is a hierarchy for financial year, quarter, month, and day.

You have the following DAX measure.

Accounts := CALCULATE ( DISTINCTCOUNT ( Balances[AccountID] ), LASTDATE ( 'Date'[Date] ) )

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area:

Part 1:

A table visual that displays the date hierarchy at the year level and the [Accounts] measure will show the total number of accounts that were live throughout the year.

At the year level, the filter context includes all dates in that year. LASTDATE('Date'[Date]) returns only the final date of that year (e.g., 31-Dec for calendar year, or the last day of the financial year if the hierarchy uses financial year boundaries). CALCULATE then evaluates DISTINCTCOUNT(Balances[AccountID]) only for that single last date. So the visual will show the number of accounts that were live on the last day of the year, not the number of accounts that were live throughout the year. “Live throughout the year” would require accounts to have Balances rows for every day in the year (or to meet a start/end date condition spanning the whole year), which this measure does not test. Therefore, the statement is false.

Part 2:

A table visual that displays the date hierarchy at the month level and the [Accounts] measure will show the total number of accounts that were live throughout the month.

At the month level, the filter context includes all days in the selected month. LASTDATE returns the last day of that month in the current context. The measure then counts distinct accounts present in Balances on that last day only. That result is not “the total number of accounts that were live throughout the month.” It is “accounts live on the last day of the month.” Accounts that were active earlier in the month but closed before month-end will be excluded, and accounts opened mid-month but still active at month-end will be included (even though they were not live for the full month). To compute “throughout the month,” you’d need logic ensuring the account appears on every day of the month (or spans the full month). Hence, No.
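For contrast, a measure that counts only accounts with a balance row on every day of the current period could be sketched as follows. This is an illustrative pattern, not part of the exam question, and it assumes Balances has a [Date] column related to 'Date'[Date]:

```dax
Accounts Live All Period :=
VAR DaysInPeriod = COUNTROWS ( 'Date' )        -- days in the current filter context
RETURN
    COUNTROWS (
        FILTER (
            VALUES ( Balances[AccountID] ),
            -- an account qualifies only if it has a balance row on every day
            CALCULATE ( DISTINCTCOUNT ( Balances[Date] ) ) = DaysInPeriod
        )
    )
```

Comparing this pattern with the original measure makes the distinction concrete: LASTDATE tests a single day, whereas "live throughout" requires coverage of every day in the period.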

Part 3:

A table visual that displays the date hierarchy at the day level and the [Accounts] measure will show the total number of accounts that were live that day.

At the day level, the filter context is already a single date (each row in the table visual corresponds to one day). In that context, LASTDATE('Date'[Date]) returns that same date. Therefore, CALCULATE applies a filter that effectively keeps the same single day. Because Balances contains a daily record for every account that is live on that day, DISTINCTCOUNT(Balances[AccountID]) for that date returns the number of accounts live that day. This matches the statement exactly. Therefore, Yes.

Question 15
(Select two)

You have a report that contains four pages. Each page contains slicers for the same four fields. Users report that when they select values in a slicer on one page, the selections are not persisted on other pages. You need to recommend a solution to ensure that users can select a value once to filter the results on all the pages. What are two possible recommendations to achieve this goal? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.


Question 16

DRAG DROP - You are using existing reports to build a dashboard that will be viewed frequently in portrait mode on mobile phones. You need to build the dashboard. Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Select and Place:

Part 1:

Pin items from the reports to the dashboard.

Yes. A dashboard in Power BI is composed of tiles, and those tiles typically come from existing report visuals (or live pages) that you pin in the Power BI service. Since the requirement is to use existing reports to build a dashboard, pinning is the foundational step that actually creates the dashboard content. Without pinning items, there is nothing to arrange in the dashboard (including in the mobile layout). Why not No: If you skip pinning, you would only be editing an empty dashboard or not have a dashboard at all. Even if a dashboard already exists, you still need to pin the specific visuals you want users to see frequently on mobile. Pinning is therefore required to meet the scenario’s goal of building a mobile-consumable dashboard from existing reports.

Part 2:

Open the dashboard.

Yes. After pinning items (which creates/populates the dashboard), you must open the dashboard to access its settings and edit experiences, including the Mobile view. The mobile layout editor is accessed from within the dashboard in the Power BI service. Why not No: You cannot edit a dashboard’s mobile view without being in the context of that dashboard. While you can pin directly from a report visual and choose an existing/new dashboard, the step of opening the dashboard is necessary to proceed to mobile-specific layout editing and to validate the final portrait phone experience.

Part 3:

Create a phone layout for the existing reports.

No. Creating a phone layout for existing reports is a report-level optimization done in Power BI Desktop (View > Mobile layout). That feature controls how report pages render in the Power BI mobile app, not how a dashboard’s tiles are arranged on a phone. Why not Yes: The question is specifically about building a dashboard from existing reports and having that dashboard viewed frequently on mobile phones in portrait mode. For dashboards, the correct feature is editing the dashboard Mobile view in the Power BI service. Report phone layout could still be useful if users also open the underlying reports on mobile, but it is not required to build and optimize the dashboard itself, so it is not part of the necessary sequence.

Part 4:

Edit the Dashboard mobile view.

Yes. To optimize a dashboard for portrait mode on mobile phones, you use the dashboard’s Mobile view editor in the Power BI service. This provides a dedicated canvas representing a phone screen where you choose which pinned tiles appear and how they are arranged for mobile consumption. Why not No: If you only rely on the default dashboard layout, the mobile experience may be suboptimal (tiny tiles, poor ordering, excessive scrolling). The requirement explicitly calls out frequent viewing on mobile in portrait mode, which strongly implies you should tailor the Mobile view to prioritize key KPIs and improve usability.

Part 5:

Rearrange, resize, or remove items from the mobile layout.

Yes. After entering the dashboard Mobile view editor, you must rearrange, resize, or remove items to create an effective portrait phone layout. This is the step where you implement the actual mobile-first design: placing the most important KPIs at the top, resizing tiles for readability, and removing nonessential visuals to reduce scrolling. Why not No: Simply opening Mobile view without adjusting the layout does not meet the requirement to build a dashboard that is optimized for frequent mobile viewing. The exam expects you to recognize that mobile optimization is an explicit design activity, not an automatic outcome of pinning tiles.

Question 17

You have a Power BI report. The report contains a visual that shows gross sales by date. The visual has anomaly detection enabled. No anomalies are detected. You need to increase the likelihood that anomaly detection will identify anomalies in the report. What should you do?

Question 18
(Select two)

You have a Power BI query named Sales that imports the columns shown in the following table.

(exhibit: column table not reproduced here)

Users only use the date part of the Sales_Date field. Only rows with a Status of Finished are used in analysis. You need to reduce the load times of the query without affecting the analysis. Which two actions achieve this goal? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.

Question 19

For the sales department at your company, you publish a Power BI report that imports data from a Microsoft Excel file located in a Microsoft SharePoint folder. The data model contains several measures. You need to create a Power BI report from the existing data. The solution must minimize development effort. Which type of data source should you use?

Question 20

DRAG DROP - You receive revenue data that must be included in Microsoft Power BI reports. You preview the data from a Microsoft Excel source in Power Query as shown in the following exhibit. The column quality indicators show Valid: 100%, Error: 0%, Empty: 0% for all six columns (Column1 through Column6).

Department | Product | 2016 | 2017 | 2018 | 2019
Bikes | Carbon mountainbike | 1002815 | 1006617 | 1007814 | 1007239
Bikes | Aluminium road bike | 1007024 | 1001454 | 1005842 | 1007105
Bikes | Touring bike | 1003676 | 1005171 | 1001669 | 1003244
Accessories | Bell | 76713 | 10247 | 60590 | 52927
Accessories | Bottle holder | 26690 | 29613 | 67955 | 71466
Accessories | Satnav | 83189 | 40113 | 71684 | 24697
Accessories | Mobilephone holder | 68641 | 80336 | 58099 | 45706

You plan to create several visuals from the data, including a visual that shows revenue split by year and product. You need to transform the data to ensure that you can build the visuals. The solution must ensure that the columns are named appropriately for the data that they contain. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Select and Place:

Part 1:

Select the correct answer(s) in the question image (image not reproduced here).

The required end state is a long table with columns such as Department, Product, Year, and Revenue. To get there, you must (1) promote the first row to headers so the year columns are correctly named (2016, 2017, 2018, 2019) rather than Column1 through Column6; (2) select Department and Product and use Unpivot Other Columns to convert all year columns into rows (preferred over Unpivot Columns because it automatically includes any additional year columns that may appear later); and (3) rename Attribute to Year and Value to Revenue so the columns are semantically correct for modeling and visuals.
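As a sketch, the three steps translate to the following Power Query M (the step names and the Source step are assumptions):

```m
// Minimal sketch, assuming the raw worksheet is already loaded as Source:
let
    // 1) Promote the first row to headers (Department, Product, 2016..2019)
    Promoted = Table.PromoteHeaders(Source, [PromoteAllScalars = true]),
    // 2) Unpivot every column except Department and Product into attribute/value rows
    Unpivoted = Table.UnpivotOtherColumns(Promoted, {"Department", "Product"}, "Attribute", "Value"),
    // 3) Rename the generated columns so they describe the data they contain
    Renamed = Table.RenameColumns(Unpivoted, {{"Attribute", "Year"}, {"Value", "Revenue"}})
in
    Renamed
```

The result is one row per department, product, and year, which supports a revenue-by-year-and-product visual directly.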

Practice Exams

Practice Test #1: 50 questions · 100 minutes · pass mark 700/1000
Practice Test #2: 50 questions · 100 minutes · pass mark 700/1000
Practice Test #3: 50 questions · 100 minutes · pass mark 700/1000
Practice Test #4: 50 questions · 100 minutes · pass mark 700/1000
Practice Test #5: 50 questions · 100 minutes · pass mark 700/1000

Other Microsoft Certifications

Microsoft AI-102 (Associate)
Microsoft AI-900 (Fundamentals)
Microsoft SC-200 (Associate)
Microsoft AZ-104 (Associate)
Microsoft AZ-900 (Fundamentals)
Microsoft SC-300 (Associate)
Microsoft DP-900 (Fundamentals)
Microsoft SC-900 (Fundamentals)
Microsoft AZ-305 (Expert)
Microsoft AZ-204 (Associate)
Microsoft AZ-500 (Associate)

Start Studying Now

Download Cloud Pass to practice the full set of PL-300: Microsoft Power BI Data Analyst exam questions.

© Copyright 2026 Cloud Pass, All rights reserved.
