This page was exported from Top Exam Collection [ http://blog.topexamcollection.com ]
Export date: Sun Apr 6 0:23:48 2025 / +0000 GMT

Title: Best Microsoft DP-500 2023 Training With 115 Q&As [Q18-Q36]

Microsoft DP-500 Certification Exam Questions

Microsoft DP-500 Exam Syllabus Topics:

Topic 1: Recommend appropriate file types for querying serverless SQL pools; commit code and artifacts to a source control repository in Azure Synapse Analytics
Topic 2: Explore and visualize data by using the Azure Synapse SQL results pane; deploy and manage datasets by using the XMLA endpoint
Topic 3: Identify data loading performance bottlenecks in Power Query or data sources; integrate an existing Power BI workspace into Azure Synapse Analytics
Topic 4: Perform impact analysis of downstream dependencies from dataflows and datasets; manage Power BI assets by using Azure Purview
Topic 5: Design and implement enterprise-scale row-level security and object-level security; analyze data model efficiency by using VertiPaq Analyzer
Topic 6: Identify requirements for a solution, including features, performance, and licensing strategy; recommend and configure an on-premises gateway in Power BI
Topic 7: Create queries, functions, and parameters by using the Power Query Advanced Editor; identify and implement performance improvements in queries and report visuals
Topic 8: Design and configure Power BI reports for accessibility; implement performance improvements in Power Query and data sources
Topic 9: Identify an appropriate Azure Synapse pool when analyzing data; design and build composite models, including aggregations
Topic 10: Query advanced data sources, including JSON, Parquet, APIs, and Azure Machine Learning models; connect to and query datasets by using the XMLA endpoint
Topic 11: Integrate an analytics platform into an existing IT infrastructure; create and
distribute paginated reports in Power BI Report Builder

Q18. You are creating a Python visual in Power BI Desktop. You need to retrieve the value of a column named Unit Price from a DataFrame. How should you reference the Unit Price column in the Python code?

  pandas.DataFrame('Unit Price')
  dataset['Unit Price']
  data = [Unit Price]
  ('Unit Price')

You can retrieve a column in a pandas DataFrame object by using the DataFrame object name, followed by the label of the column in brackets. So if the DataFrame object name is dataframe1 and we are trying to retrieve the 'X' column, we retrieve it with the statement dataframe1['X']. Here is a simple Python script that imports pandas and uses a DataFrame:

import pandas as pd
data = [['Alex', 10], ['Bob', 12], ['Clarke', 13]]
df = pd.DataFrame(data, columns=['Name', 'Age'], dtype=float)
print(df)

When run, this script returns:

     Name   Age
0    Alex  10.0
1     Bob  12.0
2  Clarke  13.0

Q19. You plan to generate a line chart to visualize and compare the last six months of sales data for two departments. You need to increase the accessibility of the visual. What should you do?

  Replace long text with abbreviations and acronyms.
  Configure a unique marker for each series.
  Configure a distinct color for each series.
  Move important information to a tooltip.

Q20. You need to create Power BI reports that will display data based on the customers' subscription level. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

1 – Create row-level security (RLS) roles.
2 – Create a DAX expression.
3 – Add members to the row-level security (RLS) roles.

Q21. You are creating a Power BI Desktop report. You add a Python visual to the report page. You plan to create a scatter chart to visualize the data. You add Python code to the Python script editor. You need to create the scatter chart. How should you complete the Python code?
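The column-access pattern from Q18 and the scatter chart asked for in Q21 can be combined in one runnable sketch. The column names and sample values below are illustrative assumptions; in a real Power BI Python visual, a DataFrame named `dataset` is injected automatically and you would not construct it yourself.

```python
# Hedged sketch of the Q18/Q21 patterns. Outside Power BI we build a
# stand-in for the `dataset` DataFrame that the Python visual injects.
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs outside Power BI
import matplotlib.pyplot as plt

# Stand-in for the injected `dataset` DataFrame (values are made up).
dataset = pd.DataFrame({
    "Quantity": [1, 2, 3, 4, 5],
    "Unit Price": [9.99, 19.99, 4.50, 12.00, 7.25],
})

# Q18 pattern: retrieve a column by its label in brackets.
unit_price = dataset["Unit Price"]

# Q21 pattern: draw a scatter chart of quantity against unit price.
fig, ax = plt.subplots()
scatter = ax.scatter(dataset["Quantity"], unit_price)
ax.set_xlabel("Quantity")
ax.set_ylabel("Unit Price")
plt.show()  # Power BI renders the current matplotlib figure
```

Note that `dataset['Unit Price']` works even though the label contains a space, which is exactly why the bracket syntax (rather than attribute access) is the answer to Q18.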
To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

Q22. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You are using an Azure Synapse Analytics serverless SQL pool to query a collection of Apache Parquet files by using automatic schema inference. The files contain more than 40 million rows of UTF-8-encoded business names, survey names, and participant counts. The database is configured to use the default collation.
The queries use OPENROWSET and infer the schema shown in the following table.
You need to recommend changes to the queries to reduce I/O reads and tempdb usage.
Solution: You recommend using OPENROWSET with an explicit WITH clause that defines the collation for businessName and surveyName as Latin1_General_100_BIN2_UTF8.
Does this meet the goal?

  Yes
  No

Reference: Query Parquet files using serverless SQL pool in Azure Synapse Analytics.
Important: Ensure you are using a UTF-8 database collation (for example, Latin1_General_100_BIN2_UTF8), because string values in Parquet files are encoded using UTF-8. A mismatch between the text encoding in the Parquet file and the collation may cause unexpected conversion errors. You can change the default collation of the current database using the following T-SQL statement: ALTER DATABASE CURRENT COLLATE Latin1_General_100_BIN2_UTF8;
Note: If you use the Latin1_General_100_BIN2_UTF8 collation, you get an additional performance boost compared to the other collations. This collation is compatible with Parquet string sorting rules, so the SQL pool can eliminate parts of the Parquet files that will not contain data needed by the queries (file/column-segment pruning). If you use other collations, all data from the Parquet files is loaded into Synapse SQL and the filtering happens within the SQL process.
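The recommendation above can be sketched in T-SQL. This is an illustrative fragment, not the exam's exact query: the storage path is a placeholder and the varchar lengths are assumptions; only the column names come from the question.

```sql
-- Sketch: explicit schema with a BIN2 UTF-8 collation on the string columns,
-- enabling file/column-segment pruning in the serverless SQL pool.
SELECT businessName, surveyName, participantCount
FROM OPENROWSET(
    BULK 'https://<storageaccount>.dfs.core.windows.net/<container>/surveys/*.parquet',
    FORMAT = 'PARQUET'
) WITH (
    businessName     varchar(200) COLLATE Latin1_General_100_BIN2_UTF8,
    surveyName       varchar(200) COLLATE Latin1_General_100_BIN2_UTF8,
    participantCount int
) AS rows;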
The Latin1_General_100_BIN2_UTF8 collation has an additional performance optimization that works only for Parquet and Cosmos DB. The downside is that you lose fine-grained comparison rules such as case insensitivity.

Q23. You are creating a Power BI single-page report. Some users will navigate the report by using a keyboard, and some users will navigate the report by using a screen reader. You need to ensure that the users can consume content on a report page in a logical order. What should you configure on the report page?

  the bookmark order
  the X position
  the layer order
  the tab order

Tab order is the order in which users interact with the items on a page using the keyboard. Generally, we want tab order to be predictable and to closely match the visual order on the page (unless there is a good reason to deviate).
Note: If you are using the keyboard to navigate in a Power BI report, the order in which you arrive at visuals will not follow your vision unless you set the new tab order property. If you have low or no vision, this becomes an even bigger issue, because you may not be able to see that you are navigating visuals out of visual order; the screen reader just reads whatever comes next.

Q24. You are planning a Power BI solution for a customer. The customer will have 200 Power BI users. The customer identifies the following requirements:
* Ensure that all the users can create paginated reports.
* Ensure that the users can create reports containing AI visuals.
* Provide autoscaling of the CPU resources during heavy usage spikes.
You need to recommend a Power BI solution for the customer. The solution must minimize costs. What should you recommend?

  Power BI Premium per user
  Power BI Premium per capacity
  Power BI Pro per user
  Power BI Report Server

Q25. You have a deployment pipeline for a Power BI workspace.
The workspace contains two datasets that use import storage mode. A database administrator reports a drastic increase in the number of queries sent from the Power BI service to an Azure SQL database since the creation of the deployment pipeline. An investigation into the issue identifies the following:
* One of the datasets is larger than 1 GB and has a fact table that contains more than 500 million rows.
* When publishing dataset changes to the development, test, or production pipelines, a refresh is triggered against the entire dataset.
You need to recommend a solution to reduce the size of the queries sent to the database when the dataset changes are published to development, test, or production. What should you recommend?

  Turn off auto refresh when publishing the dataset changes to the Power BI service.
  In the dataset, change the fact table from an import table to a hybrid table.
  Enable the large dataset storage format for the workspace.
  Create a dataset parameter to reduce the fact table row count in the development and test pipelines.

Hybrid tables are tables with incremental refresh that can have both import and DirectQuery partitions. During a clean deployment, both the refresh policy and the hybrid table partitions are copied. When deploying to a pipeline stage that already has hybrid table partitions, only the refresh policy is copied. To update the partitions, refresh the table. Refreshes are faster because only the most recent data that has changed needs to be refreshed.

Q26. You need to identify the root cause of the data refresh issue. What should you use?

  the Usage Metrics Report in powerbi.com
  Query Diagnostics in Power Query Editor
  Performance analyzer in Power BI Desktop

Users indicate that the data in Power BI reports is stale. You discover that the refresh process of the Power BI model occasionally times out. With Query Diagnostics, you can achieve a better understanding of what Power Query is doing at authoring and at refresh time in Power BI Desktop.
While this feature will be expanded in the future, including the ability to use it during full refreshes, at this time you can use it to understand what sort of queries you are emitting, what slowdowns you might run into during authoring refresh, and what kind of background events are happening.

Q27. You have a Power BI dataset that has the query dependencies shown in the following exhibit. Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point.
Reference: https://docs.microsoft.com/en-us/powerquery-m/table-buffer

Q28. You have an Azure Synapse Analytics serverless SQL pool and an Azure Data Lake Storage Gen2 account. You need to query all the files in the 'csv/taxi/' folder and all its subfolders. All the files are in CSV format and have a header row. How should you complete the query? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

Q29. You need to recommend a solution for the customer workspaces to support the planned changes. Which two configurations should you include in the recommendation? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

  Set Use datasets across workspaces to Enabled.
  Publish the financial data to the web.
  Grant the Build permission for the financial data to each customer.
  Configure the FinData workspace to use a Power BI Premium capacity.

Q30. You have the following code in an Azure Synapse notebook. Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the code. NOTE: Each correct selection is worth one point.

Q31. You are using an Azure Synapse Analytics serverless SQL pool to query network traffic logs in the Apache Parquet format.
A sample of the data is shown in the following table. You need to create a Transact-SQL query that will return the source IP address. Which function should you use in the SELECT statement to retrieve the source IP address?

  JSON_VALUE
  FOR JSON
  CONVERT
  FIRST_VALUE

Q32. You have a Power BI workspace named Workspace1 that contains five dataflows. You need to configure Workspace1 to store the dataflows in an Azure Data Lake Storage Gen2 account. What should you do first?

  Delete the dataflow queries.
  From the Power BI Admin portal, enable tenant-level storage.
  Disable load for all dataflow queries.
  Change the Data source settings in the dataflow queries.

Configuring Azure connections is an optional setting with additional properties that can optionally be set:
* Tenant-level storage, which lets you set a default, and/or
* Workspace-level storage, which lets you specify the connection per workspace.
You can optionally configure tenant-level storage if you want to use a centralized data lake only, or want this to be the default option.

Q33. You need to recommend a solution to ensure that sensitivity labels are applied. The solution must minimize administrative effort. Which three actions should you include in the recommendation? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

  From the Power BI Admin portal, set Allow users to apply sensitivity labels for Power BI content to Enabled.
  From the Power BI Admin portal, set Apply sensitivity labels from data sources to their data in Power BI to Enabled.
  In SQLDW, apply sensitivity labels to the columns in the Customer and CustomersWithProductScore tables.
  In the Power BI datasets, apply sensitivity labels to the columns in the Customer and CustomersWithProductScore tables.
  From the Power BI Admin portal, set Make certified content discoverable to Enabled.
A Synapse Analytics dedicated SQL pool is named SQLDW. Customer contact data in SQLDW and the Power BI dataset must be labeled as Sensitive. Records must be kept of any users that use the sensitive data.
A (not B): Enable sensitivity labels. Sensitivity labels must be enabled on the tenant before they can be used in both the service and in Desktop. To enable sensitivity labels on the tenant, go to the Power BI Admin portal, open the Tenant settings pane, and find the Information protection section. There, open Allow users to apply sensitivity labels for Power BI content and enable the toggle.
D (not C): When data protection is enabled on your tenant, sensitivity labels appear in the sensitivity column in the list view of dashboards, reports, datasets, and dataflows.
E: The Power BI tenant discovery settings include Make certified content discoverable.
Reference:
https://docs.microsoft.com/en-us/power-bi/enterprise/service-security-apply-data-sensitivity-labels
https://support.nhs.net/knowledge-base/power-bi-guidance/

Q34. You have a Power BI dataset that contains two tables named Table1 and Table2. The dataset is used by one report. You need to prevent project managers from accessing the data in two columns in Table1 named Budget and Forecast. Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

1 – From Power BI Desktop, create a role named Project Managers.
2 – Open Tabular Editor.
3 – From Power BI Desktop, add a DAX filter to the Project Managers role.
4 – For Table1, set the permissions on the Budget and Forecast columns to None.

Q35. Note: This question is part of a series of questions that present the same scenario. Each question in the series
Some question sets might have more than onecorrect solution, while others might not have a correct solution.After you answer a question in this section, you will NOT be able to return to it. As a result, these questionswill not appear in the review screen.You have the Power Bl data model shown in the exhibit. (Click the Exhibit tab.)Users indicate that when they build reports from the data model, the reports take a long time to load.You need to recommend a solution to reduce the load times of the reports.Solution: You recommend normalizing the data model.Does this meet the goal?  Yes  No Q36. You use Azure Synapse Analytics and Apache Spark notebooks to You need to use PySpark to gain access tothe visual libraries. Which Python libraries should you use?  Seaborn only  Matplotlib and Seaborn  Matplotlib only  Matplotlib and TensorFlow  TensorFlow only  Seaborn and TensorFlow  Loading … Quickly and Easily Pass Microsoft Exam with DP-500 real Dumps: https://www.topexamcollection.com/DP-500-vce-collection.html --------------------------------------------------- Images: https://blog.topexamcollection.com/wp-content/plugins/watu/loading.gif https://blog.topexamcollection.com/wp-content/plugins/watu/loading.gif --------------------------------------------------- --------------------------------------------------- Post date: 2023-02-13 14:57:58 Post date GMT: 2023-02-13 14:57:58 Post modified date: 2023-02-13 14:57:58 Post modified date GMT: 2023-02-13 14:57:58