[Jul-2023] Verified Microsoft Exam Dumps with DP-203 Exam Study Guide [Q93-Q108]

Best Quality Microsoft DP-203 Exam Questions - TopExamCollection Realistic Practice Exams [2023]

How to Register For Exam DP-203: Data Engineering on Microsoft Azure?
Exam Register Link: https://examregistration.microsoft.com/?locale=en-us&examcode=DP-203&examname=Exam%20DP-203:%20Data%20Engineering%20on%20Microsoft%20Azure&returnToLearningUrl=https%3A%2F%2Fdocs.microsoft.com%2Flearn%2Fcertifications%2Fexams%2Fdp-203

NO.93 You need to design an analytical storage solution for the transactional data. The solution must meet the sales transaction dataset requirements.
What should you include in the solution? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Reference:
https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-distribute

NO.94 You have a Microsoft SQL Server database that uses a third normal form schema.
You plan to migrate the data in the database to a star schema in an Azure Synapse Analytics dedicated SQL pool.
You need to design the dimension tables. The solution must optimize read operations.
What should you include in the solution? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Explanation
Box 1: Denormalize to a second normal form
Denormalization is the process of transforming higher normal forms into lower normal forms by storing the join of higher normal form relations as a base relation. Denormalization increases data-retrieval performance at the cost of introducing update anomalies into the database.
Box 2: New identity columns
The collapsing-relations strategy can be used in this step to collapse classification entities into component entities, producing dimension tables with single-part keys that connect directly to the fact table. The single-part key is a surrogate key generated to ensure it remains unique over time.
Note: A surrogate key on a table is a column with a unique identifier for each row. The key is not generated from the table data. Data modelers like to create surrogate keys on their tables when they design data warehouse models. You can use the IDENTITY property to achieve this goal simply and effectively without affecting load performance.
Reference:
https://www.mssqltips.com/sqlservertip/5614/explore-the-role-of-normal-forms-in-dimensional-modeling/
https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-identity

NO.95 You use Azure Data Factory to prepare data to be queried by Azure Synapse Analytics serverless SQL pools.
Files are initially ingested into an Azure Data Lake Storage Gen2 account as 10 small JSON files. Each file contains the same data attributes and data from a subsidiary of your company.
You need to move the files to a different folder and transform the data to meet the following requirements:
* Provide the fastest possible query times.
* Automatically infer the schema from the underlying files.
How should you configure the Data Factory copy activity? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Reference:
https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-introduction
https://docs.microsoft.com/en-us/azure/data-factory/format-parquet
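The copy activity itself is configured in the Data Factory designer or pipeline JSON rather than in code, but the following PySpark sketch illustrates the equivalent transformation the question targets: reading many small JSON files with automatic schema inference and merging them into a single Parquet output for faster serverless SQL queries. The storage account, container, and folder names are hypothetical.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical ADLS Gen2 paths; the real folders would come from the pipeline.
source_path = "abfss://raw@mystorageaccount.dfs.core.windows.net/subsidiaries/*.json"
target_path = "abfss://curated@mystorageaccount.dfs.core.windows.net/subsidiaries_merged/"

# The schema is inferred automatically from the underlying JSON files.
df = spark.read.json(source_path)

# Merge the small files into a single Parquet file (columnar, fast to query).
df.coalesce(1).write.mode("overwrite").parquet(target_path)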
NO.96 You have an Azure subscription that contains an Azure Data Lake Storage account. The storage account contains a data lake named DataLake1.
You plan to use an Azure data factory to ingest data from a folder in DataLake1, transform the data, and land the data in another folder.
You need to ensure that the data factory can read and write data from any folder in the DataLake1 file system. The solution must meet the following requirements:
* Minimize the risk of unauthorized user access.
* Use the principle of least privilege.
* Minimize maintenance effort.
How should you configure access to the storage account for the data factory? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Explanation
Box 1: Azure Active Directory (Azure AD)
On Azure, managed identities eliminate the need for developers to manage credentials by providing an identity for the Azure resource in Azure AD and using it to obtain Azure AD tokens.
Box 2: A managed identity
A data factory can be associated with a managed identity for Azure resources, which represents that specific data factory. You can use this managed identity directly for Data Lake Storage Gen2 authentication, similar to using your own service principal. It allows the designated factory to access and copy data to or from your Data Lake Storage Gen2 account.
Note: The Azure Data Lake Storage Gen2 connector supports the following authentication types:
* Account key authentication
* Service principal authentication
* Managed identities for Azure resources authentication
Reference:
https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview
https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-data-lake-storage

NO.97 You are building a database in an Azure Synapse Analytics serverless SQL pool.
You have data stored in Parquet files in an Azure Data Lake Storage Gen2 container.
Records are structured as shown in the following sample:
{
  "id": 123,
  "address_housenumber": "19c",
  "address_line": "Memory Lane",
  "applicant1_name": "Jane",
  "applicant2_name": "Dev"
}
The records contain two applicants at most.
You need to build a table that includes only the address fields.
How should you complete the Transact-SQL statement? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Reference:
https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/develop-tables-external-tables

NO.98 You have a Microsoft SQL Server database that uses a third normal form schema.
You plan to migrate the data in the database to a star schema in an Azure Synapse Analytics dedicated SQL pool.
You need to design the dimension tables. The solution must optimize read operations.
What should you include in the solution? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

NO.99 You are creating an Apache Spark job in Azure Databricks that will ingest JSON-formatted data.
You need to convert a nested JSON string into a DataFrame that will contain multiple rows.
Which Spark SQL function should you use?
explode
filter
coalesce
extract
Explanation
Convert nested JSON to a flattened DataFrame: you can flatten nested JSON using only the $"column.*" selector and the explode method.
Note: Extract and flatten. Use $"column.*" and explode to flatten the struct and array types before displaying the flattened DataFrame.
Scala:
display(DF.select($"id" as "main_id", $"name", $"batters", $"ppu", explode($"topping")) // explode the topping column because it is an array type
  .withColumn("topping_id", $"col.id")     // extract topping_id from col using dot notation
  .withColumn("topping_type", $"col.type") // extract topping_type from col using dot notation
  .drop($"col")
  .select($"*", $"batters.*")              // flatten the struct type batters to its array field batter
  .drop($"batters")
  .select($"*", explode($"batter"))
  .drop($"batter")
  .withColumn("batter_id", $"col.id")      // extract batter_id from col using dot notation
  .withColumn("batter_type", $"col.type")  // extract batter_type from col using dot notation
  .drop($"col")
)
Reference:
https://learn.microsoft.com/en-us/azure/databricks/kb/scala/flatten-nested-columns-dynamically
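For comparison with the Scala example above, here is a minimal PySpark sketch of the same explode technique; the sample JSON record and column names are made up for illustration.

from pyspark.sql import SparkSession
from pyspark.sql.functions import explode, col

spark = SparkSession.builder.getOrCreate()

# A made-up nested JSON string with an array column named "topping"
sample = ['{"id": 1, "name": "Cake", "topping": [{"id": "5001", "type": "None"}, {"id": "5002", "type": "Glazed"}]}']
df = spark.read.json(spark.sparkContext.parallelize(sample))

# explode turns each element of the array into its own row,
# which is how one nested JSON string becomes multiple DataFrame rows.
flat = (df.select("id", "name", explode("topping").alias("t"))
          .withColumn("topping_id", col("t.id"))
          .withColumn("topping_type", col("t.type"))
          .drop("t"))
flat.show()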
NO.100 You use PySpark in Azure Databricks to parse the following JSON input.
You need to output the data in the following tabular format.
How should you complete the PySpark code? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.

NO.101 The storage account container view is shown in the Refdata exhibit. (Click the Refdata tab.)
You need to configure the Stream Analytics job to pick up the new reference data.
What should you configure? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

NO.102 You are designing a folder structure for the files in an Azure Data Lake Storage Gen2 account. The account has one container that contains three years of data.
You need to recommend a folder structure that meets the following requirements:
* Supports partition elimination for queries by Azure Synapse Analytics serverless SQL pools
* Supports fast data retrieval for data from the current month
* Simplifies data security management by department
Which folder structure should you recommend?
YYYYMMDD/Department/DataSource/DataFile_YYYYMMDD.parquet
Department/DataSource/YYYYMM/DataFile_YYYYMMDD.parquet
DDMMYYYY/Department/DataSource/DataFile_DDMMYY.parquet
DataSource/Department/YYYYMM/DataFile_YYYYMMDD.parquet

NO.103 You have an Azure subscription that contains an Azure Data Lake Storage account. The storage account contains a data lake named DataLake1.
You plan to use an Azure data factory to ingest data from a folder in DataLake1, transform the data, and land the data in another folder.
You need to ensure that the data factory can read and write data from any folder in the DataLake1 file system. The solution must meet the following requirements:
* Minimize the risk of unauthorized user access.
* Use the principle of least privilege.
* Minimize maintenance effort.
How should you configure access to the storage account for the data factory? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Reference:
https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview
https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-data-lake-storage
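As a rough illustration of the managed-identity approach described in NO.96 and NO.103 (the data factory itself is configured through its linked service, not through code), the sketch below shows the same pattern from Python: DefaultAzureCredential resolves to a managed identity at runtime, so no secrets are stored. The account, file system, and folder names are hypothetical, and the identity would still need an appropriate RBAC role, such as Storage Blob Data Contributor, on the account.

from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

# DefaultAzureCredential resolves to a managed identity when the code runs on an
# Azure resource that has one assigned; no keys or passwords appear in the code.
credential = DefaultAzureCredential()

# Hypothetical storage account and file system names
service = DataLakeServiceClient(
    account_url="https://mydatalakeaccount.dfs.core.windows.net",
    credential=credential,
)
file_system = service.get_file_system_client("datalake1")

# Read from one folder ...
source = file_system.get_file_client("incoming/orders.json")
payload = source.download_file().readall()

# ... and land the (transformed) data in another folder.
target = file_system.get_file_client("curated/orders.json")
target.upload_data(payload, overwrite=True)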
NO.104 You are designing the folder structure for an Azure Data Lake Storage Gen2 container.
Users will query data by using a variety of services, including Azure Databricks and Azure Synapse Analytics serverless SQL pools. The data will be secured by subject area. Most queries will include data from the current year or current month.
Which folder structure should you recommend to support fast queries and simplified folder security?
/{SubjectArea}/{DataSource}/{DD}/{MM}/{YYYY}/{FileData}_{YYYY}_{MM}_{DD}.csv
/{DD}/{MM}/{YYYY}/{SubjectArea}/{DataSource}/{FileData}_{YYYY}_{MM}_{DD}.csv
/{YYYY}/{MM}/{DD}/{SubjectArea}/{DataSource}/{FileData}_{YYYY}_{MM}_{DD}.csv
/{SubjectArea}/{DataSource}/{YYYY}/{MM}/{DD}/{FileData}_{YYYY}_{MM}_{DD}.csv
There is an important reason to put the date at the end of the directory structure. If you want to lock down certain regions or subject matters to users or groups, you can easily do so with POSIX permissions. Otherwise, if there were a need to restrict a certain security group to viewing just the UK data or certain planes, with the date structure in front a separate permission would be required for numerous directories under every hour directory. Additionally, having the date structure in front would exponentially increase the number of directories as time went on.
Note: In IoT workloads, a great deal of data can be landed in the data store, spanning numerous products, devices, organizations, and customers. It is important to pre-plan the directory layout for organization, security, and efficient processing of the data for downstream consumers. A general template to consider might be the following layout:
{Region}/{SubjectMatter(s)}/{yyyy}/{mm}/{dd}/{hh}/

NO.105 You are developing a solution using a Lambda architecture on Microsoft Azure.
The data at rest layer must meet the following requirements:
Data storage:
* Serve as a repository for high volumes of large files in various formats.
* Implement optimized storage for big data analytics workloads.
* Ensure that data can be organized using a hierarchical structure.
Batch processing:
* Use a managed solution for in-memory computation processing.
* Natively support Scala, Python, and R programming languages.
* Provide the ability to resize and terminate the cluster automatically.
Analytical data store:
* Support parallel processing.
* Use columnar storage.
* Support SQL-based languages.
You need to identify the correct technologies to build the Lambda architecture.
Which technologies should you use? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Explanation
Data storage: Azure Data Lake Store
A key mechanism that allows Azure Data Lake Storage Gen2 to provide file system performance at object storage scale and prices is the addition of a hierarchical namespace. This allows the collection of objects/files within an account to be organized into a hierarchy of directories and nested subdirectories in the same way that the file system on your computer is organized. With the hierarchical namespace enabled, a storage account becomes capable of providing the scalability and cost-effectiveness of object storage, with file system semantics that are familiar to analytics engines and frameworks.
Batch processing: HDInsight Spark
Apache Spark is an open-source, parallel-processing framework that supports in-memory processing to boost the performance of big data analysis applications. HDInsight is a managed Hadoop service; use it to deploy and manage Hadoop clusters in Azure. For batch processing, you can use Spark, Hive, Hive LLAP, or MapReduce. Languages: R, Python, Java, Scala, SQL.
Analytical data store: SQL Data Warehouse
SQL Data Warehouse is a cloud-based enterprise data warehouse (EDW) that uses massively parallel processing (MPP). SQL Data Warehouse stores data in relational tables with columnar storage.
References:
https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-namespace
https://docs.microsoft.com/en-us/azure/architecture/data-guide/technology-choices/batch-processing
https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-overview-what-is
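To make the batch-processing layer a little more concrete, here is a minimal PySpark sketch of the kind of in-memory batch aggregation an HDInsight Spark (or Azure Databricks) cluster would run over files in the data lake and then land for the analytical store to load; the paths and column names are hypothetical.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical ADLS Gen2 locations (hierarchical namespace enabled)
raw_path = "abfss://raw@mydatalakeaccount.dfs.core.windows.net/sales/"
curated_path = "abfss://curated@mydatalakeaccount.dfs.core.windows.net/daily_sales/"

# In-memory batch aggregation on the Spark cluster
sales = spark.read.parquet(raw_path)
daily_totals = (sales.groupBy("sale_date", "store_id")
                     .agg(F.sum("amount").alias("total_amount")))

# Land the batch view for the analytical data store to ingest
daily_totals.write.mode("overwrite").parquet(curated_path)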
NO.106 You have an Azure Synapse Analytics dedicated SQL pool named SA1 that contains a table named Table1.
You need to identify tables that have a high percentage of deleted rows. What should you run?
A) Option A
B) Option B
C) Option C
D) Option D

NO.107 You are designing a monitoring solution for a fleet of 500 vehicles. Each vehicle has a GPS tracking device that sends data to an Azure event hub once per minute.
You have a CSV file in an Azure Data Lake Storage Gen2 container. The file maintains the expected geographical area in which each vehicle should be.
You need to ensure that when a GPS position is outside the expected area, a message is added to another event hub for processing within 30 seconds. The solution must minimize cost.
What should you include in the solution? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Reference:
https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-window-functions

NO.108 You have an Azure Synapse Analytics dedicated SQL pool that contains a large fact table. The table contains 50 columns and 5 billion rows and is a heap.
Most queries against the table aggregate values from approximately 100 million rows and return only two columns.
You discover that the queries against the fact table are very slow.
Which type of index should you add to provide the fastest query times?
nonclustered columnstore
clustered columnstore
nonclustered
clustered
Clustered columnstore indexes are one of the most efficient ways you can store your data in a dedicated SQL pool. Columnstore tables won't benefit a query unless the table has more than 60 million rows.
Reference:
https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/best-practices-dedicated-sql-pool

The Microsoft DP-203 (Data Engineering on Microsoft Azure) certification exam is designed to assess the skills of data engineers who work with data on the Microsoft Azure platform. It evaluates a candidate's technical expertise in designing and implementing data storage solutions, managing and monitoring data processing, and developing and deploying data processing solutions on Azure.
Authentic Best resources for DP-203: https://www.topexamcollection.com/DP-203-vce-collection.html