[May 15, 2022] CCA175 Exam Dumps – Try Best CCA175 Exam Questions – TopExamCollection [Q16-Q40]


Verified CCA175 exam dump Q&As: 96 questions with correct answers

CCA Spark and Hadoop Developer (CCA175) Exam technical review

Practicing the coding in these CCA175 exam questions is enough to succeed in the exam. The correct answers have been prepared carefully and cover all the key points of the CCA175 exam.

What are the steps involved in taking the CCA Spark and Hadoop Developer (CCA175) Exam

There are several steps involved in taking the CCA Spark and Hadoop Developer (CCA175) exam. Knowing them will show you how the exam will be delivered and help guide you to success in the test. The exam formats are computer-based testing (CBT), performance-based hands-on testing (PBT) and web-based testing (WBT), and any of these formats might be used. Starting early with preparation materials helps you succeed: an application for the materials is available, candidates' queries are answered through them, and working through the solutions to those queries will give you more confidence in the exam. Cloudera CCA175 exam dump questions can also help, since they give you a better understanding of the test, and your confidence will increase as you score well on the CCA Spark and Hadoop Developer (CCA175) prep materials.

These preparation materials are kept up to date, and their value and advantages will help you succeed in the exam. Check the exam requirements as published by Cloudera. A simulator for the CCA Spark and Hadoop Developer (CCA175) exam is included in the preparation materials, and this simulator, along with the practice exercises loaded into the materials, will help you gain knowledge of the test.

 

NEW QUESTION 16
CORRECT TEXT
Problem Scenario 60 : You have been given below code snippet.
val a = sc.parallelize(List("dog", "salmon", "salmon", "rat", "elephant"), 3)
val b = a.keyBy(_.length)
val c = sc.parallelize(List("dog", "cat", "gnu", "salmon", "rabbit", "turkey", "wolf", "bear", "bee"), 3)
val d = c.keyBy(_.length)
operation1
Write a correct code snippet for operation1 which will produce the desired output, shown below.
Array[(Int, (String, String))] = Array((6,(salmon,salmon)), (6,(salmon,rabbit)),
(6,(salmon,turkey)), (6,(salmon,salmon)), (6,(salmon,rabbit)),
(6,(salmon,turkey)), (3,(dog,dog)), (3,(dog,cat)), (3,(dog,gnu)), (3,(dog,bee)), (3,(rat,dog)),
(3,(rat,cat)), (3,(rat,gnu)), (3,(rat,bee)))
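One answer sketch: the output pairs every value of b with every value of d that shares the same key, and keys present in only one RDD (elephant, length 8) are dropped, which is an inner join.
b.join(d).collect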

NEW QUESTION 17
CORRECT TEXT
Problem Scenario 96 : Your spark application requires the extra Java options below.
-XX:+PrintGCDetails -XX:+PrintGCTimeStamps
Please replace the XXX values correctly.
./bin/spark-submit --name "My app" --master local[4] --conf spark.eventLog.enabled=false --conf XXX hadoopexam.jar
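One answer sketch, using Spark's standard property for passing extra JVM options to the driver (spark.executor.extraJavaOptions takes the same form for executors):
--conf "spark.driver.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps"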

NEW QUESTION 18
CORRECT TEXT
Problem Scenario 55 : You have been given below code snippet.
val pairRDD1 = sc.parallelize(List(("cat", 2), ("cat", 5), ("book", 4), ("cat", 12)))
val pairRDD2 = sc.parallelize(List(("cat", 2), ("cup", 5), ("mouse", 4), ("cat", 12)))
operation1
Write a correct code snippet for operation1 which will produce the desired output, shown below.
Array[(String, (Option[Int], Option[Int]))] = Array((book,(Some(4),None)),
(mouse,(None,Some(4))), (cup,(None,Some(5))), (cat,(Some(2),Some(2))),
(cat,(Some(2),Some(12))), (cat,(Some(5),Some(2))), (cat,(Some(5),Some(12))),
(cat,(Some(12),Some(2))), (cat,(Some(12),Some(12))))
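One answer sketch: both sides are wrapped in Option and keys missing from either RDD appear with None on that side, which is a full outer join.
pairRDD1.fullOuterJoin(pairRDD2).collect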

NEW QUESTION 19
CORRECT TEXT
Problem Scenario 70 : Write a Spark application in Python which reads a file "Content.txt" (on HDFS) with the following content, does the word count, and saves the results in a directory called "problem85" (on HDFS).
Content.txt
Hello this is ABCTECH.com
This is XYZTECH.com
Apache Spark Training
This is Spark Learning Session
Spark is faster than MapReduce
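A minimal sketch of such an application (the app name is an arbitrary choice, not given in the problem):
from pyspark import SparkContext

sc = SparkContext(appName="Problem70WordCount")  # app name is an assumption
# Read the file from HDFS, split each line into words, count each word
counts = (sc.textFile("Content.txt")
            .flatMap(lambda line: line.split(" "))
            .map(lambda word: (word, 1))
            .reduceByKey(lambda a, b: a + b))
counts.saveAsTextFile("problem85")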

NEW QUESTION 20
CORRECT TEXT
Problem Scenario 19 : You have been given following mysql database details as well as other info.
user=retail_dba
password=cloudera
database=retail_db
jdbc URL = jdbc:mysql://quickstart:3306/retail_db
Now accomplish the following activities.
1. Import the departments table from mysql to hdfs as a text file in the departments_text directory.
2. Import the departments table from mysql to hdfs as a sequence file in the departments_sequence directory.
3. Import the departments table from mysql to hdfs as an avro file in the departments_avro directory.
4. Import the departments table from mysql to hdfs as a parquet file in the departments_parquet directory.
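A sketch of the first import; the other three repeat the same connection flags, swapping in --as-sequencefile with departments_sequence, --as-avrodatafile with departments_avro, and --as-parquetfile with departments_parquet:
sqoop import --connect jdbc:mysql://quickstart:3306/retail_db --username retail_dba --password cloudera --table departments --as-textfile --target-dir departments_text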

NEW QUESTION 21
CORRECT TEXT
Problem Scenario 30 : You have been given three csv files in hdfs as below.
EmployeeName.csv with the field (id, name)
EmployeeManager.csv (id, managerName)
EmployeeSalary.csv (id, Salary)
Using Spark and its API you have to generate a joined output as below, and save it as a text file (separated by commas) for final distribution; the output must be sorted by id.
id,name,salary,managerName
EmployeeManager.csv
E01,Vishnu
E02,Satyam
E03,Shiv
E04,Sundar
E05,John
E06,Pallavi
E07,Tanvir
E08,Shekhar
E09,Vinod
E10,Jitendra
EmployeeName.csv
E01,Lokesh
E02,Bhupesh
E03,Amit
E04,Ratan
E05,Dinesh
E06,Pavan
E07,Tejas
E08,Sheela
E09,Kumar
E10,Venkat
EmployeeSalary.csv
E01,50000
E02,50000
E03,45000
E04,45000
E05,50000
E06,45000
E07,50000
E08,10000
E09,10000
E10,10000
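One possible Scala sketch (the output directory name is an assumption; everything else follows the problem):
val names = sc.textFile("EmployeeName.csv").map(_.split(",")).map(e => (e(0), e(1)))
val managers = sc.textFile("EmployeeManager.csv").map(_.split(",")).map(e => (e(0), e(1)))
val salaries = sc.textFile("EmployeeSalary.csv").map(_.split(",")).map(e => (e(0), e(1)))
// (id, ((name, salary), managerName)), sorted by id
val joined = names.join(salaries).join(managers).sortByKey()
joined.map { case (id, ((name, salary), manager)) => s"$id,$name,$salary,$manager" }
      .saveAsTextFile("employee_joined")  // directory name is an assumption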

NEW QUESTION 22
CORRECT TEXT
Problem Scenario 53 : You have been given below code snippet.
val a = sc.parallelize(1 to 10, 3)
operation1
b.collect
Output 1
Array[Int] = Array(2, 4, 6, 8, 10)
operation2
Output 2
Array[Int] = Array(1, 2, 3)
Write a correct code snippet for operation1 and operation2 which will produce the desired output, shown above.
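A sketch of one pair of answers (several snippets could produce Output 2; take(3), which returns the first three elements as an array, is one of them):
val b = a.filter(_ % 2 == 0)  // Output 1: keeps the even numbers 2,4,6,8,10
a.take(3)                     // Output 2: Array(1, 2, 3)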

NEW QUESTION 23
CORRECT TEXT
Problem Scenario 16 : You have been given following mysql database details as well as other info.
user=retail_dba
password=cloudera
database=retail_db
jdbc URL = jdbc:mysql://quickstart:3306/retail_db
Please accomplish the assignment below.
1. Create a table in hive as below.
create table departments_hive(department_id int, department_name string);
2. Now import data from the mysql table departments into this hive table. Please make sure that the data is visible using the below hive command: select * from departments_hive
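A sketch of one way to do the import (sqoop's --hive-import loads into the already-created hive table):
sqoop import --connect jdbc:mysql://quickstart:3306/retail_db --username retail_dba --password cloudera --table departments --hive-import --hive-table departments_hive -m 1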

NEW QUESTION 24
CORRECT TEXT
Problem Scenario 15 : You have been given following mysql database details as well as other info.
user=retail_dba
password=cloudera
database=retail_db
jdbc URL = jdbc:mysql://quickstart:3306/retail_db
Please accomplish the following activities.
1. In the mysql departments table please insert the following record: insert into departments values(9999, '"Data Science"');
2. Now there is a downstream system which will process dumps of this file. However, the system is designed so that it can only process files whose fields are enclosed in single quotes ('), whose field separator is a hyphen (-), and whose lines are terminated by a colon (:).
3. If the data itself contains a double quote ("), it should be escaped by a backslash (\).
4. Please import the departments table into a directory called departments_enclosedby, producing a file the downstream system can process.
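A sketch of the import (flag values follow the requirements above; the quoting is for the shell):
sqoop import --connect jdbc:mysql://quickstart:3306/retail_db --username retail_dba --password cloudera --table departments --target-dir departments_enclosedby --enclosed-by "'" --fields-terminated-by '-' --lines-terminated-by ':' --escaped-by '\\' -m 1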

NEW QUESTION 25
CORRECT TEXT
Problem Scenario 43 : You have been given the following code snippet.
val grouped = sc.parallelize(Seq(((1, "two"), List((3, 4), (5, 6)))))
val flattened = grouped.flatMap { A =>
groupValues.map { value => B }
}
You need to generate the following output, hence replace A and B.
Array((1,two,3,4), (1,two,5,6))
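One answer sketch: A destructures each element into its key pair and its list of values, and B flattens both into a single four-element tuple.
val flattened = grouped.flatMap { case (key, groupValues) =>
  groupValues.map { value => (key._1, key._2, value._1, value._2) }
}
flattened.collect  // Array((1,two,3,4), (1,two,5,6))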

NEW QUESTION 26
CORRECT TEXT
Problem Scenario 21 : You have been given a log generating service as below.
start_logs (it will generate continuous logs)
tail_logs (you can check what logs are being generated)
stop_logs (it will stop the log service)
Path where logs are generated using the above service : /opt/gen_logs/logs/access.log
Now write a flume configuration file named flume1.conf and, using that configuration file, dump the logs into the HDFS file system in a directory called flume1. The flume channel should have the following properties as well: it should commit after every 100 messages, use a non-durable/faster channel, and be able to hold a maximum of 1000 events.
Solution :
Step 1 : Create the flume configuration file, with the below configuration for source, sink and channel.
# Define source, sink, channel and agent
agent1.sources = source1
agent1.sinks = sink1
agent1.channels = channel1
# Describe/configure source1
agent1.sources.source1.type = exec
agent1.sources.source1.command = tail -F /opt/gen_logs/logs/access.log
# Describe sink1
agent1.sinks.sink1.type = hdfs
agent1.sinks.sink1.hdfs.path = flume1
agent1.sinks.sink1.hdfs.fileType = DataStream
# Define channel1 properties: in-memory (non-durable, faster), up to 1000 events, commit every 100
agent1.channels.channel1.type = memory
agent1.channels.channel1.capacity = 1000
agent1.channels.channel1.transactionCapacity = 100
# Bind the source and sink to the channel
agent1.sources.source1.channels = channel1
agent1.sinks.sink1.channel = channel1
Step 2 : Run the below commands, which will use this configuration file and append data into hdfs.
Start the log service: start_logs
Start the flume agent (the agent name must match the configuration, hence --name agent1):
flume-ng agent --conf /home/cloudera/flumeconf --conf-file /home/cloudera/flumeconf/flume1.conf --name agent1 -Dflume.root.logger=DEBUG,console
Wait for a few minutes and then stop the log service: stop_logs

NEW QUESTION 27
CORRECT TEXT
Problem Scenario 58 : You have been given below code snippet.
val a = sc.parallelize(List("dog", "tiger", "lion", "cat", "spider", "eagle"), 2)
val b = a.keyBy(_.length)
operation1
Write a correct code snippet for operation1 which will produce the desired output, shown below.
Array[(Int, Seq[String])] = Array((4,ArrayBuffer(lion)), (6,ArrayBuffer(spider)),
(3,ArrayBuffer(dog, cat)), (5,ArrayBuffer(tiger, eagle)))
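One answer sketch: all values sharing a key are collected into one sequence, which is groupByKey.
b.groupByKey.collect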

NEW QUESTION 28
CORRECT TEXT
Problem Scenario 61 : You have been given below code snippet.
val a = sc.parallelize(List("dog", "salmon", "salmon", "rat", "elephant"), 3)
val b = a.keyBy(_.length)
val c = sc.parallelize(List("dog", "cat", "gnu", "salmon", "rabbit", "turkey", "wolf", "bear", "bee"), 3)
val d = c.keyBy(_.length)
operation1
Write a correct code snippet for operation1 which will produce the desired output, shown below.
Array[(Int, (String, Option[String]))] = Array((6,(salmon,Some(salmon))),
(6,(salmon,Some(rabbit))), (6,(salmon,Some(turkey))), (6,(salmon,Some(salmon))),
(6,(salmon,Some(rabbit))), (6,(salmon,Some(turkey))), (3,(dog,Some(dog))),
(3,(dog,Some(cat))), (3,(dog,Some(gnu))), (3,(dog,Some(bee))), (3,(rat,Some(dog))),
(3,(rat,Some(cat))), (3,(rat,Some(gnu))), (3,(rat,Some(bee))), (8,(elephant,None)))
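One answer sketch: every key of b is kept, matches from d are wrapped in Some, and the unmatched key (elephant, length 8) gets None, which is a left outer join.
b.leftOuterJoin(d).collect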

NEW QUESTION 29
CORRECT TEXT
Problem Scenario 41 : You have been given below code snippet.
val au1 = sc.parallelize(List(("a", Array(1, 2)), ("b", Array(1, 2))))
val au2 = sc.parallelize(List(("a", Array(3)), ("b", Array(2))))
Apply the Spark method which will generate the below output.
Array[(String, Array[Int])] = Array((a,Array(1, 2)), (b,Array(1, 2)), (a,Array(3)), (b,Array(2)))
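One answer sketch: all elements of both RDDs are kept in order, duplicates included, which is a union.
au1.union(au2).collect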

NEW QUESTION 30
CORRECT TEXT
Problem Scenario 74 : You have been given MySQL DB with following details.
user=retail_dba
password=cloudera
database=retail_db
table=retail_db.orders
table=retail_db.order_items
jdbc URL = jdbc:mysql://quickstart:3306/retail_db
Columns of the orders table : (order_id, order_date, order_customer_id, order_status)
Columns of the order_items table : (order_item_id, order_item_order_id, order_item_product_id, order_item_quantity, order_item_subtotal, order_item_product_price)
Please accomplish the following activities.
1. Copy the "retail_db.orders" and "retail_db.order_items" tables to hdfs in the respective directories p89_orders and p89_order_items.
2. Join these data using order_id in Spark and Python.
3. Now fetch selected columns from the joined data: order_id, order_date and the amount collected on this order.
4. Calculate the total orders placed for each date, and produce the output sorted by date.
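A sketch of one PySpark solution, run after importing both tables with sqoop into p89_orders and p89_order_items (column positions follow the table layouts above):
orders = sc.textFile("p89_orders").map(lambda l: l.split(","))
items = sc.textFile("p89_order_items").map(lambda l: l.split(","))
ordersKV = orders.map(lambda o: (o[0], o[1]))  # (order_id, order_date)
# amount collected per order: sum the order_item_subtotal values
amounts = items.map(lambda i: (i[1], float(i[4]))).reduceByKey(lambda a, b: a + b)
joined = ordersKV.join(amounts)  # (order_id, (order_date, amount))
selected = joined.map(lambda kv: (kv[0], kv[1][0], kv[1][1]))
# total orders placed per date, sorted by date
perDate = ordersKV.map(lambda kv: (kv[1], 1)).reduceByKey(lambda a, b: a + b).sortByKey()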

NEW QUESTION 31
CORRECT TEXT
Problem Scenario 9 : You have been given following mysql database details as well as other info.
user=retail_dba
password=cloudera
database=retail_db
jdbc URL = jdbc:mysql://quickstart:3306/retail_db
Please accomplish the following.
1. Import the departments table into a directory.
2. Import the departments table into the same directory again (the directory already exists, so the import should not override it; it should append the results).
3. Also make sure the result fields are terminated by '|' and the lines are terminated by '\n'.
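A sketch of the two imports (the target directory name is an assumption; --append handles the already-existing directory):
sqoop import --connect jdbc:mysql://quickstart:3306/retail_db --username retail_dba --password cloudera --table departments --target-dir departments --fields-terminated-by '|' --lines-terminated-by '\n'
sqoop import --connect jdbc:mysql://quickstart:3306/retail_db --username retail_dba --password cloudera --table departments --target-dir departments --fields-terminated-by '|' --lines-terminated-by '\n' --append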

NEW QUESTION 32
CORRECT TEXT
Problem Scenario 59 : You have been given below code snippet.
val x = sc.parallelize(1 to 20)
val y = sc.parallelize(10 to 30)
operation1
z.collect
Write a correct code snippet for operation1 which will produce the desired output, shown below.
Array[Int] = Array(16, 12, 20, 13, 17, 14, 18, 10, 19, 15, 11)
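One answer sketch: the output holds exactly the elements common to both RDDs (10 through 20) in no particular order, which is an intersection.
val z = x.intersection(y)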

NEW QUESTION 33
CORRECT TEXT
Problem Scenario 62 : You have been given below code snippet.
val a = sc.parallelize(List("dog", "tiger", "lion", "cat", "panther", "eagle"), 2)
val b = a.map(x => (x.length, x))
operation1
Write a correct code snippet for operation1 which will produce the desired output, shown below.
Array[(Int, String)] = Array((3,xdogx), (5,xtigerx), (4,xlionx), (3,xcatx), (7,xpantherx), (5,xeaglex))
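One answer sketch: each value is wrapped in "x" while the keys stay untouched, which mapValues does.
b.mapValues("x" + _ + "x").collect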

NEW QUESTION 34
CORRECT TEXT
Problem Scenario 63 : You have been given below code snippet.
val a = sc.parallelize(List("dog", "tiger", "lion", "cat", "panther", "eagle"), 2)
val b = a.map(x => (x.length, x))
operation1
Write a correct code snippet for operation1 which will produce the desired output, shown below.
Array[(Int, String)] = Array((4,lion), (3,dogcat), (7,panther), (5,tigereagle))
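One answer sketch: values sharing a key are concatenated into one string, which reduceByKey with string concatenation does.
b.reduceByKey(_ + _).collect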

NEW QUESTION 35
CORRECT TEXT
Problem Scenario 4: You have been given MySQL DB with following details.
user=retail_dba
password=cloudera
database=retail_db
table=retail_db.categories
jdbc URL = jdbc:mysql://quickstart:3306/retail_db
Please accomplish the following activities.
Import the single table categories (a subset of the data) into a hive managed table, where category_id is between 1 and 22.
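A sketch of one way to do it (--where restricts the rows; --hive-import loads into a managed hive table):
sqoop import --connect jdbc:mysql://quickstart:3306/retail_db --username retail_dba --password cloudera --table categories --where "category_id between 1 and 22" --hive-import -m 1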

NEW QUESTION 36
CORRECT TEXT
Problem Scenario 87 : You have been given below three files
product.csv (Create this file in hdfs)
productID,productCode,name,quantity,price,supplierid
1001,PEN,Pen Red,5000,1.23,501
1002,PEN,Pen Blue,8000,1.25,501
1003,PEN,Pen Black,2000,1.25,501
1004,PEC,Pencil 2B,10000,0.48,502
1005,PEC,Pencil 2H,8000,0.49,502
1006,PEC,Pencil HB,0,9999.99,502
2001,PEC,Pencil 3B,500,0.52,501
2002,PEC,Pencil 4B,200,0.62,501
2003,PEC,Pencil 5B,100,0.73,501
2004,PEC,Pencil 6B,500,0.47,502
supplier.csv
supplierid,name,phone
501,ABC Traders,88881111
502,XYZ Company,88882222
503,QQ Corp,88883333
products_suppliers.csv
productID,supplierID
2001,501
2002,501
2003,501
2004,502
2001,503
Now accomplish the query given in the solution:
select the product name, its price and its supplier name where the product price is less than 0.6, using
SparkSQL
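A sketch of the query, assuming the CSVs have been loaded into DataFrames and registered as temporary tables named products and suppliers (those table names are an assumption):
val results = sqlContext.sql("SELECT p.name AS product, p.price, s.name AS supplier FROM products p JOIN suppliers s ON p.supplierid = s.supplierid WHERE p.price < 0.6")
results.show()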

NEW QUESTION 37
CORRECT TEXT
Problem Scenario 80 : You have been given MySQL DB with following details.
user=retail_dba
password=cloudera
database=retail_db
table=retail_db.products
jdbc URL = jdbc:mysql://quickstart:3306/retail_db
Columns of products table : (product_id | product_category_id | product_name | product_description | product_price | product_image )
Please accomplish the following activities.
1. Copy the "retail_db.products" table to hdfs in a directory p93_products.
2. Now sort the products data by product price per category; use the product_category_id column to group by category.
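A sketch of one solution (column positions follow the products table layout above). First import with sqoop:
sqoop import --connect jdbc:mysql://quickstart:3306/retail_db --username retail_dba --password cloudera --table products --target-dir p93_products
Then, in the spark shell, group by category and sort each group's products by price:
val products = sc.textFile("p93_products").map(_.split(","))
// (product_category_id, products as (price, name) pairs sorted by price within the category)
val sorted = products.map(p => (p(1), (p(4).toFloat, p(2)))).groupByKey().mapValues(_.toList.sortBy(_._1))
sorted.collect().foreach(println)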

NEW QUESTION 38
CORRECT TEXT
Problem Scenario 69 : Write a Spark application in Python
which reads a file "Content.txt" (on HDFS) with the following content,
filters out the words that are less than 2 characters long, and ignores all empty lines.
Once done, store the filtered data in a directory called "problem84" (on HDFS).
Content.txt
Hello this is ABCTECH.com
This is ABYTECH.com
Apache Spark Training
This is Spark Learning Session
Spark is faster than MapReduce
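A minimal sketch of such an application (the app name is an assumption):
from pyspark import SparkContext

sc = SparkContext(appName="Problem69Filter")  # app name is an assumption
lines = sc.textFile("Content.txt").filter(lambda l: len(l.strip()) > 0)  # ignore empty lines
words = lines.flatMap(lambda l: l.split(" ")).filter(lambda w: len(w) >= 2)  # drop words under 2 characters
words.saveAsTextFile("problem84")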

NEW QUESTION 39
CORRECT TEXT
Problem Scenario 78 : You have been given MySQL DB with following details.
user=retail_dba
password=cloudera
database=retail_db
table=retail_db.orders
table=retail_db.order_items
jdbc URL = jdbc:mysql://quickstart:3306/retail_db
Columns of the orders table : (order_id, order_date, order_customer_id, order_status)
Columns of the order_items table : (order_item_id, order_item_order_id, order_item_product_id, order_item_quantity, order_item_subtotal, order_item_product_price)
Please accomplish the following activities.
1. Copy the "retail_db.orders" and "retail_db.order_items" tables to hdfs in the respective directories p92_orders and p92_order_items.
2. Join these data using order_id in Spark and Python.
3. Calculate the total revenue per day and per customer.
4. Calculate the maximum revenue customer.
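A sketch of one PySpark solution, run after the sqoop imports (column positions follow the table layouts above):
orders = sc.textFile("p92_orders").map(lambda l: l.split(","))
items = sc.textFile("p92_order_items").map(lambda l: l.split(","))
ordersKV = orders.map(lambda o: (o[0], (o[1], o[2])))  # (order_id, (order_date, customer_id))
revenue = items.map(lambda i: (i[1], float(i[4])))     # (order_id, order_item_subtotal)
joined = ordersKV.join(revenue)                        # (order_id, ((date, customer), subtotal))
perDay = joined.map(lambda kv: (kv[1][0][0], kv[1][1])).reduceByKey(lambda a, b: a + b)
perCustomer = joined.map(lambda kv: (kv[1][0][1], kv[1][1])).reduceByKey(lambda a, b: a + b)
maxCustomer = perCustomer.takeOrdered(1, key=lambda kv: -kv[1])  # highest-revenue customer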

NEW QUESTION 40
CORRECT TEXT
Problem Scenario 39 : You have been given two files
spark16/file1.txt
1,9,5
2,7,4
3,8,3
spark16/file2.txt
1,g,h
2,i,j
3,k,l
Load these two files as Spark RDDs and join them to produce the below results.
(1,((9,5),(g,h)))
(2,((7,4),(i,j)))
(3,((8,3),(k,l)))
Then write a code snippet which will sum the second columns of the above joined results (5+4+3).
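A sketch of one solution (.trim guards against stray spaces around the fields):
val file1 = sc.textFile("spark16/file1.txt").map(_.split(",")).map(a => (a(0).trim, (a(1).trim, a(2).trim)))
val file2 = sc.textFile("spark16/file2.txt").map(_.split(",")).map(a => (a(0).trim, (a(1).trim, a(2).trim)))
val joined = file1.join(file2)
joined.collect().foreach(println)
// sum the second columns of file1's side of the join: 5 + 4 + 3 = 12
val total = joined.map { case (_, ((_, second), _)) => second.toInt }.sum()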


Cloudera CCA175 Test Engine PDF – All Free Dumps: https://www.topexamcollection.com/CCA175-vce-collection.html
