This page was exported from Top Exam Collection [ http://blog.topexamcollection.com ]
Export date: Sun Jan 19 20:37:43 2025 / +0000 GMT

Title: Ace CKAD Certification with 33 Actual Questions [Q10-Q25]

Ace CKAD Certification with 33 Actual Questions
PASS Linux Foundation CKAD EXAM WITH UPDATED DUMPS

QUESTION 10
Task:
Update the Deployment app-1 in the frontend namespace to use the existing ServiceAccount app.
See the solution below.

QUESTION 11
Task:
A Dockerfile has been prepared at ~/human-stork/build/Dockerfile.
1) Using the prepared Dockerfile, build a container image with the name macque and tag 3.0. You may install and use the tool of your choice.
2) Using the tool of your choice, export the built container image in OCI format and store it at ~/human-stork/macque-3.0.tar.
See the solution below.

QUESTION 12
Context
A project that you are working on has a requirement for persistent data to be available.
Task
To facilitate this, perform the following tasks:
* Create a file on node sk8s-node-0 at /opt/KDSP00101/data/index.html with the content Acct=Finance
* Create a PersistentVolume named task-pv-volume using hostPath and allocate 1Gi to it, specifying that the volume is at /opt/KDSP00101/data on the cluster's node. The configuration should specify the access mode of ReadWriteOnce.
It should define the StorageClass name exam for the PersistentVolume, which will be used to bind PersistentVolumeClaim requests to this PersistentVolume.
* Create a PersistentVolumeClaim named task-pv-claim that requests a volume of at least 100Mi and specifies an access mode of ReadWriteOnce.
* Create a pod that uses the PersistentVolumeClaim as a volume, with a label app: my-storage-app, mounting the resulting volume to a mountPath /usr/share/nginx/html inside the pod.
Solution:

QUESTION 13
Context
Task
You are required to create a pod that requests a certain amount of CPU and memory, so it gets scheduled to a node that has those resources available.
* Create a pod named nginx-resources in the pod-resources namespace that requests a minimum of 200m CPU and 1Gi memory for its container.
* The pod should use the nginx image.
* The pod-resources namespace has already been created.
Solution:

QUESTION 14
Task:
1) Fix any API deprecation issues in the manifest file ~/credible-mite/www.yaml so that this application can be deployed on cluster K8s.
2) Deploy the application specified in the updated manifest file ~/credible-mite/www.yaml in namespace cobra.
See the solution below.

QUESTION 15
Refer to Exhibit.
Task:
Create a Deployment named expose in the existing ckad00014 namespace running 6 replicas of a Pod. Specify a single container using the ifccncf/nginx:1.13.7 image. Add an environment variable named NGINX_PORT with the value 8001 to the container, then expose port 8001.
Solution:

QUESTION 16
Exhibit:
Task
You have rolled out a new pod to your infrastructure and now you need to allow it to communicate with the web and storage pods, but nothing else. Given the running pod kdsn00201-newpod, edit it to use a network policy that will allow it to send and receive traffic only to and from the web and storage pods.
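In the exam environment this task is usually solved by matching the new pod's labels to an existing NetworkPolicy, but as an illustration, a standalone policy that restricts a pod's traffic to only the web and storage pods might look like the following sketch. The policy name and all app labels here are assumptions for illustration; they are not given in the task, and you would need to inspect the actual labels on the web and storage pods first.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: newpod-policy            # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: kdsn00201-newpod      # assumed label on the new pod
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: web               # assumed label on the web pods
    - podSelector:
        matchLabels:
          app: storage           # assumed label on the storage pods
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: web
    - podSelector:
        matchLabels:
          app: storage
```

Note that a restrictive egress rule like this also blocks DNS unless it is explicitly allowed, which may or may not matter for the grading script.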
QUESTION 17
Context
Task:
1) Create a secret named app-secret in the default namespace containing the following single key-value pair:
Key3: value1
2) Create a Pod named nginx-secret in the default namespace. Specify a single container using the nginx:stable image. Add an environment variable named BEST_VARIABLE consuming the value of the secret key3.
Solution:

QUESTION 18
Exhibit:
Context
A user has reported an application is unreachable due to a failing livenessProbe.
Task
Perform the following tasks:
* Find the broken pod and store its name and namespace to /opt/KDOB00401/broken.txt in the format:
The output file has already been created.
* Store the associated error events to a file /opt/KDOB00401/error.txt. The output file has already been created. You will need to use the -o wide output specifier with your command.
* Fix the issue.
Solution:
Create the Pod:
kubectl create -f http://k8s.io/docs/tasks/configure-pod-container/exec-liveness.yaml
Within 30 seconds, view the Pod events:
kubectl describe pod liveness-exec
The output indicates that no liveness probes have failed yet:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- ---- ------ -------
24s 24s 1 {default-scheduler } Normal Scheduled Successfully assigned liveness-exec to worker0
23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Pulling pulling image "gcr.io/google_containers/busybox"
23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Pulled Successfully pulled image "gcr.io/google_containers/busybox"
23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Created Created container with docker id 86849c15382e; Security:[seccomp=unconfined]
23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Started Started container with docker id 86849c15382e
After 35 seconds, view the Pod events again:
kubectl describe pod liveness-exec
At the bottom of the output, there are messages indicating that the liveness probes have failed, and the containers have been killed and recreated.
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- ---- ------ -------
37s 37s 1 {default-scheduler } Normal Scheduled Successfully assigned liveness-exec to worker0
36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Pulling pulling image "gcr.io/google_containers/busybox"
36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Pulled Successfully pulled image "gcr.io/google_containers/busybox"
36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Created Created container with docker id 86849c15382e; Security:[seccomp=unconfined]
36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Started Started container with docker id 86849c15382e
2s 2s 1 {kubelet worker0} spec.containers{liveness} Warning Unhealthy Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
Wait another 30 seconds, and verify that the Container has been restarted:
kubectl get pod liveness-exec
The output shows that RESTARTS has been incremented:
NAME READY STATUS RESTARTS AGE
liveness-exec 1/1 Running 1 1m

QUESTION 19
Task
A deployment is failing on the cluster due to an incorrect image being specified. Locate the deployment, and fix the problem.
See the solution below.
Explanation
kubectl create deploy hello-deploy --image=nginx --dry-run=client -o yaml > hello-deploy.yaml
Update the deployment image to nginx:1.17.4:
kubectl set image deploy/hello-deploy nginx=nginx:1.17.4

QUESTION 20
Exhibit:
Context
Your application's namespace requires a specific service account to be used.
Task
Update the app-a deployment in the production namespace to run as the restrictedservice service account. The service account has already been created.
Solution:

QUESTION 21
Task:
A pod within the Deployment named buffalo-deployment in namespace gorilla is logging errors.
1) Look at the logs and identify error messages.
Find errors, including: User "system:serviceaccount:gorilla:default" cannot list resource "deployment" [...] in the namespace "gorilla"
2) Update the Deployment buffalo-deployment to resolve the errors in the logs of the Pod.
The buffalo-deployment's manifest can be found at ~/prompt/escargot/buffalo-deployment.yaml
See the solution below.
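The "cannot list resource" message above is an RBAC failure: the pod runs as the default ServiceAccount in the gorilla namespace, which has no permission to list Deployments. One way to resolve it is to run the pod under a ServiceAccount bound to a Role that grants that permission; the following is a minimal sketch, and the Role, RoleBinding, and ServiceAccount names are hypothetical (the exam task is completed by editing buffalo-deployment.yaml accordingly):

```yaml
# Grant list access on Deployments in the gorilla namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-reader        # hypothetical name
  namespace: gorilla
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-deployments        # hypothetical name
  namespace: gorilla
subjects:
- kind: ServiceAccount
  name: buffalo-sa              # hypothetical ServiceAccount; set
                                # spec.template.spec.serviceAccountName
                                # in buffalo-deployment.yaml to match
  namespace: gorilla
roleRef:
  kind: Role
  name: deployment-reader
  apiGroup: rbac.authorization.k8s.io
```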
QUESTION 22
Exhibit:
Context
You have been tasked with scaling an existing deployment for availability, and creating a service to expose the deployment within your infrastructure.
Task
Start with the deployment named kdsn00101-deployment, which has already been deployed to the namespace kdsn00101. Edit it to:
* Add the func=webFrontEnd key/value label to the pod template metadata to identify the pod for the service definition
* Have 4 replicas
Next, create and deploy in namespace kdsn00101 a service that accomplishes the following:
* Exposes the service on TCP port 8080
* Is mapped to the pods defined by the specification of kdsn00101-deployment
* Is of type NodePort
* Has a name of cherry
Solution:

QUESTION 23
Context
A web application requires a specific version of redis to be used as a cache.
Task
Create a pod with the following characteristics, and leave it running when complete:
* The pod must run in the web namespace. The namespace has already been created.
* The name of the pod should be cache
* Use the Ifccncf/redis image with the 3.2 tag
* Expose port 6379
See the solution below.

QUESTION 24
Context
Task:
A pod within the Deployment named buffalo-deployment in namespace gorilla is logging errors.
1) Look at the logs and identify error messages.
Find errors, including: User "system:serviceaccount:gorilla:default" cannot list resource "deployment" [...] in the namespace "gorilla"
2) Update the Deployment buffalo-deployment to resolve the errors in the logs of the Pod.
The buffalo-deployment's manifest can be found at ~/prompt/escargot/buffalo-deployment.yaml
Solution:

QUESTION 25
Context
A user has reported an application is unreachable due to a failing livenessProbe.
Task
Perform the following tasks:
* Find the broken pod and store its name and namespace to /opt/KDOB00401/broken.txt in
the format:
The output file has already been created.
* Store the associated error events to a file /opt/KDOB00401/error.txt. The output file has already been created. You will need to use the -o wide output specifier with your command.
* Fix the issue.
See the solution below.
Solution:
Create the Pod:
kubectl create -f http://k8s.io/docs/tasks/configure-pod-container/exec-liveness.yaml
Within 30 seconds, view the Pod events:
kubectl describe pod liveness-exec
The output indicates that no liveness probes have failed yet:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- ---- ------ -------
24s 24s 1 {default-scheduler } Normal Scheduled Successfully assigned liveness-exec to worker0
23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Pulling pulling image "gcr.io/google_containers/busybox"
23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Pulled Successfully pulled image "gcr.io/google_containers/busybox"
23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Created Created container with docker id 86849c15382e; Security:[seccomp=unconfined]
23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Started Started container with docker id 86849c15382e
After 35 seconds, view the Pod events again:
kubectl describe pod liveness-exec
At the bottom of the output, there are messages indicating that the liveness probes have failed, and the containers have been killed and recreated.
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- ---- ------ -------
37s 37s 1 {default-scheduler } Normal Scheduled Successfully assigned liveness-exec to worker0
36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Pulling pulling image "gcr.io/google_containers/busybox"
36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Pulled Successfully pulled image "gcr.io/google_containers/busybox"
36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Created Created container with docker id 86849c15382e; Security:[seccomp=unconfined]
36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Started Started container with docker id 86849c15382e
2s 2s 1 {kubelet worker0} spec.containers{liveness} Warning Unhealthy Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
Wait another 30 seconds, and verify that the Container has been restarted:
kubectl get pod liveness-exec
The output shows that RESTARTS has been incremented:
NAME READY STATUS RESTARTS AGE
liveness-exec 1/1 Running 1 1m

CKAD Questions PDF [2023] Use Valid New dump to Clear Exam: https://www.topexamcollection.com/CKAD-vce-collection.html

Post date: 2023-02-21 15:21:13