This page was exported from Top Exam Collection [ http://blog.topexamcollection.com ]
Export date: Fri Apr 4 6:07:35 2025 / +0000 GMT

Title: Pass Linux Foundation CKA Exam Quickly With TopExamCollection [Q58-Q82]

Pass Linux Foundation CKA Exam Quickly With TopExamCollection
Prepare CKA Question Answers - CKA Exam Dumps

The CKA certification is highly regarded in the industry and is recognized as a standard for measuring the skills of Kubernetes administrators. The Certified Kubernetes Administrator (CKA) certification gives professionals a way to demonstrate their expertise to employers and clients, and it can help them advance their careers in Kubernetes administration. The CKA certification is also a prerequisite for other Kubernetes certifications offered by the Linux Foundation, such as the Certified Kubernetes Security Specialist (CKS) certification.

The CKA exam is a performance-based exam that tests practical skills in managing and deploying Kubernetes clusters. It consists of a series of hands-on tasks designed to simulate real-world scenarios: deploying, managing, and troubleshooting Kubernetes clusters, as well as configuring networking, security, and storage. The exam is conducted online and can be taken from anywhere in the world.

QUESTION 58
Create a deployment as follows:
* Name: nginx-app
* Using container nginx with version 1.11.10-alpine
* The deployment should contain 3 replicas
Next, deploy the application with new version 1.11.13-alpine by performing a rolling update.
Finally, roll back that update to the previous version 1.11.10-alpine.
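The exported answer for this question is only the word "solution"; a minimal command sequence satisfying the task might look like the sketch below. It assumes a recent kubectl (the --replicas flag on kubectl create deployment requires v1.19+) and relies on kubectl's default of naming the container after the image (here, nginx); it needs a live cluster to run.

```shell
# Create the deployment with 3 replicas of the 1.11.10-alpine image
kubectl create deployment nginx-app --image=nginx:1.11.10-alpine --replicas=3

# Rolling update to the new image version
kubectl set image deployment/nginx-app nginx=nginx:1.11.13-alpine
kubectl rollout status deployment/nginx-app

# Roll back to the previous version (1.11.10-alpine)
kubectl rollout undo deployment/nginx-app
kubectl rollout status deployment/nginx-app
```

kubectl rollout history deployment/nginx-app can be used between steps to confirm which revision is live.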
solution

QUESTION 59
Create a pod as follows:
* Name: mongo
* Using Image: mongo
* In a new Kubernetes namespace named
See the solution below.
Explanation
solution

QUESTION 60
Create a deployment spec file that will:
* Launch 7 replicas of the nginx Image with the label app_runtime_stage=dev
* Deployment name: kual00201
Save a copy of this spec file to /opt/KUAL00201/spec_deployment.yaml (or /opt/KUAL00201/spec_deployment.json).
When you are done, clean up (delete) any new Kubernetes API object that you produced during this task.
See the solution below.
Explanation
solution

QUESTION 61
Scale the deployment from 5 replicas to 20 replicas and verify:
kubectl scale deploy webapp --replicas=20
kubectl get deploy webapp
kubectl get po -l app=webapp

QUESTION 62
Clean the cluster by deleting the deployment and HPA you just created:
kubectl delete deploy webapp
kubectl delete hpa webapp

QUESTION 63
Score: 7%
Task
Create a new PersistentVolumeClaim:
* Name: pv-volume
* Class: csi-hostpath-sc
* Capacity: 10Mi
Create a new Pod which mounts the PersistentVolumeClaim as a volume:
* Name: web-server
* Image: nginx
* Mount path: /usr/share/nginx/html
Configure the new Pod to have ReadWriteOnce access on the volume.
Finally, using kubectl edit or kubectl patch, expand the PersistentVolumeClaim to a capacity of 70Mi and record that change.
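The solution to Question 60 above survives only as screenshot references in this export; a spec file meeting its stated requirements (name kual00201, 7 replicas, label app_runtime_stage=dev) might look like this sketch:

```yaml
# /opt/KUAL00201/spec_deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kual00201
  labels:
    app_runtime_stage: dev
spec:
  replicas: 7
  selector:
    matchLabels:
      app_runtime_stage: dev
  template:
    metadata:
      labels:
        app_runtime_stage: dev
    spec:
      containers:
      - name: nginx
        image: nginx
```

It would be created with kubectl create -f /opt/KUAL00201/spec_deployment.yaml and cleaned up afterwards with kubectl delete deployment kual00201, per the task's cleanup requirement.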
Solution:
# vi pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-volume
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Mi
  storageClassName: csi-hostpath-sc

# vi pod-pvc.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
  - name: web-server
    image: nginx
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: my-volume
  volumes:
  - name: my-volume
    persistentVolumeClaim:
      claimName: pv-volume

# create
kubectl create -f pvc.yaml
kubectl create -f pod-pvc.yaml
# edit
kubectl edit pvc pv-volume --record

QUESTION 64
Create a snapshot of the etcd instance running at https://127.0.0.1:2379, saving the snapshot to the file path /srv/data/etcd-snapshot.db.
The following TLS certificates/key are supplied for connecting to the server with etcdctl:
* CA certificate: /opt/KUCM00302/ca.crt
* Client certificate: /opt/KUCM00302/etcd-client.crt
* Client key: /opt/KUCM00302/etcd-client.key
See the solution below.
Explanation
solution

QUESTION 65
Configure the kubelet systemd-managed service, on the node labelled with name=wk8s-node-1, to launch a pod containing a single container of Image httpd named webtool automatically. Any spec files required should be placed in the /etc/kubernetes/manifests directory on the node.
You can ssh to the appropriate node using:
[student@node-1] $ ssh wk8s-node-1
You can assume elevated privileges on the node with the following command:
[student@wk8s-node-1] $ sudo -i
See the solution below.
Explanation
solution

QUESTION 66
Get a list of all the pods showing name and namespace with a jsonpath expression.
See the solution below.
Explanation
kubectl get pods -o=jsonpath="{.items[*]['metadata.name', 'metadata.namespace']}"

QUESTION 67
How can an administrator configure the NGFW to automatically quarantine a device using GlobalProtect?
A. By adding the device's Host ID to a quarantine list and configuring GlobalProtect to prevent users from connecting to the GlobalProtect gateway from a quarantined device.
B. There is no native auto-quarantine feature, so a custom script would need to be leveraged.
C. By using security policies, log forwarding profiles, and log settings.
D. By exporting the list of quarantined devices to a PDF or CSV file by selecting PDF/CSV at the bottom of the Device Quarantine page and leveraging the appropriate XSOAR playbook.

QUESTION 68
You are tasked with setting up fine-grained access control for a Kubernetes cluster running a microservices application. You need to ensure that developers can only access the resources related to their specific microservices while preventing them from accessing or modifying other services' resources. Define RBAC roles and permissions to achieve this, including details of the resources, verbs, and namespaces involved. Consider the following:
See the solution below with Step by Step Explanation.
Explanation:
Specify the YAML configurations for roles, role bindings, and service accounts to enable the required access control, ensuring developers only have access to their respective microservice's resources within their assigned namespaces.
Solution (Step by Step):
1. Define Roles:
2. Create Service Accounts:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: order-service-sa
  namespace: order-service-ns
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: payment-service-sa
  namespace: payment-service-ns
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: inventory-service-sa
  namespace: inventory-service-ns
3.
Bind Roles to Service Accounts:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: order-service-dev-binding
  namespace: order-service-ns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: order-service-dev
subjects:
- kind: ServiceAccount
  name: order-service-sa
  namespace: order-service-ns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: payment-service-dev-binding
  namespace: payment-service-ns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: payment-service-dev
subjects:
- kind: ServiceAccount
  name: payment-service-sa
  namespace: payment-service-ns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: inventory-service-dev-binding
  namespace: inventory-service-ns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: inventory-service-dev
subjects:
- kind: ServiceAccount
  name: inventory-service-sa
  namespace: inventory-service-ns
4. Assign Service Accounts to Users: This step requires external authentication mechanisms like OIDC or LDAP. Assuming you have these mechanisms set up, you can associate the service accounts with specific users ('john.doe@example.com', 'jane.doe@example.com', and 'peter.pan@example.com') using the configured authentication provider.

Roles: Define the specific permissions for each microservice developer within their respective namespaces. The roles allow developers to access resources like Pods, Deployments, Services, ConfigMaps, and Secrets related to their assigned microservice.
Service Accounts: Service accounts are created in each namespace for each microservice, representing the identity of the developer group.
Role Bindings: Role bindings connect the defined roles with the service accounts, granting the associated permissions.
User Association: This step connects the service accounts with individual developers through external authentication mechanisms, enabling them to utilize the assigned permissions.
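The YAML for step 1 ("Define Roles") did not survive the export; a sketch of one such Role, using the order-service names from the bindings above and the resource list from the explanation (the other two namespaces would follow the same pattern):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: order-service-dev
  namespace: order-service-ns
rules:
# Core-group resources the developers may manage
- apiGroups: [""]
  resources: ["pods", "services", "configmaps", "secrets"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
# Deployments live in the "apps" API group
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
```

Because a Role is namespaced, this grants access only within order-service-ns, which is what confines each team to its own microservice.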
By following these steps, you ensure that developers can only access and manage resources associated with their respective microservices within their assigned namespaces. This fine-grained access control policy effectively restricts access and prevents developers from interfering with other microservices or resources.

QUESTION 69
Create the nginx pod with version 1.17.4 and expose it on port 80:
kubectl run nginx --image=nginx:1.17.4 --restart=Never --port=80

QUESTION 70
You are running a critical application on Kubernetes that requires high availability. To ensure the application stays operational even if one or more nodes experience failures, you decide to implement a pod anti-affinity rule. Explain how you can configure an anti-affinity rule to prevent pods from being scheduled on the same node.
See the solution below with Step by Step Explanation.
Explanation:
Solution (Step by Step):
1. Define the Anti-Affinity Rule: Add an 'affinity' section to the 'spec.template.spec' of your Deployment or StatefulSet. Within it, define a 'podAntiAffinity' section, specifying that pods with the same label should not be placed on the same node.
2. Use 'requiredDuringSchedulingIgnoredDuringExecution': This section ensures that the rule is enforced during pod scheduling. Once a pod is scheduled, the rule is ignored, so even if a node fails, the remaining pods are not affected.
3. Set 'topologyKey': The 'topologyKey' is set to 'kubernetes.io/hostname'. This tells Kubernetes to consider the node's hostname for pod placement. It will prevent pods with the label 'app: my-critical-app' from being scheduled on the same node.
4. Verify the Deployment: Apply the YAML file to your cluster using 'kubectl apply -f my-critical-app.yaml'.
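The manifest these steps refer to is not included in the export; a sketch of the relevant Deployment, assuming the app: my-critical-app label named in step 3 (the image is a placeholder):

```yaml
# my-critical-app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-critical-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-critical-app
  template:
    metadata:
      labels:
        app: my-critical-app
    spec:
      affinity:
        podAntiAffinity:
          # Hard rule: no two pods with this label on the same hostname
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: my-critical-app
            topologyKey: kubernetes.io/hostname
      containers:
      - name: app
        image: nginx   # placeholder image
```

With a hard (required) rule, a pod stays Pending if no eligible node remains, so the replica count should not exceed the number of schedulable nodes.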
You can then check the status of your Deployment using 'kubectl get pods -l app=my-critical-app' to verify that the pods are distributed across different nodes.

QUESTION 71
Set the node named ek8s-node-1 as unavailable and reschedule all the pods running on it.
See the solution below.
Explanation
solution

QUESTION 72
Create a deployment as follows:
* Name: nginx-app
* Using container nginx with version 1.11.10-alpine
* The deployment should contain
Next, deploy the application with a new version by performing a rolling update.
Finally, roll back that update to the previous version.
See the solution below.
Explanation
solution

QUESTION 73
List the "nginx-dev" and "nginx-prod" pods and delete those pods:
kubectl get pods -o wide
kubectl delete po nginx-dev
kubectl delete po nginx-prod

QUESTION 74
You have a Deployment named 'worker-deployment' that runs a set of worker Pods. You need to configure a PodDisruptionBudget (PDB) for this deployment, ensuring that at least 60% of the worker Pods are always available, even during planned or unplanned disruptions. How can you achieve this?
See the solution below with Step by Step Explanation.
Explanation:
Solution (Step by Step):
1. PDB YAML Definition:
2. Explanation:
- 'apiVersion: policy/v1': Specifies the API version for PodDisruptionBudget resources.
- 'kind: PodDisruptionBudget': Specifies that this is a PodDisruptionBudget resource.
- 'metadata.name: worker-pdb': Sets the name of the PDB.
- 'spec.selector.matchLabels: app: worker': This selector targets the Pods labeled with 'app: worker', ensuring the PDB applies to the 'worker-deployment' Pods.
- 'spec.minAvailable: 60%': Specifies that at least 60% of the total worker Pods must remain available during disruptions.
This means that if your deployment has 5 replicas, at least 3 Pods must remain running.
3. How it works: The 'minAvailable' field in the PDB can be specified as a percentage of the total number of Pods in the deployment or as an absolute number of Pods. In this case, we are using a percentage ('60%') to ensure a flexible approach to maintaining availability, even if the number of replicas changes.
4. Implementation: Apply the YAML using 'kubectl apply -f worker-pdb.yaml'.
5. Verification: You can verify the PDB's effectiveness by trying to delete Pods or simulating a node failure. The scheduler will prevent actions that would violate the 'minAvailable' constraint, ensuring that at least 60% of the worker Pods remain available.

QUESTION 75
Score: 13%
Task
A Kubernetes worker node named wk8s-node-0 is in state NotReady. Investigate why this is the case, and perform any appropriate steps to bring the node to a Ready state, ensuring that any changes are made permanent.
See the solution below.
Explanation
Solution:
sudo -i
systemctl status kubelet
systemctl start kubelet
systemctl enable kubelet

QUESTION 76
You have a Kubernetes cluster with two worker nodes and a single Nginx service deployed. You want to expose this service externally using a LoadBalancer service type but only want traffic to be directed to pods on a specific worker node. How would you achieve this?
See the solution below with Step by Step Explanation.
Explanation:
Solution (Step by Step):
1. Create a Node Selector: Create a Node Selector label on the worker node where you want to host the Nginx pods. Apply this configuration using 'kubectl apply -f node-config.yaml'.
2. Configure the Deployment: Update the Nginx deployment to include the Node Selector label in its pod template. Apply the updated deployment configuration using 'kubectl apply -f nginx-deployment.yaml'.
3.
Create a LoadBalancer Service: Create a LoadBalancer type service that selects the Nginx pods with the 'app=nginx' label. Apply the service configuration using 'kubectl apply -f nginx-service.yaml'.
4. Verify the Deployment: Confirm the deployment of the Nginx pods on the specified worker node using 'kubectl get pods -l app=nginx -o wide'. Check the LoadBalancer service's external IP address using 'kubectl get services nginx-service'. Access the Nginx service using the external IP address. All traffic should be routed to the pods on the worker node with the 'worker-type: nginx' label.

QUESTION 77
You have a deployment that runs multiple replicas of a web server application. You need to ensure that the Deployment always maintains at least 2 replicas available, even if one or more pods are deleted or become unavailable. How can you configure the Deployment to achieve this using the 'maxUnavailable' field in the 'strategy.rollingUpdate' section?
See the solution below with Step by Step Explanation.
Explanation:
Solution (Step by Step):
1. Define the Deployment with 'maxUnavailable': Define a Deployment YAML file with 'replicas: 3', indicating that you want three replicas of the web server application. Then, in the 'strategy.rollingUpdate' section, set the 'maxUnavailable' field to '1'.
2. Apply the Deployment: Apply the YAML file to your cluster using 'kubectl apply -f my-web-server.yaml'. The deployment will create three replicas of your web server application.
3. Test the 'maxUnavailable' Configuration: Delete or terminate one of the pods in the Deployment. The Deployment will automatically create a new pod to replace the deleted or unavailable one, ensuring that at least two replicas are always available. You can monitor the status of the deployment using 'kubectl get pods -l app=my-web-server'.
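The Deployment manifest described in step 1 is missing from the export; a sketch with the fields named there (replicas: 3, maxUnavailable: 1; the image is a placeholder):

```yaml
# my-web-server.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-web-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-web-server
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod down during a rolling update
      maxSurge: 1
  template:
    metadata:
      labels:
        app: my-web-server
    spec:
      containers:
      - name: web
        image: nginx   # placeholder image
```

With replicas: 3 and maxUnavailable: 1, at most one pod may be taken down during a rolling update, so at least two remain serving throughout the rollout.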
You should see that two pods are consistently running, while the third is being replaced.

QUESTION 78
Ensure a single instance of pod nginx is running on each node of the Kubernetes cluster, where nginx also represents the Image name which has to be used. Do not override any taints currently in place.
Use a DaemonSet to complete this task and use ds-kusc00201 as the DaemonSet name.
solution

QUESTION 79
Configure the kubelet systemd-managed service, on the node labelled with name=wk8s-node-1, to launch a pod containing a single container of Image httpd named webtool automatically. Any spec files required should be placed in the /etc/kubernetes/manifests directory on the node.
You can ssh to the appropriate node using:
[student@node-1] $ ssh wk8s-node-1
You can assume elevated privileges on the node with the following command:
[student@wk8s-node-1] $ sudo -i
solution

QUESTION 80
Score: 7%
Task
Given an existing Kubernetes cluster running version 1.20.0, upgrade all of the Kubernetes control plane and node components on the master node only to version 1.20.1.
Be sure to drain the master node before upgrading it and uncordon it after the upgrade.
You are also expected to upgrade kubelet and kubectl on the master node.
See the solution below.
Explanation
SOLUTION:
[student@node-1] > ssh ek8s
kubectl cordon k8s-master
kubectl drain k8s-master --delete-local-data --ignore-daemonsets --force
apt-get install kubeadm=1.20.1-00 kubelet=1.20.1-00 kubectl=1.20.1-00 --disableexcludes=kubernetes
kubeadm upgrade apply 1.20.1 --etcd-upgrade=false
systemctl daemon-reload
systemctl restart kubelet
kubectl uncordon k8s-master

QUESTION 81
List all service accounts and create a service account called "admin":
kubectl get sa
kubectl get sa --all-namespaces
kubectl create sa admin
# Verify
kubectl get sa admin -o yaml

QUESTION 82
You have a Deployment with 5 replicas.
You want to increase the number of replicas to 10, but only after ensuring that the new pods are healthy and ready to serve traffic.
See the solution below with Step by Step Explanation.
Explanation:
Solution (Step by Step):
1. Update the Deployment YAML: Update the 'replicas' field in the Deployment YAML to 10.
2. Apply the Changes: Apply the updated YAML file using 'kubectl apply -f my-deployment.yaml'.
3. Monitor Pod Status: Use 'kubectl get pods -l app=my-app' to monitor the status of the pods. Ensure that the new pods are in the 'Running' state and have a 'Ready' status.
4. Check Liveness and Readiness Probes: If applicable, ensure that liveness and readiness probes are configured to check the health of the pods. This helps in identifying and restarting unhealthy pods.
5. Verify Service Availability: Use 'kubectl get services my-service' to check the service status. Ensure that the service is still available and serving traffic.
6. Increase Replicas: Once the new pods are healthy and ready, the deployment will automatically scale up to 10 replicas.

Real Linux Foundation CKA Exam Questions [Updated 2025]: https://www.topexamcollection.com/CKA-vce-collection.html

Post date: 2025-03-31 09:19:05