Kubernetes Interview Questions

Last Updated: Jan 03, 2024


What is Kubernetes?

Kubernetes is an open-source container orchestration technology that helps us schedule and run application containers within and across clusters. A Kubernetes cluster consists of two types of resources:

The Master => Coordinates all activities in the cluster, for example scheduling applications, maintaining the applications' desired state, scaling applications, and rolling out new updates.

Nodes => A node is a VM or physical machine that serves as a worker machine in a Kubernetes cluster.

Each node has two components:

  • Kubelet => Agent for managing the node and communicating with the master
  • Container runtime (e.g. Docker/containerd) => Tool for running container operations
Kubernetes Cluster

Kubernetes is designed from the ground up as a loosely coupled collection of components centred around deploying, maintaining, and scaling workloads. It works as an engine for resolving state by converging the actual and the desired state of the system (self-healing). It abstracts away the underlying hardware of the nodes and provides a uniform interface for workloads to be deployed and to consume the shared pool of resources (hardware), in order to simplify deployment.

Pods are the smallest deployable units in Kubernetes: Kubernetes packages one or more containers into a higher-level structure called a pod. A pod therefore runs one level higher than a container.

A POD always runs on a node, and the containers in a pod share a few resources: shared volumes, a cluster-unique IP, and information about how to run each container. All containers in a pod are scheduled on the same node.
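
For illustration, a minimal pod spec might look like this (the name and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: hello-pod          # illustrative name
spec:
  containers:
  - name: hello            # single container in the pod
    image: nginx:1.25      # illustrative image/tag
    ports:
    - containerPort: 80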

Services provide a unified way of accessing the workloads running in pods. The control plane, the core of Kubernetes, exposes an API server that lets you query and manipulate the state of objects in Kubernetes.

POD

The following image describes the workflow of Kubernetes at a high level: the application description is a YAML file (also known as a configuration or spec file), with the help of which we can deploy applications, bundled in the form of pods, onto a cluster or node.

Kubernetes Flow

Basic Kubernetes Interview Questions

1. How to do maintenance activity on the K8 node?

Whenever security patches are available, the Kubernetes administrator has to perform the maintenance task of applying them to the running node in order to prevent vulnerabilities, which is often an unavoidable part of administration. The following two commands are useful to safely drain a K8s node:

  • kubectl cordon <node name>
  • kubectl drain <node name> --ignore-daemonsets

The first command marks the node as unschedulable (maintenance mode), and kubectl drain then evicts the pods from the node. Once the drain succeeds, you can perform the maintenance; a typical command sequence is sketched after the note below.

Note: If you wish to perform maintenance on a single node, the following two commands can be issued in order:

  • kubectl get nodes: to list all the nodes
  • kubectl drain <node name>: drain a particular node
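
A minimal sketch of the full flow (the node name is illustrative; uncordoning afterwards is the usual practice, though not part of the drain itself):

kubectl get nodes                          # identify the node to patch
kubectl cordon node-1                      # mark node-1 as unschedulable
kubectl drain node-1 --ignore-daemonsets   # evict regular pods; DaemonSet pods are left in place
# ... apply the security patch / perform maintenance on node-1 ...
kubectl uncordon node-1                    # make the node schedulable again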

2. How to get the central logs from POD?

The architecture depends upon the application and many other factors. The following are the common logging patterns:

  • Node level logging agent.
  • Streaming sidecar container.
  • Sidecar container with the logging agent.
  • Export logs directly from the application.

In this setup, journalbeat and filebeat run as DaemonSets. The logs they collect are dumped to a Kafka topic, which is eventually consumed by the ELK stack.

The same can be achieved using the EFK stack and fluent-bit.
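
A rough sketch of the node-level logging agent pattern, run as a DaemonSet (the image, namespace, and mount path are illustrative assumptions):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
  namespace: logging               # illustrative namespace
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
      - name: log-agent
        image: fluent/fluent-bit:2.2   # illustrative agent image/tag
        volumeMounts:
        - name: varlog
          mountPath: /var/log          # read node/container logs
      volumes:
      - name: varlog
        hostPath:
          path: /var/log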

3. How to monitor the Kubernetes cluster?

Prometheus is commonly used for Kubernetes monitoring. The Prometheus ecosystem consists of multiple components:

  • The main Prometheus server, which scrapes and stores time-series data.
  • Client libraries for instrumenting application code.
  • A push gateway for supporting short-lived jobs.
  • Special-purpose exporters for services like StatsD, HAProxy, Graphite, etc.
  • An Alertmanager to handle alerts and route them to various supported tools.
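
A minimal sketch of a Prometheus scrape configuration that discovers pods through the Kubernetes API (the annotation-based filtering shown here is an assumption of this particular setup):

scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod                     # discover all pods from the API server
    relabel_configs:
      # keep only pods annotated with prometheus.io/scrape: "true"
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"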

4. What are the various things that can be done to increase Kubernetes security?

By default, a POD can communicate with any other POD; we can set up network policies to limit this communication between PODs (a minimal example follows the list below). Other measures include:

  • RBAC (Role-Based Access Control) to narrow down permissions.
  • Use namespaces to establish security boundaries.
  • Set admission control policies to avoid running privileged containers.
  • Turn on audit logging.
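
For illustration, a NetworkPolicy that only allows ingress to pods labelled app: db from pods labelled app: api (all names are illustrative):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-to-db
spec:
  podSelector:
    matchLabels:
      app: db            # the policy applies to these pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: api       # only traffic from api pods is allowed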

5. What is the role of Load Balance in Kubernetes?

Load balancing is a way to distribute incoming traffic across multiple backend servers, which helps keep the application available to users.

Load Balancer

In Kubernetes, as shown in the figure above, all incoming traffic lands on a single IP address on the load balancer, which is a way to expose your service to the outside internet. The load balancer routes the incoming traffic to a particular pod (via a Service), typically using a round-robin algorithm. If a pod goes down, the load balancer is notified so that traffic is not routed to that unavailable pod. Thus load balancers in Kubernetes are responsible for distributing a set of tasks (incoming traffic) across the pods.
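
For example, assuming a Deployment named web already exists (the name and ports are illustrative), a cloud load balancer can be requested with:

kubectl expose deployment web --type=LoadBalancer --port=80 --target-port=8080
kubectl get service web   # the EXTERNAL-IP column shows the load balancer address once provisioned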


6. What is an init container and when can it be used?

Init containers set the stage before the actual application containers in the POD run; they run to completion before the app containers start. Typical uses:

  • Wait for some time before starting the app container, with a command like sleep 60.
  • Clone a git repository into a volume.
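
A minimal sketch of a POD with an init container (the image names and the 60-second delay are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-init
spec:
  initContainers:
  - name: wait-a-bit
    image: busybox:1.36
    command: ["sh", "-c", "sleep 60"]   # runs to completion before the app starts
  containers:
  - name: app
    image: nginx:1.25                   # the actual application container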

7. What is PDB (Pod Disruption Budget)?

A Kubernetes administrator can create an object of kind: PodDisruptionBudget to keep the application highly available. It ensures that the minimum number of running pods, specified by the minAvailable attribute in the spec file, is respected. This is useful while performing a drain, where the drain will halt until the PDB is respected, ensuring the high availability (HA) of the application. The spec file below sets minAvailable to 2, which means a minimum of two matching pods must remain available (even during a voluntary disruption such as a drain).

Example: YAML Config using minAvailable => 

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: zookeeper
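
After applying the manifest, the budget can be inspected with (the file name is illustrative):

kubectl apply -f zk-pdb.yaml
kubectl get pdb zk-pdb        # shows how many disruptions are currently allowed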

8. What are the various K8's services running on nodes and describe the role of each service?

Mainly, a K8s cluster consists of two types of nodes: executor (worker) and master.

Executor (worker) node services:

  • Kube-proxy: This service runs on every node and is responsible for the communication of pods within the cluster and with the outside network. It maintains the network rules used when your pod establishes network communication.
  • kubelet: Each node runs a kubelet service that keeps the node in line with the configuration (YAML or JSON) file it is given. NOTE: the kubelet service only manages containers created by Kubernetes.

Master services:

  • Kube-apiserver: Master API service which acts as an entry point to K8 cluster.
  • Kube-scheduler: Schedule PODs according to available resources on executor nodes.
  • Kube-controller-manager: A control loop that watches the shared state of the cluster through the apiserver and makes changes attempting to move the current state towards the desired stable state.
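
On most installations the master services run as pods in the kube-system namespace, so they can be inspected with (output varies by distribution):

kubectl get pods -n kube-system -o wide   # kube-apiserver, kube-scheduler, kube-controller-manager, kube-proxy, ...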

9. How do we control the resource usage of POD?

With the use of limits and requests, the resource usage of a POD can be controlled.

Request: The amount of resources requested for a container. If a container uses more than its request, it can be throttled back down to its request when the node comes under resource pressure.

Limit: An upper cap on the resources a single container can use. If it tries to exceed this predefined limit, it can be terminated if K8s decides that another container needs the resources. If you are sensitive to pod restarts, it makes sense to have the sum of all container resource limits be equal to or less than the total resource capacity of your cluster.

Example:

apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: example1
    image: example/example1
    resources:
      requests:
        memory: "64Mi"    # illustrative values
        cpu: "250m"
      limits:
        memory: "128Mi"   # illustrative values
        cpu: "500m"

Intermediate Interview Questions

1. What is Ingress Default Backend?

It specifies what to do with an incoming request to the Kubernetes cluster that isn't mapped to any backend, i.e., what to do when no rules match the incoming HTTP request. If a default backend service is not defined, it is recommended to define one so that users still see some kind of message instead of an unclear error.
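
With the networking.k8s.io/v1 Ingress API, the default backend is set via the defaultBackend field (the service name and port are illustrative):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-with-default
spec:
  defaultBackend:
    service:
      name: default-http-backend   # handles requests that match no rule
      port:
        number: 80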

2. What is GKE?

GKE is Google Kubernetes Engine, which is used for managing and orchestrating systems for Docker containers. With the help of the Google Public Cloud, it orchestrates the container cluster.
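
As an illustration, a small GKE cluster can be created with the gcloud CLI (the cluster name, zone, and node count are illustrative):

gcloud container clusters create demo-cluster --zone us-central1-a --num-nodes 3
gcloud container clusters get-credentials demo-cluster --zone us-central1-a   # configures kubectl for the cluster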

3. What is the purpose of operators?

Compared to stateless applications, where achieving desired state changes and upgrades is handled the same way for every replica, managing stateful Kubernetes applications is more challenging: each replica might be in a different state and may require different handling during an upgrade. Therefore, managing stateful applications often requires a human operator. A Kubernetes Operator is meant to assist with and automate this work, and it paves the way for a standard process to be automated across several Kubernetes clusters.

4. What is an Operator?

As an extension to K8s, an operator provides the capability of managing applications and their components using custom resources. Operators generally comply with Kubernetes principles, especially the control loop.
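
Operators typically define a CustomResourceDefinition (CRD) and then reconcile objects of that kind; a minimal CRD sketch (the group, kind, and field names are illustrative):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com          # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              schedule:
                type: string         # e.g. a cron expression the operator acts on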

5. What service and namespace are referred to in the following file?

apiVersion: v1
kind: ConfigMap
metadata:
  name: some-configmap
data:
  some_url: silicon.chip

It is clear from the above file that the value “silicon.chip” refers to the service “silicon” in the namespace “chip”, following the <service>.<namespace> DNS convention.

6. Why should namespaces be used? How does using the default namespace cause problems?

Over time, using the default namespace alone proves to be difficult, since you cannot get a good overview of all the applications you manage within the cluster as a whole. Namespaces allow applications to be organized into groups that make sense, such as a namespace for all monitoring applications and another for all security applications.

Additionally, namespaces can be used for managing Blue/Green environments, where each namespace contains its own version of an app while sharing resources that sit in other namespaces (such as logging or monitoring). It is also possible to have one cluster with multiple teams by using namespaces; if multiple teams use the same cluster without them, conflicts may arise. Suppose they end up creating an app with the same name: one team will override the app created by the other team, as Kubernetes prohibits two apps with the same name within the same namespace.
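
For example (the namespace name is illustrative):

kubectl create namespace monitoring                           # create a dedicated namespace
kubectl get pods -n monitoring                                # list pods only in that namespace
kubectl config set-context --current --namespace=monitoring   # make it the default for the current context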

7. How should TLS be configured with Ingress?

Add tls and secretName entries to the Ingress spec:

spec:
 tls:
 - hosts:
   - some_app.com
   secretName: someapp-secret-tls
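
The referenced secret must exist in the same namespace and hold the certificate and key; it is typically created with (the certificate and key paths are illustrative):

kubectl create secret tls someapp-secret-tls --cert=path/to/tls.crt --key=path/to/tls.key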

8. Complete the following configuration spec file to make it an Ingress

metadata:
  name: someapp-ingress
spec:

Explanation -

One of several ways to answer this question:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: someapp-ingress
spec:
  rules:
  - host: my.host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: someapp-internal-service
            port:
              number: 8080

9. How to turn the service defined below in the spec into an external one?

spec:
  selector:
    app: some-app
  ports:
    - protocol: UDP
      port: 8080
      targetPort: 8080

Explanation - 

Add type: LoadBalancer and nodePort as follows:

spec:
 selector:
   app: some-app
 type: LoadBalancer
 ports:
   - protocol: UDP
     port: 8080
     targetPort: 8080
     nodePort: 32412

Kubernetes Interview Questions For Experienced

1. How to troubleshoot if the POD is not getting scheduled?

In K8s, the scheduler is responsible for placing pods onto nodes. Many factors can lead to an unstartable POD; the most common one is running out of resources. Use a command like kubectl describe pod <POD> -n <NAMESPACE> to see the reason why the POD has not started. Also, keep an eye on kubectl get events to see all events coming from the cluster.
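
A typical troubleshooting sequence (the pod and namespace names are illustrative):

kubectl get pods -n demo                                            # find pods stuck in Pending
kubectl describe pod my-pod -n demo                                 # the Events section explains why scheduling failed
kubectl get events -n demo --sort-by=.metadata.creationTimestamp    # cluster events in chronological order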

2. How can the port chain 8080 (container) -> 8080 (service) -> 8080 (ingress) -> 80 (browser) be configured?

The ingress exposes port 80 externally for the browser to access and connects to a service that listens on 8080; the ingress listens on port 80 by default. An "ingress controller" is a pod that receives external traffic and handles the ingress, and it is configured by an ingress resource. For this you need to configure the ingress selector; if no 'ingress controller selector' is mentioned, then no ingress controller will manage the ingress.

A simple ingress config will look like:

rules:
- host: abc.org
  http:
    paths:
    - path: /
      pathType: Prefix
      backend:
        service:
          name: abc-service
          port:
            number: 8080

Then the service will look like:

kind: Service
apiVersion: v1
metadata:
  name: abc-service
spec:
  ports:
  - protocol: TCP
    port: 8080        # port on which the service listens
    targetPort: 8080  # container port to which the traffic is forwarded

Additional Resources

Kubernetes Vs Openshift

Kubernetes Cheat Sheet

Kubectl Commands

Kubernetes vs Docker

3. What are the different ways to provide external network connectivity to K8?

By default, a POD can reach the external network, but for traffic in the opposite direction we need to make some changes. The following options are available to reach a POD from the outside world (a NodePort sketch follows at the end of this answer):

  • NodePort (exposes a port on each node through which the service can be reached)
  • Load balancers (operate at L4 of the TCP/IP stack)
  • Ingress (operates at L7 of the TCP/IP stack)

Another method is to use kubectl proxy, which can expose a service that has only a cluster IP on a local system port.

$ kubectl proxy --port=8080
http://localhost:8080/api/v1/proxy/namespaces/<NAMESPACE>/services/<SERVICE-NAME>:<PORT-NAME>/
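
As an illustration of the NodePort option (the names and port numbers are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web             # pods to route traffic to
  ports:
  - port: 80             # service (cluster) port
    targetPort: 8080     # container port
    nodePort: 30080      # exposed on every node (default range 30000-32767)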

4. How to run a POD on a particular node?

Various methods are available to achieve this (a nodeSelector sketch follows the list).

  • nodeName: specify the name of a node in the POD spec configuration; the POD will then be run on that specific node.
  • nodeSelector: assign a specific label to a node that has special resources, and use the same label in the POD spec so that the POD runs only on that node.
  • Node affinities: requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution are the hard and soft requirements for running the POD on specific nodes. This will be replacing nodeSelector in the future. It depends on the node labels.
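
A minimal nodeSelector sketch (the label key/value, node name, and image are illustrative); the node is labelled first, then the POD spec references the label:

kubectl label nodes node-1 hardware=gpu

apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload
spec:
  nodeSelector:
    hardware: gpu          # the POD is only scheduled on nodes carrying this label
  containers:
  - name: app
    image: example/app:1.0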

5. Can you explain the differences between Docker Swarm and Kubernetes?

Below are the main differences between Kubernetes and Docker Swarm:

  • The installation procedure of K8s is quite involved, but once installed the cluster is robust. On the other hand, the Docker Swarm installation process is very simple, but the cluster is not nearly as robust.
  • Kubernetes supports auto-scaling of pods based on incoming load, whereas Docker Swarm does not.
  • Kubernetes is a full-fledged framework. Since it maintains cluster state more consistently, scaling is not as fast as in Docker Swarm.

6. What does the following in the Deployment configuration file mean?

spec:
  containers:
    - name: some-container        # illustrative container name
      env:
        - name: USER_PASSWORD
          valueFrom:
            secretKeyRef:
              name: some-secret
              key: password

Explanation -

The USER_PASSWORD environment variable will store the value of the password key in the secret called "some-secret". In other words, it references a value from a Kubernetes Secret.
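
The referenced secret could have been created, for example, with the following command (the literal value is a placeholder):

kubectl create secret generic some-secret --from-literal=password='<PASSWORD>'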

7. What is Kubernetes Load Balancing?

Load balancing is one of the most common and standard ways of exposing services. There are two types of load balancing in K8s:

Internal load balancer – This type of balancer automatically balances the load and distributes the incoming traffic to the pods; it is reachable only from within the cluster's network.

External load balancer – This type of balancer directs traffic from external sources to the backend pods.

8. How to run Kubernetes locally?

Kubernetes can be set up locally using the Minikube tool. It runs a single-node cluster inside a VM on your computer, which makes it a convenient option for users who are just starting to learn Kubernetes.
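
A quick local start looks like this (the driver minikube picks depends on your machine):

minikube start               # starts a local single-node cluster
kubectl get nodes            # should show the single minikube node as Ready
minikube dashboard           # optional: opens the Kubernetes dashboard in a browser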

Kubernetes MCQ

1. What are the main benefits that Deployments offer that Replication Controllers do not?
2. What is the default Service type (ServiceType) if you do NOT specify a value?
3. What is the default protocol for a Service?
4. Which of the following commands allow you to validate a cluster created with Kubernetes operations?
5. Which of the following describes the Google Container Engine (GKE)?
6. Which of the following is true about Pods and IP addressing?
7. What is the atomic unit of scheduling in K8s?
8. What is the default range of ports used to expose a NodePort service?
9. Which of the following kubeadm commands creates a new cluster?
10. Which component of the K8s worker stack registers Nodes with the cluster and watches the "apiserver" for new work?
11. Which of the following commands gives you detailed info on a Pod?
12. You want to deploy two tightly coupled containers that share a volume and some memory. What is the best option?
13. Which of the following is the best option for creating a local Kubernetes development environment on your local machine?
14. Which programming language is Kubernetes written in?
15. Which Operating System does Kubernetes run on?
