Q: What is the architecture of Kubernetes (K8s)?
Ans:
Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications.
The key objects are:
Pods
A pod is a higher level of abstraction grouping containerized components. A pod consists of one or more containers that are guaranteed to
be co-located on the host machine and can share resources. The basic scheduling unit in Kubernetes is a pod.
Each pod in Kubernetes is assigned a unique Pod IP address within the cluster, which allows applications to use ports without the risk of conflict.
Within a pod, all containers can reference each other on localhost, but a container in one pod has no way of directly addressing a container in another pod; for that, it has to use the Pod IP address. An application developer should never use the Pod IP address directly to reference or invoke a capability in another pod, because Pod IP addresses are ephemeral: the pod they refer to may be assigned a different IP address on restart. Instead, the developer should use a reference to a Service, which holds a reference to the target pod at its current Pod IP address.
A pod can define a volume, such as a local disk directory or a network disk, and expose it to the containers in the pod.
Such volumes are also the basis for the Kubernetes features of ConfigMaps (to provide access to configuration through the filesystem visible to the container) and
Secrets (to provide access to credentials needed to access remote resources securely, by providing those credentials on the filesystem visible only to authorized containers).
Pods can be managed manually through the Kubernetes API, or their management can be delegated to a controller.
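As a minimal sketch (the file name pod-definition.yml, the myapp label, and the nginx image are only illustrative assumptions), a pod declaration could look like this:
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod              # illustrative pod name
  labels:
    app: myapp                 # label that selectors (ReplicaSets, Services) can match on
spec:
  containers:
  - name: nginx-container      # assumed example container
    image: nginx               # assumed example image
    ports:
    - containerPort: 80
Such a file can be created in the cluster with kubectl create -f pod-definition.yml (see the command list further below).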
Replica Sets
A ReplicaSet’s purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a
specified number of identical Pods.
A ReplicaSet can also be described as a grouping mechanism that lets Kubernetes maintain the number of instances that have been declared for a given pod.
The definition of a ReplicaSet uses a selector, whose evaluation identifies all pods that are associated with it.
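A minimal ReplicaSet sketch, reusing the illustrative app: myapp label from the pod example above (all names are assumptions, not prescribed values):
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-replicaset
spec:
  replicas: 3                  # desired number of identical pods
  selector:
    matchLabels:
      app: myapp               # selector that identifies the pods this ReplicaSet owns
  template:                    # pod template used to create new replicas
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: nginx-container
        image: nginx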
Services
A Kubernetes service is a set of pods that work together, such as one tier of a multi-tier application. The set of pods that constitute a
service are defined by a label selector. Kubernetes provides two modes of service discovery, using environment variables or using Kubernetes DNS.
Service discovery assigns a stable IP address and DNS name to the service, and load balances traffic in a round-robin manner to network connections of that
IP address among the pods matching the selector (even as failures cause the pods to move from machine to machine). By default a service is
exposed inside a cluster (e.g., back end pods might be grouped into a service, with requests from the front-end pods load-balanced among them),
but a service can also be exposed outside a cluster (e.g., for clients to reach front-end pods).
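A minimal ClusterIP Service sketch, again assuming the illustrative app: myapp label; it gives the matching pods a stable virtual IP and DNS name inside the cluster:
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp          # label selector defining the set of pods behind this service
  ports:
  - port: 80            # port the service exposes inside the cluster
    targetPort: 80      # port the containers listen on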
Volumes
By default, the filesystems in Kubernetes containers provide ephemeral storage. This means that a restart of the pod will
wipe out any data in such containers, and therefore this form of storage is quite limiting in anything but trivial applications.
A Kubernetes Volume provides persistent storage that exists for the lifetime of the pod itself. This storage can also be used as shared disk space
for containers within the pod. Volumes are mounted at specific mount points within the container, which are defined by the pod configuration, and cannot
mount onto other volumes or link to other volumes. The same volume can be mounted at different points in the filesystem tree by different containers.
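A sketch of a pod sharing one volume between two containers; the emptyDir volume type, the busybox image, and the mount paths below are illustrative assumptions:
apiVersion: v1
kind: Pod
metadata:
  name: volume-demo
spec:
  volumes:
  - name: shared-data          # volume defined at the pod level
    emptyDir: {}               # scratch space that lives as long as the pod
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /data/out.txt && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data         # mount point inside this container
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /shared       # same volume mounted at a different path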
Deployments
Deployments are a higher-level management mechanism for ReplicaSets. While the ReplicaSet controller manages the scale of a ReplicaSet, Deployments
manage what happens to the ReplicaSet - whether an update has to be rolled out, rolled back, etc. When Deployments are scaled up or down, this results in the declaration
of the ReplicaSet changing - and this change in declared state is reconciled by the ReplicaSet controller.
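A minimal Deployment sketch (matching the illustrative deployment-definition.yml file name used in the command list below); the Deployment creates and manages the underlying ReplicaSet:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: nginx-container
        image: nginx
Changing the image in this template and re-applying the file triggers a rolling update of the underlying ReplicaSet, and kubectl rollout undo deployment myapp-deployment rolls it back.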
===================================================================
Kubernetes Components:
Etcd
Etcd is a distributed, consistent key-value store used for configuration management, service discovery, and coordinating distributed work.
When it comes to Kubernetes, etcd reliably stores the configuration data of the Kubernetes cluster, representing the state of the cluster (what nodes exist in the cluster,
what pods should be running, which nodes they are running on, and a whole lot more) at any given point of time.
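As an illustration only (it assumes direct access to an etcd member, the etcdctl v3 client, and the etcd client certificates; none of this is needed for day-to-day cluster use), the keys Kubernetes stores under its /registry prefix can be listed like this:
ETCDCTL_API=3 etcdctl get /registry --prefix --keys-only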
API Server
When you interact with your Kubernetes cluster using the kubectl command-line interface, you are actually communicating with the master API Server component.
The API Server is the main management point of the entire cluster. In short, it processes REST operations, validates them, and updates the corresponding objects in etcd.
The API Server serves up the Kubernetes API and is intended to be a relatively simple server, with most business logic implemented in separate components or in plugins.
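A quick way to see the API Server's REST interface directly is kubectl proxy, which forwards authenticated requests from localhost; the path below is the standard core-API path for pods in the default namespace (the port number is an arbitrary choice):
kubectl proxy --port=8001 &
curl http://localhost:8001/api/v1/namespaces/default/pods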
Controller Manager
The Kubernetes Controller Manager is a daemon that embeds the core control loops (also known as “controllers”) shipped with Kubernetes. Basically,
a controller watches the state of the cluster through the API Server watch feature and, when it gets notified, it makes the necessary changes attempting to move the
current state towards the desired state. Some examples of controllers that ship with Kubernetes include the Replication Controller, Endpoints Controller, and Namespace Controller.
In addition, the Controller Manager performs lifecycle functions such as namespace creation and lifecycle management, event garbage collection, terminated-pod garbage
collection, cascading-deletion garbage collection, node garbage collection, etc.
Scheduler
The Scheduler watches for unscheduled pods and binds them to nodes via the /binding pod subresource API, according to the availability of the requested resources,
quality of service requirements, affinity and anti-affinity specifications, and other constraints. Once the pod has a node assigned, the regular behavior of the
Kubelet is triggered and the pod and its containers are created.
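A sketch of the kind of constraints the Scheduler evaluates, using illustrative resource requests and a nodeSelector (the disktype: ssd label is an assumption about how the nodes are labelled):
apiVersion: v1
kind: Pod
metadata:
  name: scheduling-demo
spec:
  nodeSelector:
    disktype: ssd             # only nodes carrying this label are candidates
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "250m"           # the Scheduler only binds to nodes with this much free CPU
        memory: "256Mi"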
Q: What are the key roles of kubelet component in kubernetes?
Ans:
The kubelet is a crucial component in Kubernetes that manages individual nodes in a cluster. Its key roles include:
1) Container Management: It starts, stops, and monitors containers on a node based on Pod specifications.
2) Pod Lifecycle Management: It ensures the desired state of Pods is maintained by starting, stopping, and restarting containers as needed.
3) Node Monitoring: It continuously monitors the health and resource usage of the node and reports this information to the control plane.
4) Network Management: It configures network interfaces, assigns IP addresses to Pods, and maintains network connectivity within the cluster.
5) Resource Management: It enforces resource allocation and usage policies, ensuring containers stay within their allocated boundaries (see the sketch after this list).
6) Volume Management: It handles volume lifecycle, including mounting, creation, deletion, and attachment/detachment.
7) Image Management: It pulls container images onto the node, manages image caching, and optimizes resource usage.
8) Security and Isolation: It applies security policies and enforces resource quotas and limits to ensure containers run securely and in isolation.
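A sketch of a pod spec that exercises several of these roles (the limits and liveness probe values are illustrative assumptions): the kubelet pulls the image, enforces the resource limits on the node, and restarts the container when the probe fails.
apiVersion: v1
kind: Pod
metadata:
  name: kubelet-demo
spec:
  containers:
  - name: web
    image: nginx                  # image management: the kubelet pulls and caches this image
    resources:
      limits:
        cpu: "500m"               # resource management: enforced on the node by the kubelet
        memory: "128Mi"
    livenessProbe:                # pod lifecycle management: the kubelet restarts the container if this fails
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10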
Q: What are important commands in Kubernetes?
Ans:
To check all nodes
kubectl get nodes
To check details of the Nodes
kubectl describe nodes
To check more information about nodes
kubectl get nodes -o wide
To check no. of pods created
kubectl get pods
To check details of the pod
kubectl describe pods
To check more information about pods
kubectl get pods -o wide
To enter into the pods
kubectl exec -it <pod name> -- bash
To check replicaset
kubectl get replicaset
To check details of replicaset
kubectl describe replicaset
To check more details about replicaset
kubectl get replicaset -o wide
To create a pod with yaml file
kubectl create -f pod-definition.yml
To create replicaset with yaml file
kubectl create -f replicaset-definition.yml
To create deployment with yaml file
kubectl create -f deployment-definition.yml
To create deployment with --record to track change cause
kubectl create -f deployment-definition.yml --record
To list all deployments
kubectl get deployments
To check details about deployments
kubectl describe deployment
To check more details about deployments
kubectl get deployments -o wide
To check deployment in namespace
kubectl get deploy -n <namespace>
To check all objects & their properties in the K8s cluster
kubectl get all
To delete any pod
kubectl delete pod <pod1> <pod2>
To delete any replicaset
kubectl delete rs <replicasetname>
To delete any namespace with yaml (see the namespace.yml sketch after this list)
kubectl delete -f <namespace.yml>
To check all namespace
kubectl get ns
To set the namespace for the current context in the kubeconfig file
kubectl config set-context --current --namespace=<namespace>
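A minimal namespace.yml sketch for the create/delete-by-file commands above (the namespace name dev is an illustrative assumption):
apiVersion: v1
kind: Namespace
metadata:
  name: dev
kubectl create -f namespace.yml creates the namespace, and kubectl delete -f namespace.yml removes it again.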
Q: What are the prerequisites to manage Amazon Elastic Kubernetes Service (EKS)?
Ans:
You must install and configure the following tools and resources to create and manage an Amazon EKS cluster.
kubectl – A command line tool for working with Kubernetes clusters. This guide requires that you use version 1.25 or later. For more information, see Installing or updating kubectl.
eksctl – A command line tool for working with EKS clusters that automates many individual tasks. This guide requires that you use version 0.134.0 or later. For more information, see Installing or updating eksctl.
Required IAM permissions – The IAM security principal that you're using must have permissions to work with Amazon EKS IAM roles and service linked roles,
AWS CloudFormation, and a VPC and related resources. For more information, see Actions, resources, and condition keys for Amazon Elastic Container Service for
Kubernetes and Using service-linked roles in the IAM User Guide. You must complete all steps in this guide as the same user.
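With those prerequisites in place, a basic cluster can be created with a single eksctl command; my-cluster and region-code below are placeholders, exactly as in the kubeconfig step that follows:
eksctl create cluster --name my-cluster --region region-code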
Q: How do you configure your computer to communicate with your cluster?
Ans:
Create or update a kubeconfig file for your cluster. Replace region-code with the AWS Region that you created your cluster in. Replace my-cluster with the name of your cluster.
aws eks update-kubeconfig --region region-code --name my-cluster
By default, the config file is created in ~/.kube or the new cluster's configuration is added to an existing config file in ~/.kube.
Test your configuration.
kubectl get svc
Q: How do you enter into a pod?
Ans:
- kubectl get pods
- kubectl exec -it <pod name> -- bash
- cd htdocs (in this location we find index.html)
Q: How do you capture all logs of a pod into a file?
Ans: kubectl logs -f <pod name> > test.txt
Q: How do you list all namespaces in a cluster?
Ans: kubectl get ns
Q: How do you watch the current state of pods?
Ans: kubectl get pods --watch
Q: What is an endpoint in a Kubernetes Service?
Ans:
An Endpoints object is a resource that tracks the IP addresses and ports of one or more pods that are dynamically assigned to a Service.
Endpoints can be viewed using kubectl get endpoints.
Q: What are the types of K8s Services?
Ans:
ClusterIP
A ClusterIP service is the default Kubernetes service. It gives you a service inside your cluster that other apps inside your cluster can access. There is no external access.
NodePort
A NodePort service is the most primitive way to get external traffic directly to your service. NodePort, as the name implies, opens a specific port on all the Nodes (the VMs), and any traffic that is sent to this port is forwarded to the service.
LoadBalancer
A LoadBalancer service is the standard way to expose a service to the internet. On GKE, this will spin up a Network Load Balancer that will give you a single IP address that will forward all traffic to your service.
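A sketch of a Service exposed outside the cluster; the manifest below uses type: NodePort, and switching the type to LoadBalancer is enough to provision a cloud load balancer instead (the names, ports, and app: myapp label are illustrative assumptions):
apiVersion: v1
kind: Service
metadata:
  name: myapp-external
spec:
  type: NodePort              # change to LoadBalancer to get a cloud load balancer
  selector:
    app: myapp
  ports:
  - port: 80                  # port inside the cluster
    targetPort: 80            # container port
    nodePort: 30080           # port opened on every node (must be in the 30000-32767 range)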
Ingress
Unlike all the above examples, Ingress is actually NOT a type of service. Instead, it sits in front of multiple services and acts as a “smart router” or entrypoint into your cluster.
You can do a lot of different things with an Ingress, and there are many types of Ingress controllers that have different capabilities.
The default GKE ingress controller will spin up an HTTP(S) Load Balancer for you. This lets you do both path-based and subdomain-based routing to backend services.
For example, you can send everything on foo.yourdomain.com to the foo service, and everything under the yourdomain.com/bar/ path to the bar service.
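A sketch of an Ingress implementing exactly that routing (the host names and the foo and bar service names come from the example above; the rest is a minimal networking.k8s.io/v1 manifest):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: foo.yourdomain.com          # everything on this host goes to the foo service
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: foo
            port:
              number: 80
  - host: yourdomain.com
    http:
      paths:
      - path: /bar                    # everything under /bar goes to the bar service
        pathType: Prefix
        backend:
          service:
            name: bar
            port:
              number: 80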