Launching Operating Systems: Bare Metal, Containerization, and Virtualization
Launching an operating system can be done using three main techniques:
Bare Metal: The OS runs directly on the physical hardware without any abstraction layer.
Containerization: Leverages technologies like Docker to encapsulate applications and their dependencies into containers that share the host OS kernel.
Virtualization: Uses hypervisors, such as VMware or Nitro in AWS, to create virtual machines that emulate physical hardware, allowing multiple OS instances to run on the same physical machine.
Introduction to Kubernetes
Kubernetes is a container orchestration platform that manages the deployment, scaling, and operation of application containers. It can operate in two environments:
On-Premises: Kubernetes is deployed within a private data center.
Cloud: Kubernetes is hosted in a cloud environment, which can be:
Public Cloud: Examples include AWS (EKS), Azure (AKS), and GCP (GKE).
Private Cloud: Kubernetes is deployed in a dedicated cloud environment.
Kubernetes Architecture
Kubernetes consists of the following key components:
Master Node (Control Plane): Manages cluster state and coordinates the worker nodes.
Worker Nodes: Host application containers and provide the runtime environment for workloads.
Node: A physical or virtual machine with an OS and a container runtime (such as Docker) installed.
AWS Elastic Kubernetes Service (EKS)
AWS EKS is a managed Kubernetes service that simplifies the deployment and operation of Kubernetes clusters in AWS. Below are the steps to work with AWS EKS:
Setting Up AWS EKS
Download and Configure AWS CLI:
Install the AWS CLI on your local device.
Configure it with your credentials using:
aws configure
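As a non-interactive alternative to `aws configure`, the same settings can be written directly to the files the AWS CLI reads. The key values below are placeholders; the region is an illustrative choice.

```shell
# Write the standard AWS CLI credential and config files by hand.
# Replace the placeholder keys with your own credentials.
mkdir -p "$HOME/.aws"

cat > "$HOME/.aws/credentials" <<'EOF'
[default]
aws_access_key_id = AKIAEXAMPLEKEY
aws_secret_access_key = exampleSecretKey
EOF

cat > "$HOME/.aws/config" <<'EOF'
[default]
region = us-east-1
output = json
EOF
```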
Install eksctl:
Use eksctl to create and manage your EKS cluster.
Launch a Cluster:
Create a cluster with the following command:
eksctl create cluster --name lwcluster
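Instead of command-line flags, eksctl also accepts a declarative YAML config file. A minimal sketch follows: the cluster name matches the example above, while the region, instance type, and node count are illustrative values to adjust for your account.

```shell
# Generate an eksctl ClusterConfig file describing the cluster.
cat > lwcluster.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: lwcluster
  region: us-east-1

nodeGroups:
  - name: ng-1
    instanceType: t3.medium
    desiredCapacity: 2
EOF

# Then launch the cluster from the file:
# eksctl create cluster -f lwcluster.yaml
```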
Verify Nodes:
Check the nodes in your cluster:
kubectl get nodes
Managing Deployments in Kubernetes
Create a Deployment:
kubectl create deployment <name> --image=<image_name>
View Pods:
kubectl get pods
View Deployments:
kubectl get deployments
Delete a Deployment:
kubectl delete deployment <name>
Scale a Deployment:
kubectl scale deployment <name> --replicas=<number_of_replicas>
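The imperative commands above can also be expressed declaratively in a manifest. This sketch is roughly equivalent to `kubectl create deployment` followed by `kubectl scale --replicas=3`; the name `myapp` and the nginx image are illustrative.

```shell
# Write a Deployment manifest: 3 replicas of an nginx container.
cat > myapp-deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: nginx
EOF

# Apply (and re-apply after edits) with:
# kubectl apply -f myapp-deployment.yaml
```

The declarative form is easier to version-control and re-apply than a series of imperative commands.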
Load Balancing in Kubernetes
Kubernetes exposes applications through different Service types:
ClusterIP:
Default Service type; provides internal communication within the cluster.
Pods are not exposed to the outside world.
LoadBalancer:
Provisions an external load balancer from the cloud provider; on AWS this creates an Elastic Load Balancer (ELB).
Expose a deployment using:
kubectl expose deployment <name> --type=LoadBalancer --port=<port> --target-port=<target_port>
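The `kubectl expose` command above corresponds to a Service manifest like the following sketch; the deployment name, label selector, and port numbers are illustrative.

```shell
# Write a Service manifest of type LoadBalancer for a deployment
# whose pods carry the label app=myapp.
cat > myapp-service.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080
EOF

# kubectl apply -f myapp-service.yaml
# On EKS this provisions an AWS ELB; `kubectl get service myapp`
# shows its external hostname once the load balancer is ready.
```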
Additional Commands
Detailed Pod Information:
kubectl get pods -o wide
Delete an EKS Cluster:
eksctl delete cluster --name lwcluster
Load Balancing with Istio
By default, a Kubernetes Service distributes traffic roughly evenly across its pods.
For more advanced load balancing, such as weighted traffic distribution, Istio can be used.
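A weighted split in Istio is typically expressed with a VirtualService. The sketch below sends 90% of traffic to one version of a service and 10% to another; the host, subset names, and weights are illustrative, and the subsets `v1` and `v2` would be defined in a matching DestinationRule.

```shell
# Write an Istio VirtualService that splits traffic 90/10
# between two subsets of the same service.
cat > myapp-virtualservice.yaml <<'EOF'
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
    - myapp
  http:
    - route:
        - destination:
            host: myapp
            subset: v1
          weight: 90
        - destination:
            host: myapp
            subset: v2
          weight: 10
EOF

# kubectl apply -f myapp-virtualservice.yaml
```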