Kubernetes
Kubernetes (K8s) orchestrates containerized applications — automating deployment, scaling, and management. Key concepts: Pod (smallest deployable unit, 1+ containers), Deployment (manages replica sets), Service (stable networking for pods), ConfigMap/Secret (configuration), Ingress (HTTP routing). K8s ensures desired state: if a pod dies, it's automatically replaced.
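A minimal sketch of a standalone Pod manifest (the name and image are illustrative; in practice Pods are usually created indirectly through a Deployment rather than by hand):

apiVersion: v1
kind: Pod
metadata:
  name: hello-pod            # hypothetical name for this sketch
  labels:
    app: hello
spec:
  containers:                # a Pod wraps one or more containers
  - name: hello
    image: nginx:1.25        # assumed example image
    ports:
    - containerPort: 80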
Key Concepts
Deep Dive: Core Architecture
┌──────────────── Control Plane ──────────────────┐
│ API Server  →  etcd (state store)               │
│ Scheduler   →  Controller Manager               │
└─────────────────────┬───────────────────────────┘
                      │
┌─────── Worker Node 1 ───────┐   ┌─── Worker Node 2 ───┐
│ kubelet │ kube-proxy        │   │ kubelet │ kube-proxy│
│ ┌─────┐   ┌─────┐           │   │ ┌─────┐             │
│ │Pod A│   │Pod B│           │   │ │Pod C│             │
│ └─────┘   └─────┘           │   │ └─────┘             │
└─────────────────────────────┘   └─────────────────────┘
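To map the diagram onto a live cluster (component layout varies by distribution; control-plane components often run as pods in kube-system):

# List nodes and their roles/status
kubectl get nodes -o wide
# Control-plane and node components usually show up here
kubectl get pods -n kube-system
# Inspect a single node: capacity, conditions, scheduled pods
kubectl describe node <node-name>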
Deep Dive: Key Resources
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:1.0
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
Service (ClusterIP, NodePort, LoadBalancer): a Service gives a set of Pods a stable virtual IP and DNS name. ClusterIP exposes it inside the cluster only, NodePort additionally opens a port on every node, and LoadBalancer provisions an external load balancer through the cloud provider.
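A minimal sketch of a ClusterIP Service fronting the Deployment above (name and ports are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: ClusterIP            # default; NodePort / LoadBalancer expose it externally
  selector:
    app: my-app              # matches the Pod labels from the Deployment template
  ports:
  - port: 80                 # port the Service listens on
    targetPort: 8080         # containerPort of the Pods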
Deep Dive: Scaling & Self-Healing
# Manual scaling
kubectl scale deployment my-app --replicas=5
# Auto-scaling
kubectl autoscale deployment my-app --min=2 --max=10 --cpu-percent=80
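The autoscale command creates a HorizontalPodAutoscaler behind the scenes; roughly the same policy expressed declaratively with the autoscaling/v2 API (CPU metrics require metrics-server):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80   # scale out when average CPU exceeds 80% of requests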
Self-healing: If a pod crashes, the Deployment controller creates a replacement.
Rolling update: Gradually replaces old pods with new ones (zero downtime).
Rollback: kubectl rollout undo deployment my-app
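A typical rolling-update flow, using a hypothetical my-app:1.1 image tag:

# Point the Deployment at a new image; old Pods are replaced gradually
kubectl set image deployment/my-app my-app=my-app:1.1
# Watch new Pods come up while old ones drain
kubectl rollout status deployment/my-app
# Revert to the previous revision if the new version misbehaves
kubectl rollout undo deployment/my-app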
Common Interview Questions
- What is Kubernetes? What problem does it solve?
- What is a Pod? How is it different from a container?
- What is a Deployment? What is a ReplicaSet?
- What are Service types (ClusterIP, NodePort, LoadBalancer)?
- What is an Ingress?
- How does K8s handle scaling?
- What is a ConfigMap/Secret?
- How does K8s ensure high availability?