Declarative Configuration
- You define desired state
- Kubernetes ensures actual state matches it
Structure of a YAML file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  template:            # Pod template (blueprint)
Three sections
- metadata
- spec
- status (managed by Kubernetes)
  - Current state, stored in etcd
- Kubernetes continuously compares spec vs status
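As a sketch of that comparison, the status block Kubernetes appends to a live Deployment (as read back from the API server) looks roughly like this; the values are illustrative:

status:
  replicas: 2            # Pods the Deployment currently owns
  updatedReplicas: 2     # Pods running the latest Pod template
  readyReplicas: 2       # Pods passing readiness checks
  availableReplicas: 2   # Pods available to serve traffic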
Labels & Selectors
- Deployment
  - Creates & manages Pods
  - Uses selector.matchLabels
- Service
  - Finds Pods
  - Uses selector
  - Load-balances traffic across them
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo                      # Name of the Deployment
  labels:
    app: mongodb                   # Label applied to the Deployment
spec:
  replicas: 2                      # Number of Pod replicas
  selector:
    matchLabels:                   # Selector tells the Deployment which Pods it manages
      app: mongodb
  template:                        # Pod template (blueprint)
    metadata:
      labels:
        app: mongodb               # Labels applied to Pods (must match selector)
    spec:
      containers:
      - name: mongodb
        image: mongo:5.0           # MongoDB container image
        ports:
        - containerPort: 27017     # Port MongoDB listens on
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-service              # Stable network endpoint, e.g. mongodb://mongo-service:27017
spec:
  type: ClusterIP                  # Internal service (default)
  selector:                        # Service finds Pods using these labels
    app: mongodb
  ports:
  - protocol: TCP
    port: 27017                    # Service port (clients connect here)
    targetPort: 27017              # Forward traffic to containerPort
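Because the Service selector matches app: mongodb, Kubernetes maintains an Endpoints object with the IPs of the matching Pods, and the Service load-balances across these addresses. A sketch of what it might contain (the Pod IPs are illustrative):

apiVersion: v1
kind: Endpoints
metadata:
  name: mongo-service        # Same name as the Service
subsets:
- addresses:
  - ip: 10.244.1.12          # One entry per ready MongoDB Pod (example IPs)
  - ip: 10.244.2.7
  ports:
  - port: 27017
    protocol: TCP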
Web App Pod
     |
     |  mongodb://mongo-service:27017
     v
Service (Stable IP + DNS)
     |
     |  load balancing
     v
MongoDB Pod(s)
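To make the diagram concrete, a hypothetical client Deployment could reach MongoDB through the Service's DNS name. Everything here except the connection string is an assumption (the app name, image, and env var name are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app                              # Hypothetical client application
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: my-web-app:1.0                # Placeholder image
        env:
        - name: MONGO_URL                    # Assumed env var the app reads its connection string from
          value: mongodb://mongo-service:27017   # Service DNS name, stable across Pod restarts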