Basic

Common root elements of all K8S config files:

apiVersion:
kind:
metadata:
  name:
  namespace: # OPTIONAL
  labels:    # OPTIONAL

spec:

apiVersion ~ kind mapping

Kind                    API Version   Short Name
Pod                     v1            po
ReplicationController   v1            rc
ReplicaSet              apps/v1       rs
Deployment              apps/v1       deploy
Service                 v1            svc
PersistentVolume        v1            pv
PersistentVolumeClaim   v1            pvc
Namespace               v1            ns
ConfigMap               v1            cm
Secret                  v1            -
  • kubectl api-resources - shows the complete list of resources (the table above is a subset)

Pod

apiVersion: v1 
kind: Pod
metadata:
  name: sample-pod
  labels:
    app: test-pod

spec:
  containers:
    - name: busybox
      image: busybox:1.32
      tty: true
    - name: nginx
      image: nginx:1.18.0
  • kubectl describe pod sample-pod
  • kubectl exec -it sample-pod -- sh (busybox)
  • kubectl exec -c nginx -it sample-pod -- bash (nginx)
  • All containers in the same Pod can reach each other on localhost
    
    user$ kubectl exec -c busybox sample-pod -- netstat -lntp
    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
    tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      -
    tcp        0      0 :::80                   :::*                    LISTEN      -
    
  • EXPOSE as SERVICE - kubectl expose pod sample-pod --name=sample-srv --port=8080 --target-port=80
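The expose command above generates a Service roughly equivalent to the following sketch (the selector is taken from the Pod's app: test-pod label):

apiVersion: v1
kind: Service
metadata:
  name: sample-srv
spec:
  selector:
    app: test-pod
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80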

Note: kubectl run redis --image=redis --restart=Never --dry-run -o yaml > redis-pod.yml

Container Lifecycle Events

apiVersion: v1 
kind: Pod
metadata:
  name: sample-pod
  labels:
    app: test-pod

spec:
  containers:

    - name: nginx
      image: nginx:1.18.0

    - name: busybox
      image: busybox:1.32
      command:
        - sh
        - -c
        - |
          for i in $(seq 2 2 8); do
            echo "Main: $i" >> /var/main.log
            sleep 2
          done
          sh
      tty: true
      lifecycle:
        postStart:
          exec:
            command:
              - sh
              - -c
              - |
                for i in $(seq 1 1 8); do
                  echo "PostStart: $i" >> /var/post-start.log
                  sleep 2
                done

Now watch the output:

watch -t "
  kubectl get po; 
  echo '\n---------- main.log\n';
  kubectl exec -it sample-pod -c busybox -- cat /var/main.log
  echo '\n---------- post-start.log\n';
  kubectl exec -it sample-pod -c busybox -- cat /var/post-start.log
  echo '\n---------- Nginx\n';
  kubectl logs sample-pod nginx"
  • Two Events [REF]
    • postStart
      • no guarantee that the hook will execute before the container ENTRYPOINT
      • runs asynchronously relative to the Container’s code
    • preStop (a minimal sketch follows this list)
    • For each hook, use only one of the following handler types:
      • exec
      • httpGet
      • tcpSocket
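For reference, a minimal preStop sketch (the Pod name is hypothetical; the sleep is a crude drain delay executed before the container receives SIGTERM):

apiVersion: v1
kind: Pod
metadata:
  name: sample-pod-prestop   # hypothetical name
spec:
  containers:
    - name: nginx
      image: nginx:1.18.0
      lifecycle:
        preStop:
          exec:
            command: ["sh", "-c", "sleep 5"]   # runs to completion, then SIGTERM is sent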

Replication Controller

apiVersion: v1
kind: ReplicationController
metadata:
  name: sample-rc

spec:
  replicas: 2
  template: # POD DEFINITION vvv
    metadata:
      # 'name' here is IGNORED
      labels: # REQUIRED for its implicit selector
        app: test-rc
    spec:
      containers:
        - name: busybox
          image: busybox:1.32
          tty: true
        - name: nginx
          image: nginx:1.18.0
          ports: # OPTIONAL (nginx is on port 80 by default)
            - containerPort: 80

ReplicaSet

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: sample-rs

spec:
  replicas: 2
  selector: # REQUIRED
    matchLabels:
      app: test-rs
  template: # POD DEFINITION vvv
    metadata:
      labels:
        app: test-rs
    spec:
      containers:
        - name: busybox
          image: busybox:1.32
          tty: true
        - name: nginx
          image: nginx:1.18.0
          ports:
            - containerPort: 80
  • Change Replicas:
    • Update file with new replicas value and kubectl replace -f FILE (apply works the same)
    • kubectl scale --replicas=3 -f FILE
    • kubectl scale --replicas=3 replicaset sample-rs
  • Non-Template Pod acquisitions [REF] (see the sketch after this list)

    While you can create bare Pods with no problems, it is strongly recommended to make sure that the bare Pods do not have labels which match the selector of one of your ReplicaSets. The reason for this is because a ReplicaSet is not limited to owning Pods specified by its template – it can acquire other Pods in the manner specified in the previous sections.
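To illustrate that warning, a bare Pod like the following sketch (hypothetical name) would be adopted by sample-rs because its label matches the selector, and it would count towards replicas: 2:

apiVersion: v1
kind: Pod
metadata:
  name: bare-pod          # hypothetical; not created from the ReplicaSet template
  labels:
    app: test-rs          # matches the sample-rs selector, so the ReplicaSet acquires it
spec:
  containers:
    - name: busybox
      image: busybox:1.32
      tty: true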

Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-dep
spec:
  replicas: 2
  selector:
     matchLabels:
       app: test-dep
  template:
    metadata:
      labels:
        app: test-dep
    spec:
      containers:
        - name: busybox
          image: busybox:1.32
          tty: true
        - name: nginx
          image: nginx:1.18.0
  • Mostly similar to ReplicaSet!
  • Deployment Strategy - Recreate and RollingUpdate (default); an example strategy block is sketched after this list
  • Rolling
    • kubectl rollout status deployment sample-dep - shows current deployment status
    • kubectl rollout restart deployment sample-dep - restart deployment (i.e. redeploy)
    • kubectl rollout history deployment sample-dep - shows all revisions
    • kubectl rollout history deployment sample-dep --revision=NUM - shows specific revision detail
    • kubectl rollout undo deployment sample-dep - rolls back to the previous revision
    • kubectl set image deployment sample-dep busybox=busybox:1.32-glibc - updates a container image (triggers a new rollout)
  • kubectl run nginx --image=nginx - creates a Deployment (on kubectl older than v1.18; newer versions create a bare Pod instead)
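The rollout behaviour can be tuned via the strategy field in the Deployment spec. A sketch of the relevant fragment (values are illustrative, not taken from sample-dep):

spec:
  strategy:
    type: RollingUpdate        # default; the alternative is Recreate
    rollingUpdate:
      maxSurge: 1              # extra Pods allowed above 'replicas' during an update
      maxUnavailable: 0        # Pods that may be unavailable during an update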

Service

Kubernetes networking addresses four concerns [REF]:

  • Containers within a Pod use networking to communicate via loopback.
  • Cluster networking provides communication between different Pods.
  • The Service resource lets you expose an application running in Pods to be reachable from outside your cluster.
  • You can also use Services to publish services only for consumption inside your cluster.

Four Types [REF]

  • ClusterIP
    • Default Value
    • Makes the Service only reachable from within the cluster
  • NodePort
    • Exposes the Service on each Node’s IP at a static port
    • A ClusterIP Service, to which the NodePort Service routes, is automatically created
  • LoadBalancer
    • Exposes the Service externally using a cloud provider’s load balancer
    • NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created
  • ExternalName
    • Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value
    • No proxying of any kind is set up
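Only ClusterIP and NodePort are exercised below; for completeness, a minimal ExternalName sketch (the Service name is hypothetical, the target host is the example from above):

apiVersion: v1
kind: Service
metadata:
  name: sample-srv-ext            # hypothetical name
spec:
  type: ExternalName
  externalName: foo.bar.example.com   # DNS returns a CNAME to this host; no proxying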

Cluster IP

apiVersion: v1
kind: Service
metadata:
  name: sample-srv-cip
spec:
  # type: ClusterIP # NOT REQUIRED, AS DEFAULT
  selector:
    app: test-dep
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  • kubectl exec -it sample-dep-XXXXXXXX -- sh
    • wget -qO - http://sample-srv-cip
    • Search DNS
      
      # nslookup sample-srv-cip
      Server:         10.43.0.10
      Address:        10.43.0.10:53
          
      Name:   sample-srv-cip.default.svc.cluster.local
      Address: 10.43.110.139
      

      So sample-srv-cip has all alternative names:

      • sample-srv-cip.default
      • sample-srv-cip.default.svc
      • sample-srv-cip.default.svc.cluster.local
  • FORMAT: SERVICE_NAME.NAMESPACE.svc.DOMAIN (an FQDN example follows this list)
  • Now copy deployment and service as sample-dep2 and sample-srv-cip2, and apply them.
    • You can ping and wget other services from the pods.
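For example, either service can be reached from a pod by its fully qualified name, which also works across namespaces (a sketch; replace the pod name placeholder with a real pod):

kubectl exec sample-dep-XXXXXXXX -- wget -qO - http://sample-srv-cip2.default.svc.cluster.local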

Node Port

apiVersion: v1
kind: Service
metadata:
  name: sample-srv-np
spec:
  type: NodePort
  selector:
    app: test-dep
  ports:
    - protocol: TCP
      port: 80         # REQUIRED
      targetPort: 80   # if not set, same as port
      nodePort: 31111  # DEFAULT RANGE [30000, 32767]. If not set, one is allocated randomly from the range
  • Access http://NODE:31111

Volume

apiVersion: v1
kind: Pod
metadata:
  name: sample-pod-vol
  labels:
    app: test-pod-vol

spec:
  containers:
    - name: busybox
      image: busybox:1.32
      tty: true
      volumeMounts:
        - mountPath: /data
          name: data-dir
    - name: nginx
      image: nginx:1.18.0
      volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: html-dir
          readOnly: true
  volumes:
    - name: data-dir
      hostPath:
        path: /opt/test-pod
        type: DirectoryOrCreate
    - name: html-dir
      hostPath:
        path: /opt/test-pod
        type: DirectoryOrCreate
  • kubectl exec -it sample-pod-vol -- sh
    
    # wget -qO - http://localhost
    wget: server returned error: HTTP/1.1 403 Forbidden
    # echo "Hello" > /data/index.html
    # wget -qO - http://localhost
    Hello
    

Storage

PV

[REF]

  • A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes.
  • Independent lifecycle of any Pod that uses the PV
  • Two types of provisioning
    • Static
    • Dynamic
      • Used when no static PV matches the user’s PVC request
      • Based on StorageClasses; the PVC must refer to a StorageClass
      • Administrator must have created and configured that class for dynamic provisioning
      • Claims that request the class "" effectively disable dynamic provisioning for themselves.
  • A PVC to PV binding is a one-to-one mapping, using a ClaimRef which is a bi-directional binding between the PV and the PVC
  • Reserve a PV
    • Specifying the PV name inside the PVC (spec.volumeName; see the sketch after this list)
    • Semi-Reserve: using labels in PV and selectors in PVC (not in the REF)
  • Access modes:
    • ReadWriteOnce – the volume can be mounted as read-write by a single node
    • ReadOnlyMany – the volume can be mounted read-only by many nodes
    • ReadWriteMany – the volume can be mounted as read-write by many nodes
  • PVC
    • Request for storage by a user and consume PV resources
    • Labels set on a PV can be ignored during matching (a PVC without a selector can still bind); however, if the PVC defines a selector, it is considered during matching
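A sketch of reserving a specific PV from the claim side via spec.volumeName (the claim name is hypothetical; it pins the claim to the sample-pv1 volume defined in the next block):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sample-pvc-reserved     # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeName: sample-pv1        # bind explicitly to this PV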
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sample-pv1
  labels:
    type: nfs
    target: sample1
spec:
  accessModes:                             # REQUIRED
    - ReadWriteOnce
  capacity:                                # REQUIRED
    storage: 1Gi                           # UNITS: Ki,Mi,Gi,Ti (power-of-two units)
  #volumeMode: Filesystem                  # Filesystem (default), Block
  #persistentVolumeReclaimPolicy: Retain   # Retain (default), Delete, Recycle (deprecated)
  nfs:
    path: /opt/NFS/TEST
    server: nfs.server.local
  • kubectl get pv
    
    NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
    sample-pv1  1Gi        RWO            Retain           Available                                   65s
    

PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sample-pvc1
spec:
  accessModes:          # REQUIRED
    - ReadWriteOnce
  resources:            # REQUIRED
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      type: nfs
      target: sample1
  • kubectl get pvc
    
    NAME         STATUS   VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    sample-pvc1  Bound    sample-pv1  1Gi        RWO                           16s
    
  • kubectl get pv
    
    NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS   REASON   AGE
    sample-pv1   1Gi        RWO            Retain           Bound    default/sample-pvc1                           39s
    

Usage

apiVersion: v1
kind: Pod
metadata:
  name: sample-pod-pv.pvc1
  labels:
    app: test-pod-pv.pvc1

spec:
  containers:
    - name: busybox
      image: busybox:1.32
      tty: true
      volumeMounts:
        - mountPath: /data
          name: html-dir
    - name: nginx
      image: nginx:1.18.0
      volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: html-dir

  volumes:
    - name: html-dir
      persistentVolumeClaim:
        claimName: sample-pvc1

Configuration Data

Config Map

apiVersion: v1
kind: ConfigMap
metadata:
  name: sample-cm
data:
  K1: V1
  K2: V2
  other.txt: |
    This is a text content, stored in file via ConfigMap volume!

---

apiVersion: v1 
kind: Pod
metadata:
  name: sample-pod-cm
spec:
  containers:
    - name: busybox
      image: busybox:1.32
      tty: true
      envFrom:
        - configMapRef:
            name: sample-cm     # INCLUDE ALL KEYS
      env:
        - name: K11
          valueFrom:
            configMapKeyRef:
              name: sample-cm
              key: K1           # JUST GET THE KEY
      volumeMounts:
        - name: cmdata
          mountPath: /opt/config-map   # matches the listing shown below
  volumes:
    - name: cmdata
      configMap:
        name: sample-cm
  • kubectl exec -it sample-pod-cm -- env
    
    PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    HOSTNAME=sample-pod-cm
    TERM=xterm
    K1=V1
    K2=V2
    K11=V1
    KUBERNETES_PORT_443_TCP_ADDR=10.43.0.1
    KUBERNETES_SERVICE_HOST=10.43.0.1
    KUBERNETES_SERVICE_PORT=443
    KUBERNETES_SERVICE_PORT_HTTPS=443
    KUBERNETES_PORT=tcp://10.43.0.1:443
    KUBERNETES_PORT_443_TCP=tcp://10.43.0.1:443
    KUBERNETES_PORT_443_TCP_PROTO=tcp
    KUBERNETES_PORT_443_TCP_PORT=443
    HOME=/root
    
  • kubectl exec -it sample-pod-cm -- sh
    
    # ls -l /opt/config-map
    total 0
    lrwxrwxrwx    1 root     root             9 Dec 14 09:12 K1 -> ..data/K1
    lrwxrwxrwx    1 root     root             9 Dec 14 09:12 K2 -> ..data/K2
    lrwxrwxrwx    1 root     root            16 Dec 14 09:12 other.txt -> ..data/other.txt
    
    # cat /opt/config-map/other.txt
    This is a text content, stored in file via ConfigMap volume!
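The same ConfigMap could also be created imperatively instead of from the manifest above (a sketch; other.txt is assumed to be a local file with the text content):

kubectl create configmap sample-cm \
  --from-literal=K1=V1 \
  --from-literal=K2=V2 \
  --from-file=other.txt     # key defaults to the file name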
    

Secret

  • Store and manage sensitive information
  • Secret vs ConfigMap [Ref1, Ref2]

    The big difference between Secrets and ConfigMaps is that Secrets are obfuscated with Base64 encoding. There may be more differences in the future, but it is good practice to use Secrets for confidential data (like API keys) and ConfigMaps for non-confidential data (like port numbers).

  • Secret can simply be created by CLI or file:
    • CLI (imperative)
      • echo 'Sample File for Secret' > myfile.txt
      • kubectl create secret generic sample-sec --from-file=message.txt=myfile.txt --from-literal=K1=V1
    • File
      
      apiVersion: v1
      kind: Secret
      metadata:
        name: sample-sec
      type: Opaque        # DEFAULT
      stringData:         # plaintext values; use 'data' for Base64-encoded values
        K1: V1
        message.txt: |
          Sample File for Secret
      
apiVersion: v1 
kind: Pod
metadata:
  name: sample-pod-sec
spec:
  containers:
    - name: busybox
      image: busybox:1.32
      tty: true
      env:
        - name: KEY1
          valueFrom:
            secretKeyRef:
              name: sample-sec
              key: K1
      volumeMounts:
        - name: secdata
          mountPath: "/etc/sec-data"
  volumes:
    - name: secdata
      secret:
        secretName: sample-sec
        items:
          - key: message.txt
            path: message.txt 
  • kubectl exec -it sample-pod-sec -- sh
    
    / # ls -l /etc/sec-data/
    total 0
    lrwxrwxrwx    1 root     root            18 Dec 24 10:08 message.txt -> ..data/message.txt
    
    / # env
    KEY1=V1
    KUBERNETES_PORT=tcp://10.43.0.1:443
    ...
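Since the stored values are only Base64-encoded (see the Secret vs ConfigMap note above), they are easy to read back; a quick check, assuming GNU base64 on the client:

kubectl get secret sample-sec -o jsonpath='{.data.K1}' | base64 -d
# prints: V1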
    

Namespace

apiVersion: v1
kind: Namespace
metadata:
  name: sample-ns
  • OR kubectl create namespace sample-ns
  • kubectl config set-context $(kubectl config current-context) --namespace=sample-ns - updates the default namespace for kubectl
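Alternatively, without changing the default context, most kubectl commands accept the namespace flag directly (sketches; FILE is a placeholder):

kubectl get pods -n sample-ns
kubectl apply -f FILE --namespace=sample-ns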

Resource Quota

  • Provides constraints that limit aggregate resource consumption per namespace
  • When a quota is enabled for compute resources (cpu, memory), every new Pod in the namespace must specify requests/limits for them (see the Pod sketch after the quota manifest)
  • [REF]
apiVersion: v1
kind: ResourceQuota
metadata:
  name: sample-rq
  namespace: sample-ns
spec:
  hard:
    pods: "10"
    requests.cpu: "4"
    requests.memory: 5Gi
    limits.cpu: "4"
    limits.memory: 5Gi
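With the quota active, a Pod created in sample-ns must declare its compute resources or it will be rejected; a minimal sketch (name and values are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: sample-pod-rq        # hypothetical name
  namespace: sample-ns
spec:
  containers:
    - name: busybox
      image: busybox:1.32
      tty: true
      resources:
        requests:
          cpu: 100m
          memory: 128Mi
        limits:
          cpu: 200m
          memory: 256Mi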