
Kubernetes - Growing the cluster with a CentOS 7 node

In my previous post we saw how to install and configure the Kubernetes master node and dashboard on Ubuntu 18.04. This post is about growing the cluster by joining more nodes to the master. For this setup I am going to use a CentOS 7 VM running in VirtualBox.

[Image: CentOS 7 VM running in VirtualBox]

Installation

First, update CentOS with the latest packages:

[root@drona-child-3 ~]# yum update -y

Install Docker and enable it at startup:

[root@drona-child-3 ~]# yum install docker
[root@drona-child-3 ~]# systemctl enable docker && systemctl start docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

Now add the Kubernetes repository to the yum configuration:

[root@drona-child-3 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
> EOF

Disable SELinux. To disable it permanently, edit the file "/etc/sysconfig/selinux"; otherwise the kube-flannel-xxx pod will go into a crash loop on the next reboot.
After that, install the Kubernetes packages and enable kubelet at startup.

[root@drona-child-3 ~]# setenforce 0
[root@drona-child-3 ~]# yum install -y kubelet kubeadm 
[root@drona-child-3 ~]# systemctl enable kubelet && systemctl start kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
[root@drona-child-3 ~]# 
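Note that setenforce 0 only lasts until the next reboot. A minimal sketch of making the change permanent, shown against a scratch copy of the config so it can be dry-run safely (on the real node, point CONF at /etc/selinux/config, which "/etc/sysconfig/selinux" links to):

```shell
# Make SELinux permissive across reboots by editing its config file.
# Dry-run against a scratch copy; on the real node use /etc/selinux/config.
CONF=/tmp/selinux-config
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$CONF"   # sample stock content
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' "$CONF"
grep '^SELINUX=' "$CONF"   # now reads SELINUX=permissive
```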

Add host entries for name resolution:

[root@drona-child-3 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.5 drona-child-1
192.168.0.4 drona-child-3
[root@drona-child-3 ~]# 

Disable swap

[root@drona-child-3 ~]# swapoff  -a
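kubelet refuses to run with swap enabled, and swapoff -a only lasts until the next reboot. A sketch of making it permanent by commenting out the swap entry in /etc/fstab, demonstrated on a scratch copy with sample content so it is safe to dry-run (on the real node, use /etc/fstab itself):

```shell
# Comment out swap entries so swap stays off after a reboot.
# Dry-run against a scratch copy with illustrative entries.
FSTAB=/tmp/fstab-demo
printf '%s\n' 'UUID=abcd / xfs defaults 0 0' \
              '/dev/mapper/centos-swap swap swap defaults 0 0' > "$FSTAB"
sed -i '/\sswap\s/s/^/#/' "$FSTAB"
grep swap "$FSTAB"   # the swap line is now commented out
```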

Up to this step, everything is the same as on the Kubernetes master, apart from the CentOS 7-specific commands.

Adding nodes to the cluster

Now join the node to the Kubernetes master using the join command. We saw in the previous post how to retrieve the token and hash in case you didn't note them during the master installation.

[root@drona-child-3 ~]# kubeadm join 192.168.1.5:6443 --token o9an7t.o4bs1up74xjwnol3 --discovery-token-ca-cert-hash sha256:548c922cf4f845f3dc6d7da407516652879c8a5085c87e0322770e1475105591
[preflight] Running pre-flight checks.
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Trying to connect to API Server "192.168.1.5:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.5:6443"
[discovery] Requesting info from "https://192.168.1.5:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.1.5:6443"
[discovery] Successfully established connection with API Server "192.168.1.5:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
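If the CA certificate hash is lost, it can be recomputed on the master from kubeadm's CA file. A small sketch of that calculation (the same formula as in the kubeadm documentation: the SHA-256 digest of the DER-encoded public key; /etc/kubernetes/pki/ca.crt is where kubeadm stores the CA on the master):

```shell
# Recompute the --discovery-token-ca-cert-hash value for a CA certificate.
ca_cert_hash() {
    openssl x509 -pubkey -in "$1" \
      | openssl pkey -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex \
      | sed 's/^.* //'
}
# On the master:
#   ca_cert_hash /etc/kubernetes/pki/ca.crt
# prints the 64-char hex digest; prefix it with "sha256:" for kubeadm join.
```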

Check that the flannel interface has been created; it should have a pod-network IP in the 40.168.x.x range.

[root@drona-child-3 ~]# ip a show flannel.1
5: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 
    link/ether ee:0d:1d:5c:48:6a brd ff:ff:ff:ff:ff:ff
    inet 40.168.1.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::ec0d:1dff:fe5c:486a/64 scope link 
       valid_lft forever preferred_lft forever
[root@drona-child-3 ~]# 

Since the cluster credentials (~/.kube/config) are not configured on the secondary node (drona-child-3), we cannot run kubectl get nodes there.

We have to do all the orchestration activity from the master node. I am connecting to the master node and checking the node status:

vikki@drona-child-1:~$ kubectl get nodes
NAME            STATUS    ROLES     AGE       VERSION
drona-child-1   Ready     master    22h       v1.10.4
drona-child-3   Ready     <none>    1m        v1.10.4
vikki@drona-child-1:~$ 

Deploying a pod to kubernetes cluster

Let's try deploying a pod. I am using the nginx server image. The command below automatically pulls the nginx image from Docker Hub and deploys it as a pod.

vikki@drona-child-1:~$ kubectl run nginx --image nginx
deployment.apps "nginx" created
vikki@drona-child-1:~$ kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
nginx-65899c769f-nllp5   1/1       Running   0          5m

Now we can see the deployment "nginx" is created.

vikki@drona-child-1:~$ kubectl get deployments
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx     1         1         1            0           2s

To see more details about the deployment, use the describe command:

vikki@drona-child-1:~$ kubectl describe deployment nginx 
Name:                   nginx
Namespace:              default
CreationTimestamp:      Fri, 15 Jun 2018 15:13:02 +0530
Labels:                 run=nginx
Annotations:            deployment.kubernetes.io/revision=1
Selector:               run=nginx
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
Pod Template:
  Labels:  run=nginx
  Containers:
   nginx:
    Image:        nginx
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-65899c769f (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  9m    deployment-controller  Scaled up replica set nginx-65899c769f to 1
vikki@drona-child-1:~$ 

Exposing the deployment to internal network

Now nginx is deployed, but the service is not yet exposed. Trying to expose it here fails, because the deployment created from the default nginx image does not have any ports configured.

vikki@drona-child-1:~$ kubectl expose deployment nginx 
error: couldn't find port via --port flag or introspection
See 'kubectl expose -h' for help and examples.
vikki@drona-child-1:~$ 

I am going to export the current configuration of the nginx deployment as a YAML file and add the ports for the nginx deployment to use. (Alternatively, kubectl expose accepts an explicit --port flag, but here we will fix the deployment itself.)
The GIF below shows the addition of these three lines to the YAML file:

        ports:
        - containerPort: 80
          protocol: TCP
vikki@drona-child-1:~$ kubectl get deployment nginx -o yaml > nginx.yaml
vikki@drona-child-1:~$ vim nginx.yaml

[Animation: adding the ports section to nginx.yaml in vim]
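For reference, a minimal sketch of what the relevant part of the edited manifest looks like. The apiVersion shown matches what kubectl printed above (deployment.extensions); the real exported file will also contain many generated fields (status, resourceVersion, and so on) that can be left as they are.

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
          protocol: TCP
```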

Now let's apply the modified YAML file:

vikki@drona-child-1:~$ kubectl apply -f nginx.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment.extensions "nginx" configured

Ignore the warning and wait for the pod to reach "Running" status:

vikki@drona-child-1:~$ kubectl get deployment,pod
NAME                          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/nginx   1         1         1            1           33m

NAME                         READY     STATUS    RESTARTS   AGE
pod/nginx-768979984b-vm74x   1/1       Running   0          29s
vikki@drona-child-1:~$ 

Now try the expose command again.

vikki@drona-child-1:~$ kubectl expose deployment nginx 
service "nginx" exposed
vikki@drona-child-1:~$ 

Get the service status. We can access the nginx service using the cluster IP:

vikki@drona-child-1:~$ kubectl get service nginx 
NAME      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
nginx     ClusterIP   10.104.54.97   <none>        80/TCP    39s
vikki@drona-child-1:~$ 

Get the endpoint details.

vikki@drona-child-1:~$ kubectl get endpoints nginx 
NAME      ENDPOINTS       AGE
nginx     40.168.1.3:80   1m
vikki@drona-child-1:~$ 

To check which node the pod is running on, use the wide output option:

vikki@drona-child-1:~$ kubectl get pod -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP           NODE
nginx-768979984b-vm74x   1/1       Running   0          13m       40.168.1.3   drona-child-3

Now we can go to the respective node and access the nginx service using the cluster IP:

[root@drona-child-3 ~]# curl 10.104.54.97
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@drona-child-3 ~]# 

Scaling the pods

Now let's scale the deployment to 3 replicas:

vikki@drona-child-1:~$ kubectl scale deployment nginx --replicas=3
deployment.extensions "nginx" scaled
vikki@drona-child-1:~$ kubectl get deployment nginx 
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx     3         3         3            3           46m

Now there will be three endpoint IPs, one per pod:

vikki@drona-child-1:~$ kubectl get endpoints nginx 
NAME      ENDPOINTS                                   AGE
nginx     40.168.1.3:80,40.168.1.4:80,40.168.1.5:80   11m
vikki@drona-child-1:~$ 

List the pods and verify it.

vikki@drona-child-1:~$ kubectl get pod -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP           NODE
nginx-768979984b-6d27s   1/1       Running   0          1m        40.168.1.5   drona-child-3
nginx-768979984b-mmgbj   1/1       Running   0          1m        40.168.1.4   drona-child-3
nginx-768979984b-vm74x   1/1       Running   0          13m       40.168.1.3   drona-child-3
vikki@drona-child-1:~$ 

Now delete one of the pods and watch the deployment automatically scale back to 3:

vikki@drona-child-1:~$ kubectl delete pod nginx-768979984b-vm74x 
pod "nginx-768979984b-vm74x" deleted
vikki@drona-child-1:~$ kubectl get pod -o wide
NAME                     READY     STATUS              RESTARTS   AGE       IP           NODE
nginx-768979984b-6d27s   1/1       Running             0          6m        40.168.1.5   drona-child-3
nginx-768979984b-9lddt   0/1       ContainerCreating   0          2s        <none>       drona-child-3
nginx-768979984b-mmgbj   1/1       Running             0          6m        40.168.1.4   drona-child-3
nginx-768979984b-vm74x   0/1       Terminating         0          18m       <none>       drona-child-3
vikki@drona-child-1:~$ 
vikki@drona-child-1:~$ kubectl get pod -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP           NODE
nginx-768979984b-6d27s   1/1       Running   0          6m        40.168.1.5   drona-child-3
nginx-768979984b-9lddt   1/1       Running   0          20s       40.168.1.6   drona-child-3
nginx-768979984b-mmgbj   1/1       Running   0          6m        40.168.1.4   drona-child-3
vikki@drona-child-1:~$ 
vikki@drona-child-1:~$ kubectl get endpoints nginx 
NAME      ENDPOINTS                                   AGE
nginx     40.168.1.4:80,40.168.1.5:80,40.168.1.6:80   17m
vikki@drona-child-1:~$ 

Notice that the cluster IP stays the same even after deleting a pod, so the service remains accessible at the same IP.

vikki@drona-child-1:~$ kubectl get service nginx 
NAME      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
nginx     ClusterIP   10.104.54.97   <none>        80/TCP    22m
vikki@drona-child-1:~$ 

Exposing the deployment to external network

Optional: connect to any of the running containers and inspect the environment to see which IP and port Kubernetes has configured.


[root@drona-child-3 ~]# docker exec -it 8abe54577bc8 bash
root@nginx-768979984b-9lddt:/# env
HOSTNAME=nginx-768979984b-9lddt
NJS_VERSION=1.15.0.0.2.1-1~stretch
NGINX_VERSION=1.15.0-1~stretch
NGINX_PORT_80_TCP=tcp://10.104.54.97:80
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
NGINX_PORT=tcp://10.104.54.97:80
KUBERNETES_PORT=tcp://10.96.0.1:443
PWD=/
HOME=/root
NGINX_SERVICE_PORT=80
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP_PORT=443
NGINX_PORT_80_TCP_ADDR=10.104.54.97
NGINX_PORT_80_TCP_PORT=80
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
TERM=xterm
NGINX_PORT_80_TCP_PROTO=tcp
SHLVL=1
KUBERNETES_SERVICE_PORT=443
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_SERVICE_HOST=10.96.0.1
NGINX_SERVICE_HOST=10.104.54.97
_=/usr/bin/env
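Kubernetes injects these Docker-link-style environment variables for every service visible to the pod. A tiny sketch of how a process inside the container could use them for service discovery (the values are hard-coded here from the output above, purely for illustration, so the snippet can run anywhere):

```shell
# Inside the container these variables are set by the kubelet;
# hard-coded here to mirror the env output above.
NGINX_SERVICE_HOST=10.104.54.97
NGINX_SERVICE_PORT=80
echo "nginx service reachable at ${NGINX_SERVICE_HOST}:${NGINX_SERVICE_PORT}"
# prints: nginx service reachable at 10.104.54.97:80
```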

Let's delete the current service and create a new one exposed to the external network.

vikki@drona-child-1:~$ kubectl delete service nginx 
service "nginx" deleted

The "LoadBalancer" type is used to expose the service on an external IP. If the node has multiple IPs, it is better to specify one explicitly with the --external-ip option.

vikki@drona-child-1:~$ kubectl expose deployment nginx --external-ip=192.168.1.4 --type=LoadBalancer
service "nginx" exposed

Verify the external IP and the port:

vikki@drona-child-1:~$ kubectl get service
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1        <none>        443/TCP        1d
nginx        LoadBalancer   10.106.182.254   192.168.1.4   80:31480/TCP   5s

Now, from the external network, try to access it and check that the default nginx page loads.

[Image: default nginx welcome page in a browser]

Vignesh Ragupathy

Vignesh Ragupathy is a Linux and open-source enthusiast, an electronics and communication engineer, and a hobby photographer. He has over 7 years of IT experience and works as a senior engineer at Ericsson.
