This article documents the steps I took to install a Kubernetes cluster on CentOS 7.3 with kubeadm.
Prerequisites
System Configuration
We suppose we have four servers ready: one as the k8s master and the other three as k8s nodes.
Configure local DNS in /etc/hosts: map each IP address to a host name.
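As a sketch, assuming the master is named loadbalancer at 192.168.1.102 (the node host names and IPs below are illustrative, not mandated):

```shell
# Append example name/IP mappings to /etc/hosts on every machine
# (node host names and IPs below are placeholders to adapt)
cat <<EOF >> /etc/hosts
192.168.1.102 loadbalancer
192.168.1.103 node1
192.168.1.104 node2
192.168.1.105 node3
EOF
```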
Environment Preparation
We should have at least two servers with CentOS 7.3 pre-installed, all in the same subnet.
Optionally, set up a proxy if you work behind a corporate network, as kubeadm uses the system proxy to download components. Put the following settings in the $HOME/.bashrc file. Be mindful to put the master host's IP address in the no_proxy list.
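A minimal sketch of those settings; the proxy address is a placeholder, and 192.168.1.102 stands in for the master's IP:

```shell
# Proxy settings for $HOME/.bashrc -- proxy host/port are placeholders
export http_proxy=http://proxy.example.com:8080
export https_proxy=http://proxy.example.com:8080
# Keep local traffic and the master host's IP out of the proxy
export no_proxy=localhost,127.0.0.1,192.168.1.102
```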
And check your proxy settings:
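For example:

```shell
# Show whichever proxy variables are currently exported
env | grep -i proxy
```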
Install kubeadm and kubelet on each of your hosts
Add the k8s repo to the yum source list. We suppose you run the commands as root. For non-root users, please wrap each command with sudo bash -c '<command_to_run>'
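The repo definition below follows the official kubeadm install guide of that period (Google's packages.cloud.google.com yum repo):

```shell
# Add the Kubernetes yum repo (definition per the kubeadm install guide)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
```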
Install kubeadm and kubelet on each host.
By default, the following command installs the latest kubelet and kubeadm. If you want to install a specific version, you can list the available versions first.
Then install the specific version (e.g. v1.7.5):
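A sketch of both steps, listing the versions and then pinning one (1.7.5 as the example above; package names per the Kubernetes yum repo):

```shell
# List the kubeadm/kubelet versions available in the repo
yum list kubeadm kubelet --showduplicates | sort -r

# Install a pinned version (1.7.5 here) plus the CNI plugins
yum install -y kubeadm-1.7.5 kubelet-1.7.5 kubectl-1.7.5 kubernetes-cni
systemctl enable kubelet && systemctl start kubelet
```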
Install Docker on each of your hosts
For the time being, Docker 1.12.x is still the preferred and verified version that Kubernetes officially supports, according to its documentation. But this thread says that support for Docker 1.13 will be added very soon.
We will install Docker 1.12 for now. Use the following command to set up the repository.
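One way is the legacy Docker yum repo that carried the 1.12.x docker-engine packages (repo definition as Docker published it at the time):

```shell
# Legacy Docker yum repo that shipped docker-engine 1.12.x
cat <<EOF > /etc/yum.repos.d/docker.repo
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF
```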
Make yum cache and check possible Docker 1.12.x versions
Install and start the Docker service
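A sketch of the two steps, with 1.12.6 as an example build:

```shell
# Refresh the yum cache and see which 1.12.x builds are available
yum makecache fast
yum list docker-engine --showduplicates | grep 1.12

# Install one of them (1.12.6 here) and start the service
yum install -y docker-engine-1.12.6
systemctl enable docker && systemctl start docker
```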
Verify if your Docker cgroup driver matches the kubelet config:
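One way to compare the two drivers (the kubelet config path is per the kubeadm 1.7-era packaging):

```shell
# Docker's active cgroup driver
docker info 2>/dev/null | grep -i 'cgroup driver'
# kubelet's configured driver
grep cgroup-driver /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
```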
The default cgroup driver in the kubelet config is systemd. If Docker's cgroup driver is not systemd but cgroupfs, update the cgroup driver in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf to KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs.
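A sketch of the edit, plus the restart it requires:

```shell
# Switch kubelet's cgroup driver from systemd to cgroupfs to match Docker
sed -i 's/--cgroup-driver=systemd/--cgroup-driver=cgroupfs/' \
    /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload
systemctl restart kubelet
```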
Further, if your Docker runs behind a corporate network, set up the proxy in the Docker config:
Then reload the config:
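A sketch using systemd's standard drop-in mechanism for the Docker daemon (the proxy address is a placeholder):

```shell
# Proxy drop-in for the Docker daemon -- proxy host/port are placeholders
mkdir -p /etc/systemd/system/docker.service.d
cat <<EOF > /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:8080" "NO_PROXY=localhost,127.0.0.1"
EOF

# Reload systemd and restart Docker to pick up the proxy
systemctl daemon-reload
systemctl restart docker
```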
Initialize Master
On the master node (load balancer), if you run as root, do
If you run as a normal user, do
If --apiserver-advertise-address is not specified, kubeadm auto-detects the network interface to advertise the master; it is better to set this argument explicitly if there is more than one network interface. --pod-network-cidr specifies the virtual IP range for the third-party network plugin; we use flannel as our network plugin here. Set --kubernetes-version if you want to use a specific Kubernetes version.
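Putting the flags together, a sketch of the init call (the advertise address is our example master IP, 10.244.0.0/16 is flannel's default CIDR, and --kubernetes-version is the released flag name):

```shell
kubeadm init --apiserver-advertise-address=192.168.1.102 \
             --pod-network-cidr=10.244.0.0/16 \
             --kubernetes-version=v1.7.5
```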
To start using your cluster, you need to run (as a regular user):
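These are the commands kubeadm itself prints at the end of a successful init:

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```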
By default, your cluster will not schedule pods on the master for security reasons. Remove the master taint if you want to schedule pods on your master and expand the load capacity.
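The taint can be removed with the command from the kubeadm docs:

```shell
# Allow pods to be scheduled on the master
kubectl taint nodes --all node-role.kubernetes.io/master-
```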
Install Flannel Pod Network Plugin
A pod network add-on must be installed so that pods can communicate with each other. Run:
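A sketch using the manifests from the flannel repository as they were laid out at the time (for Kubernetes 1.6+ the RBAC manifest is applied first; pinning a release tag instead of master is safer in practice):

```shell
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel-rbac.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```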
If there is more than one NIC, refer to the flannel issue 39701.
Next, configure Docker with Flannel IP range and settings:
Reload the config:
Check the status of the flannel pods and make sure they are in the Running state:
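For example:

```shell
kubectl get pods --all-namespaces | grep flannel
```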
Join Nodes to Cluster
Get the cluster token on master:
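With kubeadm 1.7 the bootstrap tokens can be listed with:

```shell
kubeadm token list
```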
Run the commands below on each of the nodes:
Replace e5e6d6.6710059ca7130394 with the token obtained from the kubeadm command.
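A sketch of the join call, using the example token above and our example master address (6443 is the default API server port):

```shell
kubeadm join --token e5e6d6.6710059ca7130394 192.168.1.102:6443
```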
Then check whether the nodes joined the cluster successfully.
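From the master, for example:

```shell
kubectl get nodes
```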
Install Dashboard Add-on
Create the dashboard pod:
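A sketch, assuming the manifest path in the dashboard repository at the time:

```shell
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
```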
Since Kubernetes v1.6, its API server uses the RBAC strategy. The kubernetes-dashboard.yaml does not define a valid ServiceAccount. Create a file dashboard-rbac.yaml and bind the account system:serviceaccount:kube-system:default to the ClusterRole cluster-admin:
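A minimal sketch of such a binding (the binding name dashboard-admin is an arbitrary choice):

```shell
# ClusterRoleBinding granting cluster-admin to kube-system's default SA
cat <<EOF > dashboard-rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
EOF
```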
Apply the RBAC rules to the pod, and check the pod state with the kubectl get po --all-namespaces command afterwards.
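For example:

```shell
kubectl create -f dashboard-rbac.yaml
kubectl get po --all-namespaces
```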
Configure the kubernetes-dashboard service to use NodePort:
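One way is to patch the service type (kubectl edit works as well):

```shell
kubectl -n kube-system patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
```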
Then get the NodePort. 32202 in the output is the NodePort. You can now visit the dashboard at http://<master-ip>:<node_port>. In our case, the URL is http://192.168.1.102:32202.
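The port shows up in the service listing:

```shell
kubectl -n kube-system get svc kubernetes-dashboard
```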
Tear Down
First, drain the nodes from the master (or wherever the credentials are configured). Draining does a graceful termination and marks the node as unschedulable.
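A sketch, per the kubeadm teardown docs (<node_name> is a placeholder):

```shell
kubectl drain <node_name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node_name>
```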
Then, on the node to be removed, remove all the configuration files and settings:
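kubeadm provides a single command for this:

```shell
# Revert everything kubeadm init/join did on this host
kubeadm reset
```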
Diagnose
Check services and pods status. kube-system is the default namespace for system-level pods. You may also pass other specific namespaces, or use --all-namespaces to check all namespaces.

```shell
$ kubectl get po,svc -n kube-system
```

This is how the output looks:
```
NAME                                       READY     STATUS    RESTARTS   AGE
po/etcd-loadbalancer                       1/1       Running   0          1d
po/kube-apiserver-loadbalancer             1/1       Running   0          1d
po/kube-controller-manager-loadbalancer    1/1       Running   0          1d
po/kube-dns-2425271678-zj91n               3/3       Running   0          1d
po/kube-flannel-ds-w9dvz                   2/2       Running   0          1d
po/kube-flannel-ds-zn6c4                   2/2       Running   1          1d
po/kube-proxy-m6nvj                        1/1       Running   0          1d
po/kube-proxy-w92kx                        1/1       Running   0          1d
po/kube-scheduler-loadbalancer             1/1       Running   0          1d
po/kubernetes-dashboard-3313488171-tkdtz   1/1       Running   0          1d

NAME                       CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
svc/kube-dns               10.96.0.10      <none>        53/UDP,53/TCP   1d
svc/kubernetes-dashboard   10.102.129.68   <nodes>       80:32202/TCP    1d
```

Check pod logs. Get the pod name from the command above (e.g. kubernetes-dashboard-3313488171-tkdtz). Use -c <container_name> if there is more than one container running in the pod.

```shell
$ kubectl logs <pod_name> -f -n kube-system
```

Run commands in the container. Use -c <container_name> if there is more than one container running in the pod.

Run a single command:

```shell
$ kubectl exec <pod_name> -n <namespace> <command_to_run>
```

Enter the container’s shell:

```shell
$ kubectl exec -it <pod_name> -n <namespace> -- /bin/bash
```

Check Docker logs:

```shell
$ sudo journalctl -u docker.service -f
```

Check kubelet logs:

```shell
$ sudo journalctl -u kubelet.service -f
```