Installing and Deploying a K8s Cluster


Background: a hands-on build of a K8s cluster.


Environment:

  • Two CentOS servers: one as the master, one as the node.

1. Introduction to installing Kubernetes

1.1 How to install Kubernetes

  • Method 1: install Kubernetes with kubeadm
    Pros: you only need to install kubeadm, and it deploys the K8S cluster for you: initializing the cluster, configuring certificates for each component, deploying the cluster network, and so on. Installation is simple.
    Cons: because you don't install everything step by step yourself, your understanding of K8S stays shallower, and when a component misbehaves it is harder to fix it yourself.
  • Method 2: binary installation
    Pros: you build every piece of the K8S cluster by hand, so everything is transparent, you learn K8S in much greater detail, and errors are quick to locate and verify.
    Cons: the installation is tedious and error-prone.

2. Prerequisites for installing Kubernetes

2.1 Component versions

docker 20.10.6
kubeadm 1.19.5
kubelet 1.19.5
kubectl 1.19.5

2.2 Cluster machines

kube-master:192.168.10.224
kube-node1:192.168.10.222

2.3 Hostnames

Set a permanent hostname on each machine, then log in again.

hostnamectl set-hostname master   # on 192.168.10.224
hostnamectl set-hostname node1    # on 192.168.10.222
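
To confirm the change took effect (a quick sanity check; run it after logging back in):

hostname
hostnamectl status | grep 'Static hostname'   # expect: Static hostname: master (on the master)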

2.4 Map hostnames to IPs

Configure the /etc/hosts file on each machine:

$ cat << eof >> /etc/hosts
192.168.10.224 master
192.168.10.222 node1
eof
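
To verify the mapping, each host should now be able to reach the other by name:

ping -c 1 node1    # from the master
ping -c 1 master   # from node1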

2.5 Synchronize system time

Configure time synchronization on each machine:

$ yum -y install ntpdate
$ sudo ntpdate cn.pool.ntp.org
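
ntpdate is a one-shot sync; to keep the clocks aligned over time, one option is a cron entry like the sketch below (chronyd or ntpd would be the more robust long-term choice):

$ echo '*/30 * * * * /usr/sbin/ntpdate cn.pool.ntp.org >/dev/null 2>&1' >> /var/spool/cron/root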

2.6 Turn off the firewall

Turn off the firewall on each machine:

① Stop the service and disable it at boot

sudo systemctl stop firewalld; sudo systemctl disable firewalld

② Flush the firewall rules

$ sudo iptables -F && sudo iptables -X && sudo iptables -F -t nat && sudo iptables -X -t nat
$ sudo iptables -P FORWARD ACCEPT 

2.7 Disable the swap partition

1. If swap is enabled, kubelet will fail to start (you can ignore swap being on by setting --fail-swap-on=false), so disable swap on each machine:

$ sudo swapoff -a

2. To keep swap from being mounted again at boot, comment out the corresponding entry in /etc/fstab:

$ sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
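
After both steps, swap should report zero:

$ free -h | grep -i swap
Swap:            0B          0B          0B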

  

2.8 Disable SELinux

1. Disable SELinux, otherwise later K8S mounts may fail with Permission denied:

 setenforce 0

2. Edit the config file so the change survives a reboot:

sed -i s#SELINUX=enforcing#SELINUX=disabled# /etc/selinux/config 
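
Note that setenforce 0 only switches SELinux to Permissive for the current boot; the config-file change takes full effect after a reboot. To check the current state:

getenforce
# expect: Permissive (or Disabled after a reboot)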

3. Installing the K8S cluster with kubeadm

3.1 About kubeadm

GitHub project: https://github.com/kubernetes/kubeadm
What kubeadm does behind the scenes: https://github.com/kubernetes/kubeadm/blob/master/docs/design/design_v1.10.md
The following operations must be performed on both servers!

3.2 Configure the package repositories

3.2.1 Configure the docker-ce repository

  1. Add the docker-ce repository definition
[root@master ~]# wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/docker-ce.repo
  2. Point the repository at the Tsinghua mirror
[root@master ~]# sed -i 's@download.docker.com@mirrors.tuna.tsinghua.edu.cn/docker-ce@g' /etc/yum.repos.d/docker-ce.repo

3.2.2 Configure the Kubernetes repository

[root@master ~]# cat << eof > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
enabled=1
eof
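
gpgcheck=0 skips package signature verification for simplicity. If you would rather keep it enabled, the Aliyun mirror also serves the signing keys; the key URLs below follow the mirror's usual layout, so treat them as an assumption and verify them first:

gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg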

  

3.2.3 Refresh the yum repositories

[root@master ~]# yum clean all
[root@master ~]# yum repolist
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
repo id                                           repo name                                                              status
!base/7/x86_64                                    CentOS-7 - Base - mirrors.aliyun.com                                   10,072
!docker-ce-stable/7/x86_64                        Docker CE Stable - x86_64                                                 112
!epel/x86_64                                      Extra Packages for Enterprise Linux 7 - x86_64                         13,592
!extras/7/x86_64                                  CentOS-7 - Extras - mirrors.aliyun.com                                    476
!kubernetes                                       Kubernetes Repo                                                           666
!updates/7/x86_64                                 CentOS-7 - Updates - mirrors.aliyun.com                                 2,189
repolist: 27,107

  

3.3 Install docker, kubelet, kubeadm, and kubectl

kubelet: manages the pods and containers on its node, maintaining the container lifecycle
kubeadm: the tool that bootstraps the K8S cluster
kubectl: the K8S command-line client
3.3.1 Install

Install the required dependencies and packages:

yum install -y yum-utils device-mapper-persistent-data lvm2
yum -y install docker-ce    # install the stable release
yum install -y kubelet-1.19.5 kubeadm-1.19.5 kubectl-1.19.5
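
To confirm the installed versions match section 2.1:

docker --version
kubeadm version -o short
kubelet --version
kubectl version --client --short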
  

3.4 Start the services

3.4.1 Configure and start the docker service

  1. Add a registry mirror to the docker config file
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF

  2. Start the service
systemctl daemon-reload; systemctl start docker; systemctl enable docker.service
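
To confirm docker is up and the mirror is active:

systemctl is-active docker             # expect: active
docker info | grep -A 1 'Registry Mirrors'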

  3. Confirm that the iptables bridge integration is enabled. It is usually on by default; if either value below is 0, see the sketch after the output.

[root@node1 ~]# cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
1
[root@node1 ~]# cat /proc/sys/net/bridge/bridge-nf-call-iptables
1
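
If either value is 0, the br_netfilter module is probably not loaded; a minimal sketch of enabling it and making the settings survive a reboot:

modprobe br_netfilter
cat << eof > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
eof
sysctl --system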

3.4.2 Configure and start the kubelet service
  1. Edit the config file

[root@master ~]# cat << eof > /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
KUBE_PROXY_MODE=ipvs
eof

  

4. Initialize the Kubernetes master node

Perform all of the following operations on the master server.

4.1 Initialize with kubeadm init

  1. The image download script
cat << 'eof' > get-k8s-images.sh
#!/bin/bash
# Script For Quick Pull K8S Docker Images
# by Hellxz Zhang <hellxz001@foxmail.com>

KUBE_VERSION=v1.19.5
PAUSE_VERSION=3.2
CORE_DNS_VERSION=1.7.0
ETCD_VERSION=3.4.13-0

# pull kubernetes images from hub.docker.com
docker pull kubeimage/kube-proxy-amd64:$KUBE_VERSION
docker pull kubeimage/kube-controller-manager-amd64:$KUBE_VERSION
docker pull kubeimage/kube-apiserver-amd64:$KUBE_VERSION
docker pull kubeimage/kube-scheduler-amd64:$KUBE_VERSION
# pull aliyuncs mirror docker images
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION

# retag to k8s.gcr.io prefix
docker tag kubeimage/kube-proxy-amd64:$KUBE_VERSION  k8s.gcr.io/kube-proxy:$KUBE_VERSION
docker tag kubeimage/kube-controller-manager-amd64:$KUBE_VERSION k8s.gcr.io/kube-controller-manager:$KUBE_VERSION
docker tag kubeimage/kube-apiserver-amd64:$KUBE_VERSION k8s.gcr.io/kube-apiserver:$KUBE_VERSION
docker tag kubeimage/kube-scheduler-amd64:$KUBE_VERSION k8s.gcr.io/kube-scheduler:$KUBE_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION k8s.gcr.io/pause:$PAUSE_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION k8s.gcr.io/coredns:$CORE_DNS_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION k8s.gcr.io/etcd:$ETCD_VERSION

# remove the original tags; the images themselves won't be deleted
docker rmi kubeimage/kube-proxy-amd64:$KUBE_VERSION
docker rmi kubeimage/kube-controller-manager-amd64:$KUBE_VERSION
docker rmi kubeimage/kube-apiserver-amd64:$KUBE_VERSION
docker rmi kubeimage/kube-scheduler-amd64:$KUBE_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION
eof

Make the script executable, then run it:

chmod +x get-k8s-images.sh
./get-k8s-images.sh  
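
The script should leave seven images retagged under the k8s.gcr.io prefix; a quick check:

docker image ls | grep k8s.gcr.io | wc -l   # expect: 7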
  2. Run kubeadm init (it performs many steps, so expect to wait a while)
[root@master ~]# kubeadm init --kubernetes-version=v1.19.5 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap

Explanation:
--kubernetes-version: the Kubernetes version to deploy; 1.19.5 was the newest version available when I downloaded kubeadm
--pod-network-cidr: the CIDR of the pod network
--service-cidr: the CIDR of the service network
--ignore-preflight-errors=Swap/all: ignore the swap error (or all preflight errors)
Note:

  kubeadm needs to pull a set of required images that are hosted on k8s.gcr.io, which may be unreachable from mainland China; so first pull the kube-proxy, kube-scheduler, kube-apiserver, kube-controller-manager, etcd, and pause images from Docker Hub or another registry, and add --ignore-preflight-errors=all to ignore any remaining errors.
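
As an alternative to pre-pulling, kubeadm can be pointed at a domestic registry directly with the --image-repository flag, which replaces the default k8s.gcr.io prefix; a sketch (confirm the mirror carries your target version before relying on it):

kubeadm init --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version=v1.19.5 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12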

  3. After initialization succeeds, create the .kube directory
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.10.224:6443 --token nk4otp.ywpil9hdfcxf56jy \
    --discovery-token-ca-cert-hash sha256:ec0998b50bb079502698acd0aa171c8706a44fa65cc63500ff083105af6feb28 
  4. Enable the service at boot

    Before the cluster was initialized, the kubelet service could not start successfully; now that initialization is complete, it can be started and enabled.

systemctl enable kubelet.service; systemctl start kubelet.service
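
To confirm kubelet is now healthy:

systemctl is-active kubelet               # expect: active
journalctl -u kubelet --no-pager -n 20    # recent logs, useful when it is not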

4.2 Verification

  1. The required images have been pulled
[root@master ~]# docker image ls
REPOSITORY                           TAG        IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy                v1.19.5    6e5666d85a31   5 months ago    118MB
k8s.gcr.io/kube-scheduler            v1.19.5    350a602e5310   5 months ago    45.6MB
k8s.gcr.io/kube-apiserver            v1.19.5    72efb76839e7   5 months ago    119MB
k8s.gcr.io/kube-controller-manager   v1.19.5    f196e958af67   5 months ago    111MB
k8s.gcr.io/etcd                      3.4.13-0   0369cf4303ff   8 months ago    253MB
k8s.gcr.io/coredns                   1.7.0      bfe3a36ebd25   11 months ago   45.2MB
k8s.gcr.io/pause                     3.2        80d28bedfe5d   15 months ago   683kB

  

  2. kube-apiserver is listening on port 6443
[root@master ~]# ss -nutlp | grep 6443
tcp   LISTEN     0      128                   :::6443                              :::*                   users:(("kube-apiserver",pid=1609,fd=3))

  

  3. Query cluster information with kubectl

Check the component status:

[root@master ~]# kubectl get cs
NAME                 STATUS      MESSAGE                                                                                     ERROR
controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused
scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
etcd-0               Healthy     {"health":"true"}

If you hit the Unhealthy status above, edit the static pod manifests:

vim /etc/kubernetes/manifests/kube-controller-manager.yaml
- command:
- kube-controller-manager
- --allocate-node-cidrs=true
- --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
- --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
- --bind-address=127.0.0.1
- --client-ca-file=/etc/kubernetes/pki/ca.crt
- --cluster-cidr=10.244.0.0/16
- --cluster-name=kubernetes
- --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
- --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
- --controllers=*,bootstrapsigner,tokencleaner
- --kubeconfig=/etc/kubernetes/controller-manager.conf
- --leader-elect=true
- --node-cidr-mask-size=24
- --port=0    # delete or comment out this line
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
- --root-ca-file=/etc/kubernetes/pki/ca.crt
- --service-account-private-key-file=/etc/kubernetes/pki/sa.key
- --service-cluster-ip-range=10.96.0.0/12
- --use-service-account-credentials=true
[root@master ~]# vim /etc/kubernetes/manifests/kube-scheduler.yaml
- command:
- kube-scheduler
- --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
- --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
- --bind-address=127.0.0.1
- --kubeconfig=/etc/kubernetes/scheduler.conf
- --leader-elect=true
- --port=0    # delete or comment out this line

Check the component status again (kubelet restarts the static pods automatically once their manifests change):

[root@master ~]# kubectl get cs   
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"} 

Check the cluster nodes (flannel is not deployed yet, so the node shows NotReady):

[root@master ~]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
master    NotReady  master    13m       v1.19.5

List the namespaces created by default:

kubectl get ns
NAME            STATUS    AGE
default         Active    13m
kube-public     Active    13m
kube-system     Active    13m

4.3 Deploy the flannel network plugin

  1. Apply the flannel manifest from GitHub directly with kubectl
[root@master ~]# curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@master ~]# kubectl apply -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

  

  2. The downloaded flannel image is now present
[root@master ~]# docker image ls |grep flannel
quay.io/coreos/flannel               v0.14.0    8522d622299c   2 days ago      67.9MB

  

  3. Verify

① The master node is now Ready

[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   53m     v1.19.5

② Check the flannel pods in the kube-system namespace

[root@master ~]# kubectl get pods -n kube-system |grep flannel
kube-flannel-ds-ggl4w            1/1     Running   0          7m27s
kube-flannel-ds-h4qqn            1/1     Running   0          13m

  

5. Initialize the Kubernetes node

Perform all of the following operations on the node server.

5.1 Initialize with kubeadm join

  1. Initialize the node with the join command printed at the end of kubeadm init on the master
[root@node1 ~]# kubeadm join 192.168.10.224:6443 --token nk4otp.ywpil9hdfcxf56jy --discovery-token-ca-cert-hash sha256:ec0998b50bb079502698acd0aa171c8706a44fa65cc63500ff083105af6feb28  --ignore-preflight-errors=Swap
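
The bootstrap token printed by kubeadm init is valid for 24 hours by default; if it has expired, generate a fresh join command on the master:

[root@master ~]# kubeadm token create --print-join-command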

5.2 Verify that the cluster initialized successfully

  1. Check the images on the node
[root@node1 ~]# docker image ls  
REPOSITORY                           TAG        IMAGE ID       CREATED         SIZE
quay.io/coreos/flannel               v0.14.0    8522d622299c   2 days ago      67.9MB
k8s.gcr.io/kube-proxy                v1.19.5    6e5666d85a31   5 months ago    118MB
k8s.gcr.io/kube-scheduler            v1.19.5    350a602e5310   5 months ago    45.6MB
k8s.gcr.io/kube-controller-manager   v1.19.5    f196e958af67   5 months ago    111MB
k8s.gcr.io/kube-apiserver            v1.19.5    72efb76839e7   5 months ago    119MB
k8s.gcr.io/etcd                      3.4.13-0   0369cf4303ff   8 months ago    253MB
k8s.gcr.io/coredns                   1.7.0      bfe3a36ebd25   11 months ago   45.2MB
k8s.gcr.io/pause                     3.2        80d28bedfe5d   15 months ago   683kB

  

  2. Once the node has downloaded its images and finished initializing, verify from the master
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   53m     v1.19.5
node1    Ready    <none>   3m25s   v1.19.5
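
The ROLES column shows <none> for worker nodes. This is purely cosmetic, but if you want a role label there, the key below follows the common node-role convention (optional):

[root@master ~]# kubectl label node node1 node-role.kubernetes.io/worker=worker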

  

  3. On the master, check the kube-system pods running on the node
[root@master ~]# kubectl get pods -n kube-system -o wide |grep node
kube-flannel-ds-ggl4w            1/1     Running   0          2m55s   192.168.10.222   node1    <none>           <none>
kube-proxy-tjsdm                 1/1     Running   0          2m55s   192.168.10.222   node1    <none>           <none>