Kubernetes Cluster Deployment
Table of Contents
- Kubernetes Cluster Deployment
- Resource List
- Base Environment
- 1. Environment Preparation (all three hosts)
- 1.1 Bind hosts entries
- 1.2 Install common utilities
- 1.3 Disable the swap partition
- 1.4 Time synchronization
- 2. Docker Deployment (all three hosts)
- 2.1 Install dependency packages
- 2.2 Add the YUM repository
- 2.3 Refresh the YUM cache and install Docker
- 2.4 Start Docker
- 2.5 Configure the Alibaba Cloud registry mirror
- 2.6 Kernel tuning
- 3. Deploy the Kubernetes Cluster
- 3.1 Configure the Kubernetes YUM repository on all three hosts
- 3.2 Install kubelet, kubeadm, and kubectl on all three hosts
- 3.3 Enable kubelet at boot on all three hosts
- 3.4 Generate the initialization config file on the master
- 3.5 Edit the initialization config file on the master
- 3.6 Pull the required images on the master
- 3.7 Initialize k8s-master
- 3.8 Copy the k8s credentials to the user's home directory on the master
- 3.9 Join the nodes to the cluster
- 3.10 Check node status on k8s-master
- 4. Install the flannel Network Plugin
- 4.1 Pull the flannel images
- 4.2 Install the flannel network plugin
- 4.3 Check node status
Resource List
| OS | Specs | Hostname | IP | Required Software |
| --- | --- | --- | --- | --- |
| CentOS 7.9 | 2C4G | k8s-master | 192.168.93.101 | Docker CE, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, Etcd, kube-proxy |
| CentOS 7.9 | 2C4G | k8s-node01 | 192.168.93.102 | Docker CE, kubectl, kube-proxy, Flannel |
| CentOS 7.9 | 2C4G | k8s-node02 | 192.168.93.103 | Docker CE, kubectl, kube-proxy, Flannel |
Base Environment
# Run on all three hosts: stop the firewall and disable SELinux
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i "s/^SELINUX=.*/SELINUX=disabled/g" /etc/selinux/config
# Set the hostname on each host respectively
hostnamectl set-hostname k8s-master    # on 192.168.93.101
hostnamectl set-hostname k8s-node01    # on 192.168.93.102
hostnamectl set-hostname k8s-node02    # on 192.168.93.103
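- A quick, optional sanity check confirms the firewall is stopped and SELinux is off for the current boot:
systemctl is-active firewalld    # expect: inactive
getenforce                       # expect: Permissive now, Disabled after a reboot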
1. Environment Preparation (all three hosts)
- Before formally deploying the Kubernetes cluster, complete the following preparation. These base-environment steps must be performed on all three hosts (k8s-master, k8s-node01, and k8s-node02); k8s-master is used below for the demonstration
1.1 Bind hosts entries
[root@k8s-master ~]# cat >> /etc/hosts << EOF
192.168.93.101 k8s-master
192.168.93.102 k8s-node01
192.168.93.103 k8s-node02
EOF
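- With the entries in place, each host should resolve its peers by name (a quick, optional check):
[root@k8s-master ~]# ping -c 2 k8s-node01
[root@k8s-master ~]# ping -c 2 k8s-node02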
1.2 Install common utilities
[root@k8s-master ~]# yum -y install vim wget net-tools lrzsz    # a typical tool set; adjust to taste
1.3 Disable the swap partition
[root@k8s-master ~]# swapoff -a
[root@k8s-master ~]# sed -i '/swap/s/^/#/' /etc/fstab    # keep swap off across reboots
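- kubeadm's preflight checks fail while swap is active, so it is worth verifying that it is really off:
[root@k8s-master ~]# free -h | grep -i swap    # every column should read 0B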
1.4 Time synchronization
[root@k8s-master ~]# yum -y install ntpdate    # chrony works just as well
[root@k8s-master ~]# ntpdate ntp.aliyun.com
2. Docker Deployment (all three hosts)
- With the base environment ready, deploy Docker on each of the three hosts, since this Kubernetes setup relies on Docker to run its containers. k8s-master is used below for the demonstration: first install Docker's dependency packages, then point the Docker YUM repository at a domestic mirror, and finally install and start Docker via YUM
2.1 Install dependency packages
- Before installing Docker itself, install the packages it depends on
[root@k8s-master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
2.2 Add the YUM repository
- When installing Docker via YUM, the Alibaba Cloud mirror is recommended, because the upstream Docker YUM repository is no longer reachable from mainland China
[root@k8s-master ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
2.3 Refresh the YUM cache and install Docker
- The Kubernetes version and the Docker version must be mutually compatible; check a compatibility matrix online if unsure. Compatible versions have already been picked for this walkthrough, so just follow along
[root@k8s-master ~]# yum clean all
[root@k8s-master ~]# yum makecache fast
[root@k8s-master ~]# yum -y install docker-ce    # pin a release known to work with Kubernetes v1.18 if needed
2.4 Start Docker
[root@k8s-master ~]# systemctl start docker
[root@k8s-master ~]# systemctl enable docker
2.5 Configure the Alibaba Cloud registry mirror
- Write the registry mirror address into /etc/docker/daemon.json; if the file does not exist, simply create it. As the file extension suggests, the contents of daemon.json must be valid JSON, so write it carefully
[root@k8s-master ~]# cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://u9noolvn.mirror.aliyuncs.com"]
}
EOF
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl restart docker
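- After the restart, confirm that Docker picked up the systemd cgroup driver; the kubelet and the container runtime must agree on it:
[root@k8s-master ~]# docker info | grep -i "cgroup driver"    # expect: Cgroup Driver: systemd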
2.6 Kernel tuning
- Without the kernel parameters below, docker info ends with these warnings:
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
[root@k8s-master ~]# cat >> /etc/sysctl.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF
[root@k8s-master ~]# modprobe br_netfilter
[root@k8s-master ~]# sysctl -p
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
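- modprobe does not persist across reboots; to keep br_netfilter loaded permanently, one common approach is a modules-load.d entry:
[root@k8s-master ~]# echo br_netfilter > /etc/modules-load.d/k8s.conf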
3. Deploy the Kubernetes Cluster
- With the base environment and Docker in place, the Kubernetes cluster can now be deployed with kubeadm
3.1 Configure the Kubernetes YUM repository on all three hosts
- Here, too, the Alibaba Cloud mirror is recommended for the Kubernetes repository
[root@k8s-master ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[k8s]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@k8s-master ~]# yum clean all && yum makecache
3.2 Install kubelet, kubeadm, and kubectl on all three hosts
- yum list kubectl --showduplicates | sort -r lists the available Kubernetes versions
- kubectl is the command-line management tool, kubeadm is the tool that bootstraps the cluster, and kubelet is the per-node agent that manages containers
[root@k8s-master ~]# yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
[root@k8s-master ~]# kubelet --version
Kubernetes v1.18.0
3.3 Enable kubelet at boot on all three hosts
- Nodes to operate on: k8s-master, k8s-node01, k8s-node02
- Freshly installed, kubelet cannot be started with systemctl start kubelet; it only starts successfully after the node has joined the cluster or been initialized as the master
[root@k8s-master ~]# systemctl enable kubelet
3.4 Generate the initialization config file on the master
- kubeadm offers many configuration options, and within a cluster the kubeadm configuration is stored in a ConfigMap. These options can also be written to a configuration file, which makes a complex set of options easier to manage; the kubeadm config command writes them out to a file
[root@k8s-master ~]# kubeadm config print init-defaults > init-config.yaml
W0615 08:50:40.154637 10202 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm config view: view the configuration values currently stored in the cluster
kubeadm config print join-defaults: print the default kubeadm join parameters
kubeadm config images list: list the required images
kubeadm config images pull: pull the images to the local host
kubeadm config upload from-flags: generate the ConfigMap from command-line flags
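- Once the cluster has been initialized (section 3.7), you can confirm that this configuration really is stored in a ConfigMap:
[root@k8s-master ~]# kubectl -n kube-system get cm kubeadm-config -o yaml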
3.5 Edit the initialization config file on the master
- Edit init-config.yaml as shown below. Compared with the generated defaults, advertiseAddress is set to the master's IP, imageRepository points at the Alibaba Cloud mirror, and podSubnet is added for the flannel network
[root@k8s-master ~]# cat init-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.93.101
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.18.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
scheduler: {}
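- podSubnet is set to 10.244.0.0/16 because that is the default Pod network flannel ships with; the relevant fragment of the flannel manifest (kube-flannel.yml) looks like this:
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }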
3.6 Pull the required images on the master
[root@k8s-master ~]# kubeadm config images list --config=init-config.yaml
W0615 09:00:11.145158 10221 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.0
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.0
registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.0
registry.aliyuncs.com/google_containers/kube-proxy:v1.18.0
registry.aliyuncs.com/google_containers/pause:3.2
registry.aliyuncs.com/google_containers/etcd:3.4.3-0
registry.aliyuncs.com/google_containers/coredns:1.6.7
[root@k8s-master ~]# kubeadm config images pull --config=init-config.yaml
W0615 09:00:38.350044 10227 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.18.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.2
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.4.3-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:1.6.7
[root@k8s-master ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.aliyuncs.com/google_containers/kube-proxy v1.18.0 43940c34f24f 4 years ago 117MB
registry.aliyuncs.com/google_containers/kube-apiserver v1.18.0 74060cea7f70 4 years ago 173MB
registry.aliyuncs.com/google_containers/kube-controller-manager v1.18.0 d3e55153f52f 4 years ago 162MB
registry.aliyuncs.com/google_containers/kube-scheduler v1.18.0 a31f78c7c8ce 4 years ago 95.3MB
registry.aliyuncs.com/google_containers/pause 3.2 80d28bedfe5d 4 years ago 683kB
registry.aliyuncs.com/google_containers/coredns 1.6.7 67da37a9a360 4 years ago 43.8MB
registry.aliyuncs.com/google_containers/etcd 3.4.3-0 303ce5db0e90 4 years ago 288MB
3.7 Initialize k8s-master
- A kubeadm installation does not include a network plugin, so immediately after initialization the cluster has no networking: nodes show as "NotReady" on k8s-master, the CoreDNS Pods cannot serve requests, and so on
- If initialization fails, clean up before retrying: kubeadm reset, then rm -rf $HOME/.kube /etc/kubernetes/ /var/lib/etcd/
- Two initialization methods are shown below; method one is the one actually used to initialize k8s-master here
3.7.1 Method One
[root@k8s-master ~]# kubeadm init --config=init-config.yaml
W0615 09:09:02.736872 10425 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.93.101]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.93.101 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.93.101 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0615 09:09:04.854021 10425 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0615 09:09:04.854832 10425 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 15.502401 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.93.101:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:6e241cac434bff98306479bddda1fc912eda3d3f56f73a23373977fef40d5082
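- The bootstrap token in the join command expires after 24 hours (the ttl set in init-config.yaml). If a node needs to join later than that, generate a fresh join command on the master at any time:
[root@k8s-master ~]# kubeadm token create --print-join-command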
3.7.2 Method Two
- Method two passes the same settings as command-line flags instead of using a config file; a typical equivalent invocation:
[root@k8s-master ~]# kubeadm init --apiserver-advertise-address=192.168.93.101 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.18.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16
3.8 Copy the k8s credentials to the user's home directory on the master
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
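- kubectl should now be able to reach the API server; a quick check (on v1.18, kubectl get cs still reports component health):
[root@k8s-master ~]# kubectl cluster-info
[root@k8s-master ~]# kubectl get cs    # scheduler, controller-manager, and etcd-0 should report Healthy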
3.9 Join the nodes to the cluster
- Simply copy the kubeadm join command echoed at the end of the k8s-master initialization, paste it on each node, and press Enter; no other configuration is required
[root@k8s-node01 ~]# kubeadm join 192.168.93.101:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:6e241cac434bff98306479bddda1fc912eda3d3f56f73a23373977fef40d5082
W0615 09:17:57.223102 9376 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@k8s-node02 ~]# kubeadm join 192.168.93.101:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:6e241cac434bff98306479bddda1fc912eda3d3f56f73a23373977fef40d5082
W0615 09:18:01.227038 19243 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
3.10 Check node status on k8s-master
- As mentioned earlier, initializing k8s-master sets up nothing network-related, so the master cannot yet exchange Pod traffic with the nodes and every node reports "NotReady". The nodes added via kubeadm join are nonetheless already visible from k8s-master
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 11m v1.18.0
k8s-node01 NotReady <none> 2m22s v1.18.0
k8s-node02 NotReady <none> 2m18s v1.18.0
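- To confirm that the missing CNI plugin really is the cause, look at a node's Ready condition; its message typically mentions "cni config uninitialized":
[root@k8s-master ~]# kubectl describe node k8s-node01 | grep -i ready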
4. Install the flannel Network Plugin
- flannel is a lightweight network plugin based on virtual networking, with several backend implementations such as VXLAN (overlay) and host-gateway. It creates a cluster-wide network that lets Pods communicate across nodes
4.1 Pull the flannel images
- The flannel images have to be pulled in advance through a proxy, because the domestic Alibaba Cloud registry does not mirror flannel
- If the images will not download, or you do not have the network plugin manifest, leave a comment or send a private message and I will provide them for free
[root@k8s-master ~]# docker pull flannel/flannel:v0.21.5
[root@k8s-master ~]# docker pull flannel/flannel-cni-plugin:v1.1.2
[root@k8s-master ~]# docker images | grep flannel
flannel/flannel v0.21.5 a6c0cb5dbd21 13 months ago 68.9MB
flannel/flannel-cni-plugin v1.1.2 7a2dcab94698 18 months ago 7.97MB
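- The flannel DaemonSet schedules a Pod on every node, so both images must also be present on k8s-node01 and k8s-node02. If the nodes cannot pull them directly, one workable approach is to export and copy them over:
[root@k8s-master ~]# docker save flannel/flannel:v0.21.5 flannel/flannel-cni-plugin:v1.1.2 -o flannel.tar
[root@k8s-master ~]# scp flannel.tar k8s-node01:/root/ && scp flannel.tar k8s-node02:/root/
[root@k8s-node01 ~]# docker load -i flannel.tar    # repeat on k8s-node02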
4.2 Install the flannel network plugin
[root@k8s-master ~]# kubectl apply -f kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
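- The DaemonSet takes a moment to come up on all three nodes; its rollout can be watched:
[root@k8s-master ~]# kubectl -n kube-flannel rollout status ds/kube-flannel-ds
[root@k8s-master ~]# kubectl -n kube-flannel get pods -o wide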
4.3 Check node status
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 23m v1.18.0
k8s-node01 Ready <none> 14m v1.18.0
k8s-node02 Ready <none> 14m v1.18.0
[root@k8s-master ~]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-7ff77c879f-25bzd 1/1 Running 0 23m
coredns-7ff77c879f-wp885 1/1 Running 0 23m
etcd-k8s-master 1/1 Running 0 24m
kube-apiserver-k8s-master 1/1 Running 0 24m
kube-controller-manager-k8s-master 1/1 Running 0 24m
kube-proxy-2tphl 1/1 Running 0 15m
kube-proxy-hqppj 1/1 Running 0 15m
kube-proxy-rfxw2 1/1 Running 0 23m
kube-scheduler-k8s-master 1/1 Running 0 24m
[root@k8s-master ~]# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-flannel kube-flannel-ds-h727x 1/1 Running 0 77s
kube-flannel kube-flannel-ds-kbztr 1/1 Running 0 77s
kube-flannel kube-flannel-ds-nw9pr 1/1 Running 0 77s
kube-system coredns-7ff77c879f-25bzd 1/1 Running 0 24m
kube-system coredns-7ff77c879f-wp885 1/1 Running 0 24m
kube-system etcd-k8s-master 1/1 Running 0 24m
kube-system kube-apiserver-k8s-master 1/1 Running 0 24m
kube-system kube-controller-manager-k8s-master 1/1 Running 0 24m
kube-system kube-proxy-2tphl 1/1 Running 0 15m
kube-system kube-proxy-hqppj 1/1 Running 0 15m
kube-system kube-proxy-rfxw2 1/1 Running 0 24m
kube-system kube-scheduler-k8s-master 1/1 Running 0 24m
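- As a final, optional smoke test, deploy something and confirm it lands on the worker nodes and is reachable; the deployment name here is arbitrary:
[root@k8s-master ~]# kubectl create deployment nginx --image=nginx
[root@k8s-master ~]# kubectl scale deployment nginx --replicas=2
[root@k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
[root@k8s-master ~]# kubectl get pods -o wide
[root@k8s-master ~]# kubectl get svc nginx    # note the NodePort, then curl http://192.168.93.102:<NodePort>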