kubeadm HA Cluster (Ubuntu)
**kubeadm is a tool released by the Kubernetes community for quickly deploying a k8s cluster. The deployment requires:**

- Four or more machines running Ubuntu 22.04.4 LTS
- Hardware: 2 GB of RAM or more, 2 CPUs or more, 30 GB of disk or more
- Internet access to pull images; if the servers cannot reach the internet, download the images in advance and import them on each node

# Cluster architecture

| Role | Hostname | IP | OS |
| --- | --- | --- | --- |
| master (virtual, VIP) | VIP | 192.168.1.100 | Ubuntu 22.04.4 LTS |
| master | master1 | 192.168.1.51 | Ubuntu 22.04.4 LTS |
| master | master2 | 192.168.1.52 | Ubuntu 22.04.4 LTS |
| master | master3 | 192.168.1.53 | Ubuntu 22.04.4 LTS |
| node | node1 | 192.168.1.54 | Ubuntu 22.04.4 LTS |
| node | node2 | 192.168.1.55 | Ubuntu 22.04.4 LTS |

# Environment initialization

Configure the public IP on the VPC network (dummy interface):

```
ip link add name enp0s5 type dummy
ip addr add <public IP>/24 dev enp0s5
ip link set dev enp0s5 up
```

Set the hostname on each machine according to the plan:

```
hostnamectl set-hostname master1
hostnamectl set-hostname master2
hostnamectl set-hostname master3
hostnamectl set-hostname node1
hostnamectl set-hostname node2
```

Add hosts entries on all machines:

```
cat >> /etc/hosts << EOF
192.168.1.100 vip
192.168.1.51 master1
192.168.1.52 master2
192.168.1.53 master3
192.168.1.54 node1
192.168.1.55 node2
EOF
```

Disable the firewall:

```
systemctl status ufw
systemctl stop ufw
systemctl disable ufw
systemctl status ufw
```

Disable swap (comment out the swap entry so it stays off after the reboot below):

```
vim /etc/fstab
```

Reboot:

```
reboot
```

Set the time zone:

```
mv /etc/localtime{,.back}
ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
date
```

Enable bridge traffic filtering and forwarding:

```
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
```

Reload the configuration:

```
sysctl --system
```

Load the bridge netfilter module:

```
modprobe br_netfilter
```

Verify that the module is loaded:

```
lsmod | grep br_netfilter
```

# Install Docker

Refresh the package index and install Docker:

```
apt update
```

```
apt -y install docker.io
```

Switch the cgroup driver to systemd:

```
mkdir -p /etc/docker
```

```
cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
```

Reload unit files:

```
systemctl daemon-reload
```

Enable and restart Docker:

```
systemctl enable docker
```

```
systemctl restart docker
```

Check the service:

```
systemctl status docker
```

# Install cluster components

Install the prerequisites:

```
apt-get -y install ca-certificates curl software-properties-common apt-transport-https
```

```
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
```

```
tee /etc/apt/sources.list.d/kubernetes.list <<EOF
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
```

```
apt update
```

Install kubeadm, kubelet and kubectl:

```
apt -y install kubectl=1.21.0-00 kubelet=1.21.0-00 kubeadm=1.21.0-00
```

Pin the versions:

```
apt-mark hold kubelet kubeadm kubectl
```

# Pull the cluster images

List the recommended image versions:

```
kubeadm config images list --kubernetes-version=v1.21.0
```

Pull the required images:

```
docker pull k8s.gcr.io/kube-apiserver:v1.21.0
docker pull k8s.gcr.io/kube-controller-manager:v1.21.0
docker pull k8s.gcr.io/kube-scheduler:v1.21.0
docker pull k8s.gcr.io/kube-proxy:v1.21.0
docker pull k8s.gcr.io/pause:3.4.1
docker pull k8s.gcr.io/etcd:3.4.13-0
docker pull k8s.gcr.io/coredns/coredns:v1.8.0
```

List the images:

```
docker images
```
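If the nodes cannot reach `k8s.gcr.io` directly, a common workaround is to pull the equivalent images from the Aliyun mirror and retag them. A minimal sketch, assuming `registry.aliyuncs.com/google_containers` publishes all of the listed versions and that its CoreDNS image is named `coredns:v1.8.0` rather than `coredns/coredns:v1.8.0`:

```
# Pull each control-plane image from the mirror and retag it to the name kubeadm expects
MIRROR=registry.aliyuncs.com/google_containers
for img in kube-apiserver:v1.21.0 kube-controller-manager:v1.21.0 \
           kube-scheduler:v1.21.0 kube-proxy:v1.21.0 pause:3.4.1 etcd:3.4.13-0; do
    docker pull $MIRROR/$img
    docker tag  $MIRROR/$img k8s.gcr.io/$img
done

# CoreDNS lives under a nested path upstream, so retag it separately
docker pull $MIRROR/coredns:v1.8.0
docker tag  $MIRROR/coredns:v1.8.0 k8s.gcr.io/coredns/coredns:v1.8.0
```

Alternatively, the commented `imageRepository` line in the kubeadm-config.yaml used later points at the same mirror.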
# Configure master1

Install keepalived and related packages on master1:

```
apt install -y conntrack libseccomp2 libtool keepalived
```

Write the keepalived configuration on master1 (adjust the interface name to match your NIC):

```
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
    router_id master1
}
vrrp_script check_haproxy {
    script "/usr/bin/killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 250
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ceb1b3ec013d66163d6ab
    }
    virtual_ipaddress {
        192.168.1.100
    }
    track_script {
        check_haproxy
    }
}
EOF
```

Start the service on master1:

```
systemctl enable keepalived.service
systemctl restart keepalived.service
systemctl status keepalived.service
```

Check the VIP on master1:

```
ip a
```

Install HAProxy on master1:

```
apt install -y haproxy net-tools
```

Configure HAProxy on master1. The backend proxies the three master API servers, and HAProxy listens on port 16443, so port 16443 becomes the cluster entry point:

```
cat > /etc/haproxy/haproxy.cfg << EOF
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http    # Keep HTTP mode for HTTP services
    log                     global
    option                  httplog
    option                  dontlognull
    option                  http-server-close
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxys to the backends
#---------------------------------------------------------------------
frontend kubernetes-apiserver
    mode                 tcp    # TCP mode for Kubernetes API server
    bind                 *:16443
    option               tcplog
    default_backend      kubernetes-apiserver
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode        tcp    # TCP mode for Kubernetes API server
    balance     roundrobin
    server      master1 192.168.1.51:6443 check
    server      master2 192.168.1.52:6443 check
    server      master3 192.168.1.53:6443 check
#---------------------------------------------------------------------
# collection haproxy statistics message
#---------------------------------------------------------------------
listen stats
    bind                 *:1080
    stats auth           admin:awesomePassword
    stats refresh        5s
    stats realm          HAProxy\ Statistics
    stats uri            /admin?stats
EOF
```

Start the service on master1:

```
systemctl enable haproxy
systemctl restart haproxy
systemctl status haproxy
```

Check the port on master1:

```
netstat -lntup|grep haproxy
```
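Besides checking the listening port, the `listen stats` block configured above exposes a status page that shows whether HAProxy can see its backends. A quick sketch, assuming the port 1080 and `admin:awesomePassword` credentials from the config are kept (append `;csv` to the URL for machine-readable output):

```
# Query the HAProxy statistics page defined in the "listen stats" section
curl -s -u admin:awesomePassword 'http://127.0.0.1:1080/admin?stats'
```

At this stage all three `kubernetes-apiserver` backends will report DOWN, which is expected until `kubeadm init` has started an API server.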
# Configure master2

Install keepalived and related packages on master2:

```
apt install -y conntrack libseccomp2 libtool keepalived
```

Write the keepalived configuration on master2 (adjust the interface name to match your NIC):

```
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
    router_id master2
}
vrrp_script check_haproxy {
    script "/usr/bin/killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 200
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ceb1b3ec013d66163d6ab
    }
    virtual_ipaddress {
        192.168.1.100
    }
    track_script {
        check_haproxy
    }
}
EOF
```

Start the service on master2:

```
systemctl enable keepalived.service
systemctl restart keepalived.service
systemctl status keepalived.service
```

Install HAProxy on master2:

```
apt install -y haproxy net-tools
```

Configure HAProxy on master2. The backend proxies the three master API servers, and HAProxy listens on port 16443, so port 16443 becomes the cluster entry point:

```
cat > /etc/haproxy/haproxy.cfg << EOF
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http    # Keep HTTP mode for HTTP services
    log                     global
    option                  httplog
    option                  dontlognull
    option                  http-server-close
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxys to the backends
#---------------------------------------------------------------------
frontend kubernetes-apiserver
    mode                 tcp    # TCP mode for Kubernetes API server
    bind                 *:16443
    option               tcplog
    default_backend      kubernetes-apiserver
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode        tcp    # TCP mode for Kubernetes API server
    balance     roundrobin
    server      master1 192.168.1.51:6443 check
    server      master2 192.168.1.52:6443 check
    server      master3 192.168.1.53:6443 check
#---------------------------------------------------------------------
# collection haproxy statistics message
#---------------------------------------------------------------------
listen stats
    bind                 *:1080
    stats auth           admin:awesomePassword
    stats refresh        5s
    stats realm          HAProxy\ Statistics
    stats uri            /admin?stats
EOF
```

Start the service on master2:

```
systemctl enable haproxy
systemctl restart haproxy
systemctl status haproxy
```

Check the port on master2:

```
netstat -lntup|grep haproxy
```

# Configure master3

Install keepalived and related packages on master3:

```
apt install -y conntrack libseccomp2 libtool keepalived
```

Write the keepalived configuration on master3 (adjust the interface name to match your NIC):

```
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
    router_id master3
}
vrrp_script check_haproxy {
    script "/usr/bin/killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ceb1b3ec013d66163d6ab
    }
    virtual_ipaddress {
        192.168.1.100
    }
    track_script {
        check_haproxy
    }
}
EOF
```

Start the service on master3:

```
systemctl enable keepalived.service
systemctl restart keepalived.service
systemctl status keepalived.service
```

Install HAProxy on master3:

```
apt install -y haproxy net-tools
```

Configure HAProxy on master3. The backend proxies the three master API servers, and HAProxy listens on port 16443, so port 16443 becomes the cluster entry point:

```
cat > /etc/haproxy/haproxy.cfg << EOF
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http    # Keep HTTP mode for HTTP services
    log                     global
    option                  httplog
    option                  dontlognull
    option                  http-server-close
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxys to the backends
#---------------------------------------------------------------------
frontend kubernetes-apiserver
    mode                 tcp    # TCP mode for Kubernetes API server
    bind                 *:16443
    option               tcplog
    default_backend      kubernetes-apiserver
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode        tcp    # TCP mode for Kubernetes API server
    balance     roundrobin
    server      master1 192.168.1.51:6443 check
    server      master2 192.168.1.52:6443 check
    server      master3 192.168.1.53:6443 check
#---------------------------------------------------------------------
# collection haproxy statistics message
#---------------------------------------------------------------------
listen stats
    bind                 *:1080
    stats auth           admin:awesomePassword
    stats refresh        5s
    stats realm          HAProxy\ Statistics
    stats uri            /admin?stats
EOF
```

Start the service on master3:

```
systemctl enable haproxy
systemctl restart haproxy
systemctl status haproxy
```

Check the port on master3:

```
netstat -lntup|grep haproxy
```
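With keepalived running on all three masters (priorities 250/200/150), VIP failover can be exercised before the cluster is initialized. Note that the `check_haproxy` script only lowers the priority by 2, which is not enough to cross the 50-point gaps between masters, so stop keepalived itself to simulate a node failure; a minimal sketch:

```
# On master1: stop keepalived so the MASTER instance disappears
systemctl stop keepalived

# On master2 (next-highest priority): the VIP should show up within a few seconds
ip a | grep 192.168.1.100

# On master1: restart keepalived; priority 250 preempts and reclaims the VIP
systemctl start keepalived
```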
# Initialize master1

Create the kubeadm init configuration file on the master that currently holds the VIP (master1 here):

```
cat > kubeadm-config.yaml << EOF
apiServer:
  certSANs:
    - vip
    - master1
    - master2
    - master3
    - 192.168.1.100
    - 192.168.1.51
    - 192.168.1.52
    - 192.168.1.53
    - 127.0.0.1
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "vip:16443"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
#imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.21.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.1.0.0/16
scheduler: {}
EOF
```

Run the initialization:

```
kubeadm init --config kubeadm-config.yaml
```

Configure the environment variables as prompted:

```
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

Check the cluster status:

```
kubectl get cs
```

Check the node status:

```
kubectl get nodes
```

Check the network status:

```
kubectl get pods -n kube-system
```
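At this point the API server should be reachable through the load-balanced entry point rather than only on master1 itself. A quick check, assuming the `vip` hosts entry added during initialization (the `/version` endpoint is served to unauthenticated clients by default):

```
# Confirm HAProxy forwards 16443 to a live apiserver; this should print version JSON
curl -k https://vip:16443/version

# admin.conf points at the controlPlaneEndpoint (vip:16443), so kubectl also goes through the VIP
kubectl cluster-info
```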
# Initialize master2

Copy the keys and related files from master1 to master2:

```
ssh root@192.168.1.52 mkdir -p /etc/kubernetes/pki/etcd
```

```
scp /etc/kubernetes/admin.conf root@192.168.1.52:/etc/kubernetes
```

```
scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@192.168.1.52:/etc/kubernetes/pki
```

```
scp /etc/kubernetes/pki/etcd/ca.* root@192.168.1.52:/etc/kubernetes/pki/etcd
```

Set up VIP forwarding:

```
iptables -t nat -A OUTPUT -d 192.168.1.100 -j DNAT --to-destination 192.168.1.51
```

Join master2 to the cluster, using the join command that includes the `--control-plane` flag:

```
kubeadm join vip:16443 --token ckf7bs.30576l0okocepg8b --discovery-token-ca-cert-hash sha256:19afac8b11182f61073e254fb57b9f19ab4d798b70501036fc69ebef46094aba --control-plane
```

Check the status:

```
kubectl get node
kubectl get pods --all-namespaces
```

# Initialize master3

Copy the keys and related files from master1 to master3:

```
ssh root@192.168.1.53 mkdir -p /etc/kubernetes/pki/etcd
```

```
scp /etc/kubernetes/admin.conf root@192.168.1.53:/etc/kubernetes
```

```
scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@192.168.1.53:/etc/kubernetes/pki
```

```
scp /etc/kubernetes/pki/etcd/ca.* root@192.168.1.53:/etc/kubernetes/pki/etcd
```

Set up VIP forwarding:

```
iptables -t nat -A OUTPUT -d 192.168.1.100 -j DNAT --to-destination 192.168.1.51
```

Join master3 to the cluster, using the join command that includes the `--control-plane` flag:

```
kubeadm join vip:16443 --token ckf7bs.30576l0okocepg8b --discovery-token-ca-cert-hash sha256:19afac8b11182f61073e254fb57b9f19ab4d798b70501036fc69ebef46094aba --control-plane
```

Check the status:

```
kubectl get node
kubectl get pods --all-namespaces
```

# Initialize the nodes

Print the join command on a master node:

```
kubeadm token create --print-join-command --ttl 0
```

Set up VIP forwarding (public-network setup):

```
iptables -t nat -A OUTPUT -d 192.168.1.100 -j DNAT --to-destination 192.168.1.51
```

Run the join command on each node, without the `--control-plane` flag:

```
kubeadm join vip:16443 --token ckf7bs.30576l0okocepg8b --discovery-token-ca-cert-hash sha256:19afac8b11182f61073e254fb57b9f19ab4d798b70501036fc69ebef46094aba
```

Check the status:

```
kubectl get node
kubectl get pods --all-namespaces
```

# Initialize the network

Kubernetes supports several network plugins such as flannel, calico and canal (pick a plugin version that matches your cluster version).

Install flannel:

```
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
```

Remove flannel:

```
kubectl delete -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
```

Install calico:

```
kubectl apply -f https://projectcalico.docs.tigera.io/archive/v3.21/manifests/calico.yaml
```

Remove calico:

```
kubectl delete -f https://projectcalico.docs.tigera.io/archive/v3.21/manifests/calico.yaml
```

Check the network status:

```
kubectl get pods -n kube-system
```

# Cluster test

Create an nginx pod in the cluster:

```
kubectl create deployment nginx --image=nginx
```

Create a Service to expose the port:

```
kubectl expose deployment nginx --port=80 --type=NodePort
```

Check the resources, then test access at NodeIP:Port:

```
kubectl get pod,svc
```
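To reach the service from outside the cluster, combine any node IP with the NodePort assigned to the Service. A quick sketch, assuming the assigned NodePort happens to be 30080 (yours will differ):

```
# Print the NodePort assigned to the nginx Service
kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}'

# Request the nginx welcome page through node1 (or any other node)
curl http://192.168.1.54:30080
```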
# Cluster reset

Reset the cluster:

```
kubeadm reset
```

Remove the configuration files:

```
rm -rf /root/.kube/
rm -rf /etc/kubernetes/
rm -rf /var/lib/kubelet/
rm -rf /var/lib/dockershim
rm -rf /var/run/kubernetes
rm -rf /var/lib/cni
rm -rf /var/lib/etcd
rm -rf /etc/cni/net.d
```

Flush the firewall rules:

```
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
```

# Cluster expansion

In a Kubernetes cluster, the control-plane components (etcd in particular) should normally run on an odd number of nodes, because etcd uses the Raft protocol for distributed consensus and requires a majority vote to make progress.

- Fault tolerance: an odd number of members keeps a majority available when a node fails. For example, a 3-member cluster tolerates 1 failure and a 5-member cluster tolerates 2.
- Election efficiency: an odd number of members helps avoid split votes during leader election.
- Common configurations: 3 members (suitable for small production environments); 5 members (suitable for medium to large production environments, offering higher fault tolerance). The quorum arithmetic is shown below.
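The rule of thumb above follows directly from Raft's majority requirement: with n members, quorum = ⌊n/2⌋ + 1, and the cluster keeps working as long as no more than n − quorum members are lost.

| Members | Quorum | Failures tolerated |
| --- | --- | --- |
| 3 | 2 | 1 |
| 4 | 3 | 1 |
| 5 | 3 | 2 |

Going from 3 to 4 members raises the quorum without improving fault tolerance, which is why even member counts are avoided.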
Install keepalived, HAProxy and related packages on the new master node:

```
apt install -y conntrack libseccomp2 libtool keepalived haproxy net-tools
```

Write the keepalived configuration on the new master node (adjust the interface name and priority):

```
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
    router_id k8s
}
vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface enp0s5
    virtual_router_id 51
    priority 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ceb1b3ec013d66163d6ab
    }
    virtual_ipaddress {
        154.12.24.229
    }
    track_script {
        check_haproxy
    }
}
EOF
```

Start the service on the new master node:

```
systemctl enable keepalived.service
systemctl restart keepalived.service
systemctl status keepalived.service
```

Update the HAProxy configuration on all master nodes (adding the new master node to the backend):

```
cat > /etc/haproxy/haproxy.cfg << EOF
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http    # Keep HTTP mode for HTTP services
    log                     global
    option                  httplog
    option                  dontlognull
    option                  http-server-close
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxys to the backends
#---------------------------------------------------------------------
frontend kubernetes-apiserver
    mode                 tcp    # TCP mode for Kubernetes API server
    bind                 *:16443
    option               tcplog
    default_backend      kubernetes-apiserver
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode        tcp    # TCP mode for Kubernetes API server
    balance     roundrobin
    server      master1 192.168.1.51:6443 check
    server      master2 192.168.1.52:6443 check
    server      master3 192.168.1.53:6443 check
    server      master4 192.168.1.54:6443 check
#---------------------------------------------------------------------
# collection haproxy statistics message
#---------------------------------------------------------------------
listen stats
    bind                 *:1080
    stats auth           admin:awesomePassword
    stats refresh        5s
    stats realm          HAProxy\ Statistics
    stats uri            /admin?stats
EOF
```

Restart the service on all master nodes:

```
systemctl enable haproxy
systemctl restart haproxy
systemctl status haproxy
```

Check the port on all master nodes:

```
netstat -lntup|grep haproxy
```

Inspect the current API server certificate; the output does not yet include the new master node (run this on an existing master):

```
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text -noout
```

Move the existing certificate and key aside as a backup:

```
mv /etc/kubernetes/pki/apiserver.* /mnt/
```

Update kubeadm-config.yaml, adding the new master node's hostname and IP:

```
vim kubeadm-config.yaml
```

Regenerate the API server certificate:

```
kubeadm init phase certs apiserver --config kubeadm-config.yaml
```

Inspect the certificate again; it now includes the new master node:

```
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text -noout
```

Distribute the new certificate to all existing master nodes:

```
scp /etc/kubernetes/pki/apiserver.* root@master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/apiserver.* root@master3:/etc/kubernetes/pki/
```

Restart kubelet on all master nodes:

```
systemctl restart kubelet
```

Copy the keys and related files from master1 to the new master:

```
ssh root@master4 mkdir -p /etc/kubernetes/pki/etcd
```

```
scp /etc/kubernetes/admin.conf root@master4:/etc/kubernetes
```

```
scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@master4:/etc/kubernetes/pki
```

```
scp /etc/kubernetes/pki/etcd/ca.* root@master4:/etc/kubernetes/pki/etcd
```

Set up iptables VIP forwarding on the new master node:

```
iptables -t nat -A OUTPUT -d <VIP> -j DNAT --to-destination <public IP of the machine holding the VIP>
```

Join the new master node to the cluster:

```
kubeadm join vip:16443 --token u9ca29.0he4ozhlb7rvqgju --discovery-token-ca-cert-hash sha256:241dadc934f6c918009506a3b52eaf4b6d21cccb203b7c6e09c8e85ef2254d89 --control-plane
```

# Troubleshooting

**Problem 1:** `A machine on the public network cannot ping the VIP.`

**Analysis:** `Public servers cannot reach an IP in another subnet directly.`

**Solution:** `Use an iptables DNAT rule to forward traffic destined for the VIP to a healthy master node, and use a check script to keep the rule pointed at a reachable node.`

```
#!/bin/bash
ORIGINAL="192.168.1.100"
TARGET1="192.168.1.51"
TARGET2="192.168.1.52"
TARGET3="192.168.1.53"

# Check whether the first target is reachable
if ping -c 1 $TARGET1 &> /dev/null; then
    # If it is, redirect the VIP to the first target
    iptables -t nat -A OUTPUT -d $ORIGINAL -j DNAT --to-destination $TARGET1
else
    # Check whether the second target is reachable
    if ping -c 1 $TARGET2 &> /dev/null; then
        # If it is, redirect the VIP to the second target
        iptables -t nat -R OUTPUT 1 -d $ORIGINAL -j DNAT --to-destination $TARGET2
    else
        # Otherwise redirect the VIP to the third target
        iptables -t nat -R OUTPUT 1 -d $ORIGINAL -j DNAT --to-destination $TARGET3
    fi
fi
```
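The script only adjusts the DNAT rule once per run, so it has to be executed periodically; note also that the `-R OUTPUT 1` branches assume a rule already exists at position 1 of the nat OUTPUT chain (i.e. the `-A` branch has run at least once). A minimal cron sketch, assuming the script is saved at the hypothetical path `/usr/local/bin/check-vip.sh`:

```
chmod +x /usr/local/bin/check-vip.sh
# Re-evaluate which master is reachable once per minute
echo '* * * * * root /usr/local/bin/check-vip.sh' >> /etc/crontab
```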
**Problem 2:** `kubectl get cs reports the following errors:`

```
kubectl get cs
```

```
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
etcd-0               Healthy     {"health":"true"}
```

**Analysis:** `The default kube-controller-manager.yaml and kube-scheduler.yaml manifests set --port=0, which disables the insecure health-check ports that kubectl get cs probes.`

**Solution:** `Edit the manifests.`

```
vim /etc/kubernetes/manifests/kube-controller-manager.yaml
```

```
...
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
#   - --port=0    (comment out this line)
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
...
```

```
vim /etc/kubernetes/manifests/kube-scheduler.yaml
```

```
...
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
#   - --port=0    (comment out this line)
    image: k8s.gcr.io/kube-scheduler:v1.21.0
    imagePullPolicy: IfNotPresent
...
```

Restart kubelet on all master nodes:

```
systemctl restart kubelet
```

Check the health status again; it has recovered:

```
kubectl get cs
```

```
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
```

Add a virtual NIC (dummy interface) carrying the VIP:

```
ip link add name eth0 type dummy
ip addr add 192.168.1.100/24 dev eth0
ip link set dev eth0 up
```