K8s Cluster Installation Notes and Certificate Renewal (v1.20.6, Docker runtime)

2023-10-27

1. Upgrade the system to CentOS 7.9.2009
yum update -y
2. Change the hostname so the machines are easy to tell apart (no special characters; letters plus digits is best)
hostnamectl set-hostname XXXX1 && bash
[root@centos7demo ~]# hostnamectl set-hostname node1 && bash

3. Configure the NIC IP
[root@master1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33
Note: what the settings in /etc/sysconfig/network-scripts/ifcfg-ens33 mean:
NAME=ens33 #NIC name; just keep it the same as DEVICE
DEVICE=ens33 #NIC device name; run ip addr to see yours, it may differ per machine, so use your own
BOOTPROTO=static #static means a static IP address
ONBOOT=yes #bring the network up at boot; must be yes
IPADDR=192.168.40.180 #IP address; must be in the same subnet as your machine
NETMASK=255.255.255.0 #subnet mask; must match your subnet
GATEWAY=192.168.40.2 #gateway; on your PC, open cmd and run ipconfig /all to see it
DNS1=192.168.40.2 #DNS; on your PC, open cmd and run ipconfig /all to see it
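After saving the file, restart the network service so the new address takes effect (on CentOS 7 the legacy network service is still available), for example:
[root@master1 ~]# systemctl restart network
[root@master1 ~]# ip addr show ens33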
4. Check that SELinux is disabled
[root@master1 ~]# getenforce
Disabled #Disabled means SELinux is already off
If it is not disabled, change the config with: sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
Note: after editing the SELinux config file, reboot the machine for the change to take effect.
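If you also want SELinux out of the way immediately, without waiting for a reboot, you can switch it to permissive mode for the current boot (the config-file change above covers reboots):
[root@master1 ~]# setenforce 0
[root@master1 ~]# getenforce
Permissive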
5. Configure the hosts file so the machines can reach each other by hostname. On every machine, add the following three lines to /etc/hosts:
192.168.40.180 master1
192.168.40.181 master2
192.168.40.182 node1
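One convenient way to append these entries on every node is a heredoc (a small sketch; substitute your own IPs):
cat >> /etc/hosts <<EOF
192.168.40.180 master1
192.168.40.181 master2
192.168.40.182 node1
EOF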

6. Set up passwordless SSH between the hosts
a. On master1:
[root@master1 ~]# ssh-keygen -t rsa
[root@master1 ~]# ssh-copy-id master1
[root@master1 ~]# ssh-copy-id master2
[root@master1 ~]# ssh-copy-id node1
On master2:
[root@master2 ~]# ssh-keygen -t rsa
[root@master2 ~]# ssh-copy-id master2
[root@master2 ~]# ssh-copy-id master1
[root@master2 ~]# ssh-copy-id node1
On node1:
[root@node1 ~]# ssh-keygen -t rsa
[root@node1 ~]# ssh-copy-id node1
[root@node1 ~]# ssh-copy-id master1
[root@node1 ~]# ssh-copy-id master2

Test with ssh <hostname>:
[root@master2 ~]# ssh master1
Last login: Fri Oct 13 21:16:02 2023 from node1
[root@master1 ~]#

7. Disable the swap partition:
a. Disable it for the current boot: swapoff -a
b. Comment out the swap mount in /etc/fstab; the change takes effect automatically after a reboot
Q: Why does the swap partition have to be off?
A: Swap lives on disk; when the machine runs low on memory it spills over to the swap partition, whose performance is far below RAM. For performance reasons k8s does not allow swap by default, and kubeadm checks at init time whether swap is off; if it isn't, initialization fails. If you really want to keep the swap partition, you can pass --ignore-preflight-errors=Swap when installing k8s.
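A minimal sketch of both steps together (it comments out any swap line in /etc/fstab rather than deleting it; double-check the file afterwards):
swapoff -a                            # turn swap off for the current boot
sed -i '/ swap / s/^/#/' /etc/fstab   # comment out the swap mount so it stays off after reboot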

8. Adjust kernel parameters
[root@master1 ~]# modprobe br_netfilter
[root@master1 ~]# cat /etc/profile
[root@master1 ~]# echo "modprobe br_netfilter" >> /etc/profile
[root@master1 ~]# cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
[root@master1 ~]# cat /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

Do the same on the other two hosts.

Load the kernel configuration:
[root@master1 ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
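Note that the modprobe line appended to /etc/profile only runs when someone logs in. An arguably more robust alternative (a sketch using systemd's modules-load mechanism, which loads the module at every boot) is:
cat > /etc/modules-load.d/br_netfilter.conf <<EOF
br_netfilter
EOF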

9. Stop firewalld and disable it from starting at boot: systemctl stop firewalld ; systemctl disable firewalld

10. Configure the Aliyun repo sources
Install the rz/sz commands: [root@master1 ~]# yum install lrzsz -y
Install scp: [root@master1 ~]# yum install openssh-clients -y

Back up the base repo files

[root@master1 ~]# cd /etc/yum.repos.d/
[root@master1 yum.repos.d]# mkdir /root/repo.bak
[root@master1 yum.repos.d]# mv * /root/repo.bak/

Download the Aliyun repo files

Upload the CentOS-Base.repo and epel.repo files from the resource bundle to /etc/yum.repos.d/ on master1,
or download them from the internet.
master2 and node1 can be configured the same way.

Alternatively, back up the original yum sources, delete the old files, and copy straight from master1 to master2 and node1:
[root@master1 yum.repos.d]# scp CentOS-Base.repo epel.repo master2:/etc/yum.repos.d
CentOS-Base.repo 100% 2523 3.1MB/s 00:00
epel.repo 100% 1050 1.2MB/s 00:00
[root@master1 yum.repos.d]# scp CentOS-Base.repo epel.repo node1:/etc/yum.repos.d
CentOS-Base.repo 100% 2523 1.5MB/s 00:00
epel.repo

Configure the Aliyun Docker repo

1. Install yum-utils: yum install yum-utils -y
2. Download and install the Docker repo: yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@master1 yum.repos.d]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Loaded plugins: fastestmirror
adding repo from: http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
grabbing file http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo
3. Configure the Aliyun repo needed to install the k8s components (see the sketch below).
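The k8s repo file itself isn't shown above; a commonly used Aliyun kubernetes.repo looks like the sketch below (gpgcheck is disabled here for simplicity; verify the baseurl against the current Aliyun mirror layout yourself):
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF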

11. Configure time synchronization

(1) Install the ntpdate command:
[root@master1 yum.repos.d]# yum install ntpdate -y
(2) Run a sync: [root@master1 yum.repos.d]# ntpdate cn.pool.ntp.org
14 Oct 00:35:14 ntpdate[9954]: no server suitable for synchronization found
(3) Add a cron job that syncs once an hour (same rule on every node):
[root@master1 yum.repos.d]# crontab -e
[root@master1 yum.repos.d]# crontab -l
* */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org
(4) Restart the crond service:
[root@node1 ~]# service crond restart

12. Install the base packages
Run on all three hosts at the same time:
yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel wget vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet ipvsadm

Problem encountered:
warning: /var/cache/yum/x86_64/7/epel/packages/epel-release-7-14.noarch.rpm: Header V4 RSA/SHA256 Signature, key ID 352c64e5: NOKEY
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
GPG key retrieval failed: [Errno 14] curl#37 - "Couldn't open file /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7"
Fix: RPM-GPG-KEY-EPEL-7 is missing; cd into the directory and download it with wget:
cd /etc/pki/rpm-gpg
wget https://archive.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-7
Then run the cleanup command: yum clean all
After that the install works normally.

13. Install the Docker service, docker-ce 20.10.6

Install command: yum install docker-ce-20.10.6 docker-ce-cli-20.10.6 containerd.io -y
Start Docker, enable it at boot, and check its status: systemctl start docker && systemctl enable docker && systemctl status docker

14. Configure the Docker registry mirrors and cgroup driver
[root@master1 ~]# vim /etc/docker/daemon.json

{
"registry-mirrors":["https://w70c62mv.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com","http://qtid6917.mirror.aliyuncs.com", "https://rncxm540.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}

Save and exit, then scp the file to master2 and node1:
scp daemon.json master2:/etc/docker
scp daemon.json node1:/etc/docker

This changes the Docker cgroup driver to systemd (the default is cgroupfs). kubelet defaults to systemd, and the two must match.

[root@master1 ~]# systemctl daemon-reload && systemctl restart docker
[root@master1 ~]# systemctl status docker
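To confirm the driver actually changed after the restart:
[root@master1 ~]# docker info | grep -i "cgroup driver"
 Cgroup Driver: systemd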

15. Install the packages needed to initialize k8s
Run the install command on all three machines:
yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6
[root@master1 ~]# yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6
[root@master2 ~]# yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6
[root@node1 ~]# yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6

Note: what each package does:
kubeadm: a tool that initializes the k8s cluster
kubelet: installed on every node in the cluster; starts Pods
kubectl: used to deploy and manage applications, inspect resources, and create, delete, and update components
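It is also worth enabling kubelet to start on boot now; kubeadm will start it properly during init (until then it crash-loops, which is expected):
systemctl enable kubelet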

16. High availability for the k8s apiserver with keepalived + nginx
1. Install nginx on the primary and backup:
Install nginx and keepalived on master1 and master2
[root@master1 ~]# yum install nginx keepalived -y
[root@master2 ~]# yum install nginx keepalived -y

2. Edit the nginx configuration file. The primary and backup configs must be identical. vim /etc/nginx/nginx.conf and add a four-layer (TCP) load balancer for the two master apiserver components.

You can edit it as below, or back up the original config file and upload a prepared one with rz.
[root@master1 ~]# vim /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

# Four-layer load balancing for the two master apiserver components
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
        server 192.168.40.180:6443 weight=5 max_fails=3 fail_timeout=30s;
        server 192.168.40.181:6443 weight=5 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 16443; # nginx shares the node with the master, so this listen port must not be 6443 or it would conflict with the apiserver
        proxy_pass k8s-apiserver;
    }
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    server {
        listen 80 default_server;
        server_name _;
        location / {
        }
    }
}

Repeat the same steps on master2.

3. keepalived configuration
a. Primary keepalived
[root@master1 ~]# vim /etc/keepalived/keepalived.conf
global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33          # change to your actual NIC name
    virtual_router_id 51     # VRRP route ID; must be unique per instance
    priority 100             # priority; set 90 on the backup server
    advert_int 1             # VRRP heartbeat advertisement interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # virtual IP
    virtual_ipaddress {
        192.168.40.199/24
    }
    track_script {
        check_nginx
    }
}
# vrrp_script: the script that checks nginx's state (decides whether to fail over based on nginx status)
# virtual_ipaddress: the virtual IP (VIP)

[root@master1 ~]# vim /etc/keepalived/check_nginx.sh

#!/bin/bash
# 1. Check whether nginx is alive
counter=$(ps -ef | grep nginx | grep sbin | egrep -cv "grep|$$")
if [ $counter -eq 0 ]; then
    # 2. If it isn't, try to start it
    service nginx start
    sleep 2
    # 3. After waiting 2 seconds, check nginx's state again
    counter=$(ps -ef | grep nginx | grep sbin | egrep -cv "grep|$$")
    # 4. If nginx still isn't alive, stop keepalived so the VIP can fail over
    if [ $counter -eq 0 ]; then
        service keepalived stop
    fi
fi

[root@master1 ~]# chmod +x /etc/keepalived/check_nginx.sh

Do the same on master2, keeping the config file consistent (per the comments above, the backup uses state BACKUP and priority 90).
4. Start the services:
Before starting, install the nginx stream module
[root@master1 ~]# yum install nginx-mod-stream -y
[root@master1 ~]# systemctl daemon-reload

Start nginx and keepalived
[root@master1 ~]# systemctl start nginx keepalived && systemctl enable nginx keepalived
Check the status
[root@master1 ~]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2023-10-14 10:59:10 CST; 16s ago
Main PID: 19607 (keepalived)
CGroup: /system.slice/keepalived.service
├─19607 /usr/sbin/keepalived -D
├─19608 /usr/sbin/keepalived -D
└─19609 /usr/sbin/keepalived -D
5. Check that the VIP was bound successfully
[root@master1 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:81:87:5a brd ff:ff:ff:ff:ff:ff
inet 192.168.40.180/24 brd 192.168.40.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet 192.168.40.199/24 scope global secondary ens33
valid_lft forever preferred_lft forever
inet6 fe80::aaff:e4a0:d160:38d3/64 scope link noprefixroute
valid_lft forever preferred_lft forever
6. Test keepalived:
Stop keepalived on master1; the VIP should fail over to master2
[root@master1 ~]# service keepalived stop
[root@master2]# ip addr
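A quick way to confirm the failover (assuming master2's NIC is also named ens33):
[root@master2 ~]# ip addr show ens33 | grep 192.168.40.199
    inet 192.168.40.199/24 scope global secondary ens33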

17. Initialize the k8s cluster with kubeadm
a. Create the kubeadm-config.yaml file on the master1 node
[root@master1 ~]# cat kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.6
controlPlaneEndpoint: 192.168.40.199:16443
imageRepository: registry.aliyuncs.com/google_containers
apiServer:
  certSANs:
  - 192.168.40.180
  - 192.168.40.181
  - 192.168.40.182
  - 192.168.40.199
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/16
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs

b. Upload the offline image bundle k8simage-1-20-6.tar.gz needed to initialize the cluster to master1, master2 and node1, and load it manually:
[root@master1 ~]# docker load -i k8simage-1-20-6.tar.gz
[root@master2 ~]# docker load -i k8simage-1-20-6.tar.gz
[root@node1 ~]# docker load -i k8simage-1-20-6.tar.gz
Initialize the k8s cluster:
[root@master1]# kubeadm init --config kubeadm-config.yaml --ignore-preflight-errors=SystemVerification
Special note: imageRepository: registry.aliyuncs.com/google_containers is specified so that images are not pulled from sites outside China; kubeadm pulls from k8s.gcr.io by default. Since we loaded the offline images locally, the local images are used first.
mode: ipvs makes kube-proxy run in ipvs mode. If you don't specify ipvs, it defaults to iptables, which is less efficient, so enabling ipvs is recommended in production; the managed k8s offerings on Alibaba Cloud and Huawei Cloud also provide an ipvs mode.

On success the output ends with:
Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.40.199:16443 --token qnmgnl.mk5nisfwa4lbzsc4 \
--discovery-token-ca-cert-hash sha256:99902cb959dda1bb32061bedcc364233a6cc5091e0c5c0832277a44f31abc74f

Set up the kubectl config file; this effectively authorizes kubectl so it can use this certificate to manage the cluster:
[root@master1 ~]# mkdir -p $HOME/.kube
[root@master1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
Check the install status:
[root@master1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master1 NotReady control-plane,master 5m45s v1.20.6

The cluster is still NotReady because no network plugin has been installed yet.
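Before moving on, you can also confirm kube-proxy really is running in ipvs mode with ipvsadm (installed with the base packages in step 12); virtual servers for the service subnet should be listed:
[root@master1 ~]# ipvsadm -Ln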

18. Scale out the k8s cluster: add a master node

Copy master1's certificates to master2.

Create the certificate directories on master2:
[root@master2 ~]# cd /root/ && mkdir -p /etc/kubernetes/pki/etcd && mkdir -p ~/.kube/

Back on master1, send the node certificates over to master2:
[root@master1 pki]# scp ca.crt ca.key master2:/etc/kubernetes/pki/
[root@master1 pki]# scp sa.key sa.pub front-proxy-ca.crt front-proxy-ca.key master2:/etc/kubernetes/pki/
[root@master1 pki]# scp ./etcd/ca.crt master2:/etc/kubernetes/pki/etcd/
[root@master1 pki]# scp ./etcd/ca.key master2:/etc/kubernetes/pki/etcd/

Then, back on master1, get the join command by running: kubeadm token create --print-join-command
[root@master1 pki]# kubeadm token create --print-join-command
kubeadm join 192.168.40.199:16443 --token 5u8ixe.x8kcchoipnuoqtt6 --discovery-token-ca-cert-hash sha256:99902cb959dda1bb32061bedcc364233a6cc5091e0c5c0832277a44f31abc74f

Then run on master2 (a control-plane node, so add --control-plane):
kubeadm join 192.168.40.199:16443 --token 5u8ixe.x8kcchoipnuoqtt6 --discovery-token-ca-cert-hash sha256:99902cb959dda1bb32061bedcc364233a6cc5091e0c5c0832277a44f31abc74f --control-plane --ignore-preflight-errors=SystemVerification
Then run on node1:
kubeadm join 192.168.40.199:16443 --token 5u8ixe.x8kcchoipnuoqtt6 --discovery-token-ca-cert-hash sha256:99902cb959dda1bb32061bedcc364233a6cc5091e0c5c0832277a44f31abc74f --ignore-preflight-errors=SystemVerification

Back on master1, check the cluster: kubectl get nodes
[root@master1 pki]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master1 NotReady control-plane,master 96m v1.20.6
master2 NotReady control-plane,master 37s v1.20.6
node1 NotReady <none> 11m v1.20.6

node1's ROLES is <none>, which means it is a worker node.

You can set node1's ROLES to worker as follows:

[root@master1 ~]# kubectl label node node1 node-role.kubernetes.io/worker=worker
[root@master1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master1 NotReady control-plane,master 111m v1.20.6
master2 NotReady control-plane,master 15m v1.20.6
node1 NotReady worker 26m v1.20.6

Note: all of the cluster's nodes above are still NotReady, which means no network plugin is installed

[root@master1 ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-7f89b7bc75-dqrxj 0/1 Pending 0 112m
coredns-7f89b7bc75-qzc9p 0/1 Pending 0 112m
etcd-master1 1/1 Running 0 112m
etcd-master2 1/1 Running 0 16m
kube-apiserver-master1 1/1 Running 0 112m
kube-apiserver-master2 1/1 Running 0 16m
kube-controller-manager-master1 1/1 Running 1 112m
kube-controller-manager-master2 1/1 Running 0 16m
kube-proxy-dh22b 1/1 Running 0 112m
kube-proxy-mp5xm 1/1 Running 0 27m
kube-proxy-rp972 1/1 Running 0 16m
kube-scheduler-master1 1/1 Running 1 112m
kube-scheduler-master2 1/1 Running 0 16m

19. Install the Kubernetes network component Calico
Upload calico.yaml to master1 and install the Calico network plugin from the yaml file
Install command: [root@master1 ~]# kubectl apply -f calico.yaml
Check the result:
[root@master1 ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-6949477b58-nzh84 1/1 Running 0 70s
calico-node-cggnz 1/1 Running 0 70s
calico-node-fm7rv 1/1 Running 0 70s
calico-node-k28fk 1/1 Running 0 70s
coredns-7f89b7bc75-dqrxj 1/1 Running 0 117m
coredns-7f89b7bc75-qzc9p 1/1 Running 0 117m
etcd-master1 1/1 Running 0 117m
etcd-master2 1/1 Running 0 21m
kube-apiserver-master1 1/1 Running 0 117m
kube-apiserver-master2 1/1 Running 0 21m
kube-controller-manager-master1 1/1 Running 1 117m
kube-controller-manager-master2 1/1 Running 0 21m
kube-proxy-dh22b 1/1 Running 0 117m
kube-proxy-mp5xm 1/1 Running 0 32m
kube-proxy-rp972 1/1 Running 0 21m
kube-scheduler-master1 1/1 Running 1 117m
kube-scheduler-master2 1/1 Running 0 21m
Check the cluster state
[root@master1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master1 Ready control-plane,master 118m v1.20.6
master2 Ready control-plane,master 21m v1.20.6
node1 Ready worker 32m v1.20.6

20. Test whether a pod created in k8s can reach the network

Upload busybox-1-28.tar.gz to the node1 node and load it manually

[root@node1 ~]# docker load -i busybox-1-28.tar.gz
On the master1 node run: kubectl run busybox --image busybox:1.28 --restart=Never --rm -it -- sh
Test the network inside the container:
/ # ping www.baidu.com
PING www.baidu.com (39.156.66.18): 56 data bytes
64 bytes from 39.156.66.18: seq=0 ttl=127 time=39.3 ms

The ping above shows the network is reachable, which means the Calico network plugin was installed correctly

21. Test whether coredns works

[root@master1 ~]# kubectl run busybox --image busybox:1.28 --restart=Never --rm -it -- sh
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes.default.svc.cluster.local
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name: kubernetes.default.svc.cluster.local
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
/ #

22. Extend the k8s certificates
Check the certificate validity period:
openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -text |grep Not
The output below shows the CA certificate is valid for 10 years:
[root@master1 ~]# openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -text |grep Not
Not Before: Oct 14 04:45:12 2023 GMT
Not After : Oct 11 04:45:12 2033 GMT
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text |grep Not
The output below shows the apiserver certificate is valid for 1 year:
[root@master1 ~]# openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text |grep Not
Not Before: Oct 14 04:45:12 2023 GMT
Not After : Oct 13 04:45:12 2024 GMT

Upload the update-kubeadm-cert.sh file from the resource bundle to master1 and master2, then on each node:
1) Make the script executable:
[root@master1 ~]# chmod +x update-kubeadm-cert.sh
2) Run it to extend the certificate validity to 10 years:
[root@master1 ~]# ./update-kubeadm-cert.sh all
3) Do the same on master2:
[root@master2 ~]# chmod +x update-kubeadm-cert.sh
[root@master2 ~]# ./update-kubeadm-cert.sh all

Then, on master1, check that Pods can still be queried; if the query returns data, the certificates were reissued successfully:
kubectl get pods -n kube-system

If the pod information shows up, certificate signing is working.
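To double-check that the renewal took effect, re-run the openssl check, or use kubeadm's built-in report (kubeadm certs check-expiration in this 1.20 release; older releases used kubeadm alpha certs check-expiration):
[root@master1 ~]# openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep Not
[root@master1 ~]# kubeadm certs check-expiration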
