Preface:

Before diving in, let's look at the tangled history of Kubernetes, Docker, and containerd.

First, what does each of them do?

Docker: a complete container platform used to build images, run containers, package and publish, and so on.

containerd: the low-level engine split out of Docker that focuses solely on running containers.

Kubernetes: a container orchestrator, responsible for scheduling containers, scaling replicas, service discovery, and more.

In plain terms: Docker builds the car (packages the image), drives the car (runs the container), and advertises it (pushes the image); containerd is the low-level engine that only "drives the car", with no part in building or advertising; Kubernetes is the traffic controller, deciding which car (container) runs on which road (node) and how many cars are needed.

Key milestones: early container work was all based on Docker, and Kubernetes also used Docker to start containers. Later, Docker Inc. split containerd out and donated it to the CNCF, where it became a lightweight engine dedicated to running containers. Kubernetes announced the deprecation of dockershim in v1.20 and removed it in v1.24, switching fully to containerd / CRI-O, with image pulls handled through the CRI.

Overview:

  1. Environment preparation

  2. Container runtime (containerd) preparation

  3. Kubernetes cluster deployment

1. Environment Preparation

1.1 Operating System

Ubuntu 22.04

1.2 Host Hardware

CPU | Memory | Role   | Hostname
2C  | 8G     | master | k8s-master01
2C  | 8G     | worker | k8s-worker01
2C  | 8G     | worker | k8s-worker02

1.3 Host Configuration

  1. Hostname configuration

This deployment uses three hosts: one master node named k8s-master01, and two worker nodes named k8s-worker01 and k8s-worker02.

## master node
hostnamectl set-hostname k8s-master01
## worker01 node
hostnamectl set-hostname k8s-worker01
## worker02 node
hostnamectl set-hostname k8s-worker02
  2. Host IP address configuration

## k8s-master01, IP address 192.168.0.160
cat /etc/netplan/50-cloud-init.yaml 
network:
  ethernets:
    ens33:
      addresses:
      - 192.168.0.160/24
      routes:
        - to: default
          via: 192.168.0.1
      nameservers:
        addresses:
        - 223.5.5.5
        - 114.114.114.114
  version: 2

## k8s-worker01, IP address 192.168.0.161
cat /etc/netplan/50-cloud-init.yaml 
network:
  ethernets:
    ens33:
      addresses:
      - 192.168.0.161/24
      routes:
        - to: default
          via: 192.168.0.1
      nameservers:
        addresses:
        - 223.5.5.5
        - 114.114.114.114
  version: 2

## k8s-worker02, IP address 192.168.0.162
cat /etc/netplan/50-cloud-init.yaml 
network:
  ethernets:
    ens33:
      addresses:
      - 192.168.0.162/24
      routes:
        - to: default
          via: 192.168.0.1
      nameservers:
        addresses:
        - 223.5.5.5
        - 114.114.114.114
  version: 2
  3. Hostname-to-IP resolution

All cluster hosts need this.

vim /etc/hosts
192.168.0.160 k8s-master01
192.168.0.161 k8s-worker01
192.168.0.162 k8s-worker02
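The same entries can also be appended with a short, re-runnable script instead of editing by hand. A minimal sketch, run here against a demo file (TARGET would be /etc/hosts on the real nodes):

```shell
## Append each cluster entry only when it is missing, so the script
## is safe to re-run. TARGET is a demo file here; use /etc/hosts on
## the real nodes.
TARGET=/tmp/hosts.demo
: > "$TARGET"
for entry in \
    "192.168.0.160 k8s-master01" \
    "192.168.0.161 k8s-worker01" \
    "192.168.0.162 k8s-worker02"; do
  grep -qF "$entry" "$TARGET" || echo "$entry" >> "$TARGET"
done
cat "$TARGET"
```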
  4. Firewall configuration

All cluster hosts need this.

ufw disable 
Firewall stopped and disabled on system startup
  5. Disable SELinux

All cluster hosts need this. Note that Ubuntu 22.04 uses AppArmor rather than SELinux, so the command below only applies to RHEL-family systems.

setenforce 0
  6. Time synchronization

All hosts need this. A minimal install must add the ntpdate package first; on Ubuntu 22.04 the built-in systemd-timesyncd (timedatectl set-ntp true) is an alternative.

## crontab -l 
*/2 * * * *  /sbin/ntpdate ntp1.aliyun.com  &>/dev/null
  7. Enable IP forwarding and bridge filtering

All hosts need this.

## Add the bridge-filter and kernel-forwarding module config
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

## Set the required sysctl parameters; they persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
## Apply the sysctl parameters without rebooting
sudo sysctl --system
## Verify the modules are loaded and the parameters are set
lsmod | grep br_netfilter
lsmod | grep overlay
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
  8. Disable swap

All hosts need this.

## temporary
swapoff -a
## permanent: comment out the swap entry
vim /etc/fstab
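The permanent change can also be scripted: comment out any fstab line whose type field is swap. A sketch, demonstrated on a demo copy (point sed at /etc/fstab on the real hosts):

```shell
## Demo fstab with a typical root and swap entry; on a real host,
## operate on /etc/fstab directly.
FSTAB=/tmp/fstab.demo
printf '%s\n' \
  'UUID=1234-abcd / ext4 defaults 0 1' \
  '/swap.img none swap sw 0 0' > "$FSTAB"
## Prefix swap entries with '#' so they are ignored after reboot.
sed -ri '/\sswap\s/s/^[^#]/#&/' "$FSTAB"
cat "$FSTAB"
```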

2. Container Runtime: containerd

2.1 containerd Setup

  1. Fetch the containerd release bundle

wget https://github.com/containerd/containerd/releases/download/v1.7.0/cri-containerd-cni-1.7.0-linux-amd64.tar.gz
## upload and extract
tar xf cri-containerd-cni-1.7.0-linux-amd64.tar.gz  -C / 
## verify
containerd --version
  2. Generate and modify the containerd config file

## create the directory
mkdir /etc/containerd
## generate the default config, then edit it
containerd config default > /etc/containerd/config.toml
vim /etc/containerd/config.toml
sandbox_image = "registry.k8s.io/pause:3.8"  ## change this value to registry.aliyuncs.com/google_containers/pause:3.9
...
SystemdCgroup = true  ## default is false; set to true for the systemd cgroup driver
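The two edits above can also be applied non-interactively with sed, assuming the stock containerd 1.7 default config. A sketch, demonstrated on a demo copy (use /etc/containerd/config.toml on the real host):

```shell
## Demo copy containing the two relevant default lines.
CONFIG=/tmp/config.toml.demo
cat > "$CONFIG" <<'EOF'
    sandbox_image = "registry.k8s.io/pause:3.8"
            SystemdCgroup = false
EOF
## Point the sandbox (pause) image at the Aliyun mirror.
sed -i 's|registry.k8s.io/pause:3.8|registry.aliyuncs.com/google_containers/pause:3.9|' "$CONFIG"
## Enable the systemd cgroup driver, matching the kubelet.
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' "$CONFIG"
cat "$CONFIG"
```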
  3. Start containerd and enable it at boot

systemctl enable --now containerd

2.2 runc Setup

  1. Install runc

wget https://github.com/opencontainers/runc/releases/download/v1.1.10/runc.amd64

install -m 755 runc.amd64 /usr/local/sbin/runc
## run runc; if it prints its help text, it is working
runc

3. Kubernetes Cluster Deployment

  1. Prepare the Kubernetes package repository

## Using the Aliyun mirror here; substitute another if you prefer
apt-get update && apt-get install -y apt-transport-https
## make sure the keyring directory exists
mkdir -p /etc/apt/keyrings
curl -fsSL https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.28/deb/Release.key |
    gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.28/deb/ /" |
    tee /etc/apt/sources.list.d/kubernetes.list
apt-get update
  2. Install the Kubernetes packages

## list the available 1.28 versions
apt-cache madison kubeadm | grep 1.28

## install a pinned version and hold the packages against accidental upgrades
apt-get install -y kubelet=1.28.0-1.1 kubeadm=1.28.0-1.1 kubectl=1.28.0-1.1
apt-mark hold kubelet kubeadm kubectl
  3. Initialize the cluster with kubeadm

kubeadm init --kubernetes-version=v1.28.0  --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.0.160   --image-repository=registry.aliyuncs.com/google_containers  --cri-socket=unix:///run/containerd/containerd.sock

--- parameter notes
kubernetes-version:
  the Kubernetes version to deploy
pod-network-cidr:
  the pod network CIDR
apiserver-advertise-address:
  the address the apiserver advertises (the master's IP)
image-repository:
  the registry to pull control-plane images from
cri-socket:
  the container runtime's CRI socket
  4. Copy the admin kubeconfig for managing the cluster

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
## verify access
kubectl get nodes
  5. Install the network plugin (flannel)

wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

## flannel's pod network must match the CIDR used at init time
[root@k8s-master ~]# egrep Network kube-flannel.yml 
      "Network": "10.244.0.0/16",
      hostNetwork: true
## apply the manifest
kubectl apply -f kube-flannel.yml
  6. Enable kubectl auto-completion

echo "source <(kubectl completion bash)" >> ~/.bashrc && source ~/.bashrc
  7. Join the worker nodes to the cluster

## print the join command on the master node
kubeadm token create --print-join-command
## then run the printed kubeadm join command on each worker node

4. Configure a Private (HTTP) Registry in containerd

Modify the following section of /etc/containerd/config.toml; the default is:

    [plugins."io.containerd.grpc.v1.cri".registry]
      config_path = ""

      [plugins."io.containerd.grpc.v1.cri".registry.auths]

      [plugins."io.containerd.grpc.v1.cri".registry.configs]

      [plugins."io.containerd.grpc.v1.cri".registry.headers]

      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]

For example, to add a private registry at 192.168.0.77:32237:

    [plugins."io.containerd.grpc.v1.cri".registry]
      config_path = ""

      [plugins."io.containerd.grpc.v1.cri".registry.auths]

      [plugins."io.containerd.grpc.v1.cri".registry.configs]
        [plugins."io.containerd.grpc.v1.cri".registry.configs."192.168.0.77:32237".tls]
          insecure_skip_verify = true
      [plugins."io.containerd.grpc.v1.cri".registry.headers]

      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."192.168.0.77:32237"]
          endpoint = ["http://192.168.0.77:32237"]
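As an alternative to editing the mirrors section above, containerd 1.5+ also supports a per-registry hosts.toml layout: set config_path = "/etc/containerd/certs.d" in config.toml and drop one file per registry. A sketch under that assumption, written here to a demo directory (the real location would be /etc/containerd/certs.d):

```shell
## Demo directory; on the real host use /etc/containerd/certs.d and set
## config_path in /etc/containerd/config.toml accordingly.
REG=192.168.0.77:32237
CERTS_D=/tmp/certs.d.demo
mkdir -p "$CERTS_D/$REG"
cat > "$CERTS_D/$REG/hosts.toml" <<EOF
server = "http://$REG"

[host."http://$REG"]
  capabilities = ["pull", "resolve", "push"]
  skip_verify = true
EOF
cat "$CERTS_D/$REG/hosts.toml"
```

This keeps registry settings out of config.toml, so adding or removing a registry does not require regenerating the main config.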

Restart containerd:

systemctl restart containerd

5. Supplement

In real enterprise environments, containerd alone is not always the most convenient runtime to operate: it cannot build or push images, and teams often prefer managing pods through Docker's familiar tooling (docker logs, docker ps, docker build). Since recent Docker releases bundle containerd anyway, you can simply install Docker instead. In one sentence: containerd speaks the machine's language and Docker speaks the human's; Kubernetes talks to the machine, you talk to Docker, and cri-dockerd sits in between.

5.1 Install Docker

Install Docker from your distribution's repository (for example, apt-get install -y docker.io on Ubuntu), then configure the daemon:

[root@k8s-master ~]# cat /etc/docker/daemon.json 
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": [
    "https://registry.docker-cn.com",
    "https://docker.mirrors.ustc.edu.cn",
    "https://hub-mirror.c.163.com",
    "https://mirror.ccs.tencentyun.com",
    "https://registry.aliyuncs.com"
  ],
  "insecure-registries": ["192.168.0.77:32237"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
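A malformed daemon.json prevents dockerd from starting, so it is worth validating the JSON before restarting Docker. A small sketch, demoed here on a copy (point it at /etc/docker/daemon.json on the real host and run systemctl restart docker only if validation passes):

```shell
## Demo copy of a daemon.json fragment; on the real host validate
## /etc/docker/daemon.json itself before restarting dockerd.
DAEMON_JSON=/tmp/daemon.json.demo
cat > "$DAEMON_JSON" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "insecure-registries": ["192.168.0.77:32237"]
}
EOF
python3 -m json.tool "$DAEMON_JSON" > /dev/null && echo "daemon.json OK"
```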

5.2 Install cri-dockerd

Prefer the latest release over whatever version a tutorial happens to pin.

[root@k8s-master ~]# wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.16/cri-dockerd-0.3.16.amd64.tgz
[root@k8s-master ~]# tar -xvf cri-dockerd-0.3.16.amd64.tgz
[root@k8s-master ~]# mv cri-dockerd/cri-dockerd /usr/bin/
[root@k8s-master ~]# chmod +x /usr/bin/cri-dockerd

Create the systemd service unit for cri-docker.service:

[root@k8s-master ~]# cat /etc/systemd/system/cri-docker.service 
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target docker.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint=fd:// --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

Create the systemd socket unit for cri-docker.socket:

[root@k8s-master ~]# cat /etc/systemd/system/cri-docker.socket 
[Unit]
Description=CRI Docker Socket for the API

[Socket]
ListenStream=/var/run/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl enable --now cri-docker.socket
[root@k8s-master ~]# systemctl enable --now cri-docker.service
[root@k8s-master ~]# systemctl status cri-docker.service

5.3 Initialize the Cluster

The --cri-socket parameter selects the CRI endpoint; make sure the /var/run/cri-dockerd.sock socket exists before running this.

[root@k8s-master ~]# kubeadm init   --kubernetes-version=v1.28.0   --pod-network-cidr=10.244.0.0/16   --apiserver-advertise-address=192.168.0.160   --image-repository=registry.aliyuncs.com/google_containers   --cri-socket=unix:///var/run/cri-dockerd.sock   --v=5

If the init hangs, the output stalls around here:

## approximate location
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

This is likely an image pull timeout. Check with crictl ps -a; if crictl itself errors, point it at the cri-dockerd socket first:

[root@k8s-master ~]# cat <<EOF | sudo tee /etc/crictl.yaml
runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
timeout: 30
debug: false
EOF

Force the kubelet to use the Aliyun pause image:

[root@k8s-master ~]# mkdir -p /etc/systemd/system/kubelet.service.d
[root@k8s-master ~]# cat <<EOF | sudo tee /etc/systemd/system/kubelet.service.d/99-pause.conf
[Service]
Environment="KUBELET_EXTRA_ARGS=--pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.10"
EOF
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl restart kubelet